448 results for swd: Kinect
Abstract:
The paper describes the design and implementation of a novel low-cost virtual rugby decision-making interactive for use in a visitor centre. Original laboratory-based experimental work on decision making in rugby, using a virtual reality headset [1], is adapted for use in a public visitor centre, with consideration given to usability, cost, practicality, and health and safety. The movement of professional rugby players was captured and animated within a virtually recreated stadium. Users then interact with these virtual representations via a low-cost sensor (Microsoft Kinect) and attempt to block them. By retaining the principles of perception and action, egocentric viewpoint, immersion, sense of presence, representative design, and game design, the system delivers an engaging and effective interactive that illustrates the underlying scientific principles of deceptive movement. User testing highlighted the need for usability, system robustness, fair and accurate scoring, an appropriate level of difficulty, and enjoyment.
Abstract:
Current Ambient Intelligence and Intelligent Environment research focuses on interpreting a subject's behaviour at the activity level by logging Activities of Daily Living (ADL) such as eating, cooking, etc. In general, the sensors employed (e.g. PIR sensors, contact sensors) provide low-resolution information. Meanwhile, the expansion of ubiquitous computing allows researchers to gather additional information from different types of sensor, which makes it possible to improve activity analysis. Building on previous research on sitting posture detection, this research further analyses human sitting activity. The aim is to use a non-intrusive, low-cost, pressure-sensor-embedded chair system to recognize a subject's activity from their detected postures. The research has three steps: first, find a hardware solution for low-cost sitting posture detection; second, find a suitable strategy for sitting posture detection; and last, correlate time-ordered sitting posture sequences with sitting activity. The author built a prototype sensing system called IntelliChair for sitting posture detection. Two experiments were conducted to determine the hardware architecture of the IntelliChair system. The prototype work examines sensor selection and the integration of various sensors, and identifies the best option for a low-cost, non-intrusive system. Subsequently, the research applies signal processing theory to explore the frequency characteristics of sitting posture, in order to determine a suitable sampling rate for the IntelliChair system. For the second and third steps, ten subjects were recruited for sitting posture and sitting activity data collection. The former dataset was collected by asking subjects to perform certain pre-defined sitting postures on IntelliChair, and it is used for the posture recognition experiment.
The latter dataset was collected by asking the subjects to perform their normal sitting activity routine on IntelliChair for four hours, and it is used for the activity modelling and recognition experiment. For the posture recognition experiment, two Support Vector Machine (SVM) based classifiers were trained (one for spine postures and the other for leg postures), and their performance evaluated. A Hidden Markov Model is used for sitting activity modelling and recognition, in order to identify the selected sitting activities from sitting posture sequences. After experimenting with candidate sensors, the Force Sensing Resistor (FSR) was selected as the pressure-sensing unit for IntelliChair. Eight FSRs are mounted on the seat and back of a chair to gather haptic (i.e., touch-based) posture information. Furthermore, the research explores an alternative non-intrusive sensing technology (the vision-based Kinect sensor from Microsoft) and finds that the Kinect sensor is not reliable for sitting posture detection due to its joint-drifting problem. Based on the experimental results, a suitable sampling rate for IntelliChair is determined to be 6 Hz. The posture classification results show that the SVM-based classifiers are robust to "familiar" subject data (99.8% accuracy for spine postures and 99.9% for leg postures). When dealing with "unfamiliar" subject data, accuracy is 80.7% for spine posture classification and 42.3% for leg posture classification. Activity recognition achieves 41.27% accuracy over four selected activities (relaxing, playing a game, working with a PC, and watching video). The results show that individual body characteristics and sitting habits influence both sitting posture and sitting activity recognition, suggesting that IntelliChair is suitable for individual use but requires a per-user training stage.
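The pipeline described above (pressure features in, posture label out) can be illustrated with a minimal sketch. The eight-sensor layout, posture labels, and centroid values below are invented for illustration, and a nearest-centroid rule stands in for the SVM classifiers trained in the thesis:

```python
import numpy as np

# Minimal sketch of posture classification from chair pressure sensors.
# The 8-sensor layout, posture labels, and centroid values are invented
# for illustration; a nearest-centroid rule stands in for the two SVM
# classifiers trained in the thesis.

POSTURES = {
    # 8 readings: 4 seat FSRs then 4 backrest FSRs (assumed layout)
    "upright":      np.array([0.8, 0.8, 0.8, 0.8, 0.5, 0.5, 0.5, 0.5]),
    "lean_forward": np.array([0.9, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1]),
    "lean_back":    np.array([0.6, 0.6, 0.6, 0.6, 0.9, 0.9, 0.9, 0.9]),
}

def classify_posture(reading):
    """Return the posture whose reference centroid is nearest the reading."""
    reading = np.asarray(reading, dtype=float)
    return min(POSTURES, key=lambda p: np.linalg.norm(reading - POSTURES[p]))
```

In the thesis the inputs would be the eight FSR readings sampled at 6 Hz, with classifiers trained per subject rather than fixed centroids.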
Abstract:
Final Master's project for obtaining the degree of Master in Mechanical Engineering
Abstract:
Bin picking is a process of great interest to industry, since it enables greater automation, increased production capacity, and reduced costs. It has evolved considerably over the years, and this evolution has led to the adoption of 3D perception systems. The main objective of this work is to develop a bin-picking system using 3D perception alone. The system must be able to determine the position and orientation of objects of different shapes and sizes placed randomly on a work surface. The objects used in the experimental tests are spheres, cylinders, and prisms, since these cover the geometric shapes found in many products subjected to bin picking. After identifying and selecting the object to be picked, the manipulator must autonomously position itself to approach and retrieve it. Data acquisition is performed with a Kinect camera. Of the data received, only the depth information is used, so this work centres on point cloud analysis and processing. The developed system meets the established objectives: it can locate and pick objects in various positions and orientations, and its processing speed is compatible with the application at hand.
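As a toy illustration of the shape-fitting step such a point-cloud pipeline needs (pose estimation of spheres, cylinders, and prisms), the sketch below fits a sphere to depth points by linear least squares. It is a simplified stand-in under stated assumptions, not the method used in the dissertation:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to an N x 3 point cloud.

    Uses the linear form |p|^2 = 2*c.p + (r^2 - |c|^2), so centre c and
    radius r come from one linear solve -- a toy stand-in for the shape
    fitting a bin-picking pipeline needs, not the dissertation's method.
    """
    pts = np.asarray(points, dtype=float)
    A = np.c_[2.0 * pts, np.ones(len(pts))]      # unknowns: cx, cy, cz, d
    b = (pts ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = x[:3], x[3]
    radius = float(np.sqrt(d + center @ center)) # d = r^2 - |c|^2
    return center, radius
```

A real pipeline would first segment the cloud and run the fit inside a RANSAC loop to reject points from neighbouring objects.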
Abstract:
Physical places are given contextual meaning by the objects and people that make up the space. Presence in physical places can be utilised to support mobile interaction by making access to media and notifications on a smartphone easier and more visible to other people. Smartphone interfaces can be extended into the physical world in a meaningful way by anchoring digital content to artefacts, and interactions situated around physical artefacts can provide contextual meaning to private manipulations with a mobile device. Additionally, places themselves are designed to support a set of tasks, and the logical structure of places can be used to organise content on the smartphone. Menus that adapt the functionality of a smartphone can support the user by presenting the tools most likely to be needed just-in-time, so that information needs can be satisfied quickly and with little cognitive effort. Furthermore, places are often shared with people whom the user knows, and the smartphone can facilitate social situations by providing access to content that stimulates conversation. However, the smartphone can disrupt a collaborative environment by alerting the user with unimportant notifications, or by drawing the user into the digital world with attractive content that is only shown on a private screen. Sharing smartphone content on a situated display creates an inclusive and unobtrusive user experience, and can increase focus on a primary task by allowing content to be read at a glance. Mobile interaction situated around artefacts of personal places is investigated as a way to help users access content from their smartphone while managing their physical presence. A menu that adapts to personal places is evaluated and shown to reduce the time and effort of app navigation, and coordinating smartphone content on a situated display is found to support social engagement and the negotiation of notifications.
Improving the sensing of smartphone users in places is a challenge that is outside the scope of this thesis. Instead, interaction designers and developers should be provided with low-cost positioning tools that utilise presence in places, and enable quantitative and qualitative data to be collected in user evaluations. Two lightweight positioning tools are developed with the low-cost sensors that are currently available: the Microsoft Kinect depth sensor allows movements of a smartphone user to be tracked in a limited area of a place, and Bluetooth beacons enable the larger context of a place to be detected. Positioning experiments with each sensor are performed to highlight the capabilities and limitations of current sensing techniques for designing interactions with a smartphone. Both tools enable prototypes to be built with a rapid prototyping approach, and mobile interactions can be tested with more advanced sensing techniques as they become available. Sensing technologies are becoming pervasive, and it will soon be possible to perform reliable place detection in-the-wild. Novel interactions that utilise presence in places can support smartphone users by making access to useful functionality easy and more visible to the people who matter most in everyday life.
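For the Bluetooth-beacon tool, a common way to turn a beacon's signal strength into a rough distance is the log-distance path-loss model. The sketch below is a generic illustration with typical default constants, not parameters from the thesis:

```python
def beacon_distance(rssi, tx_power=-59.0, n=2.0):
    """Estimate distance (metres) from a beacon RSSI reading (dBm).

    Log-distance path-loss model: d = 10 ** ((tx_power - rssi) / (10 * n)),
    where tx_power is the calibrated RSSI at 1 m and n is the environment
    exponent (~2 in free space, larger indoors). The defaults are typical
    values, not constants from the thesis.
    """
    return 10.0 ** ((tx_power - rssi) / (10.0 * n))
```

In practice RSSI is noisy, so readings are usually smoothed over a window before conversion, and the result is treated as a coarse proximity zone rather than an exact position.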
Abstract:
Nowadays, new generations of computers provide the high performance needed to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robot task and an essential prerequisite for robots to move through their environments. Traditionally, mobile robots have used a combination of sensors based on different technologies: lasers, sonars, and contact sensors have typically appeared in mobile robotic architectures. Colour cameras, however, are an important sensor because we want robots to use the same information humans use to sense and move through different environments. Colour cameras are cheap and flexible, but a great deal of work is needed to give robots sufficient visual understanding of scenes. Computer vision algorithms are computationally complex, but robots nowadays have access to diverse and powerful architectures that can be used for mobile robotics. The advent of low-cost RGB-D sensors such as the Microsoft Kinect, which provides 3D coloured point clouds at high frame rates, has made computer vision even more relevant to mobile robotics. The combination of visual and 3D data allows systems to apply both computer vision and 3D processing, and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Awareness of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance. In addition, acquiring a 3D model of a scene is useful in many areas, such as video-game scene modelling, where well-known places are reconstructed and added to game systems, or advertising, where, once the 3D model of a room is obtained, the system can add furniture using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene-mapping purposes. Different methods are tested and analysed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition, and topology representation of various kinds of data. Until now, self-organizing maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, GNG is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce 3D information using plane detection as the basic compression structure. This is because our target environments are man-made and therefore contain many points belonging to planar surfaces. Our method achieves good compression results in such man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms.
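The plane-based compression idea (store plane parameters instead of the raw points that lie on them) can be sketched with a least-squares plane fit. This is a simplified illustration, not the thesis's detection pipeline:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to an N x 3 point cloud by SVD.

    Returns (centroid, unit_normal). Inliers of a detected plane can then
    be stored as plane parameters plus a 2-D boundary instead of raw 3-D
    points -- the essence of plane-based compression for man-made scenes
    (simplified sketch; not the thesis's detection pipeline).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]   # last right singular vector: least variance
```

Storing one centroid, one normal, and a boundary polygon per plane replaces thousands of raw depth points, which is where the compression gain comes from.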
Finally, we have also demonstrated the strength of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called virtual digitizing.
Abstract:
General-purpose RGB-D sensors are devices capable of providing colour and depth information about a scene. Because of their wide range of applications, these sensors attract great interest in multiple areas, and in some cases they operate at the limits of their sensitivity. Calibration methods are, if anything, even more important for this type of sensor, in order to improve the precision of the acquired data. For this reason, it is extremely important to analyse and study the calibration of these general-purpose RGB-D sensors. This work studies the different technologies used to determine depth, of which structured light and time of flight are the most common. In addition, it analyses the sensor parameters that influence the acquisition of data with adequate precision, depending on the problem at hand. Calibration determines, as the first element of the vision process, the characteristic parameters that define an artificial vision system, in this case those that improve the accuracy and precision of the data provided. This work analyses three calibration algorithms, both general-purpose and specific-purpose, to calibrate three widely used sensors: Microsoft Kinect, PrimeSense Carmine 1.09, and Microsoft Kinect v2. The first two use structured-light technology to determine depth, while the third uses time of flight. The experiments quantitatively determine the accuracy and precision of the sensors and their improvement through the calibration process, reporting the best results for each case. Finally, in order to demonstrate the calibration process in a global registration system, various tests were carried out with the µ-MAR registration method.
Visual inspection was used to assess the behaviour of the captured data corrected according to the results of the different calibration algorithms. This highlights the importance of having accurate data for certain applications, such as 3D registration of a scene.
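The accuracy/precision evaluation described above can be illustrated with a minimal sketch: fit a correction to measured-versus-reference depths and report the mean error (accuracy) and error spread (precision) before and after. The linear model is an illustrative assumption, not one of the calibration models analysed in the work:

```python
import numpy as np

def depth_correction(measured, truth):
    """Fit truth ~= a * measured + b and report error statistics.

    Returns the fitted (a, b) plus (mean error, error std) before and
    after correction -- mean error as a proxy for accuracy, std as a
    proxy for precision. The linear model is an illustrative assumption,
    not one of the calibration models analysed in the work.
    """
    measured = np.asarray(measured, dtype=float)
    truth = np.asarray(truth, dtype=float)
    a, b = np.polyfit(measured, truth, 1)
    corrected = a * measured + b

    def stats(err):
        return float(np.mean(err)), float(np.std(err))

    return (a, b), stats(measured - truth), stats(corrected - truth)
```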
Abstract:
Master's dissertation, Electrical and Electronic Engineering, specialization in Energy and Control Systems, Instituto Superior de Engenharia, Universidade do Algarve, 2015
Abstract:
Master's dissertation, Electrical and Electronic Engineering, Instituto Superior de Engenharia, Universidade do Algarve, 2015
Abstract:
This work presents two new, simple systems for analysing human gait using a depth camera (Microsoft Kinect) placed in front of a subject walking on a conventional treadmill, capable of distinguishing healthy from impaired gait. The first system relies on the fact that normal gait typically produces a smooth depth signal at each pixel, with little high-frequency content, which makes it possible to estimate a map indicating the location and amplitude of high-frequency spectral energy (HFSE). The second system analyses the body parts whose movement pattern is irregular, in terms of periodicity, during walking. We assume that a healthy subject's gait exhibits, everywhere on the body and throughout the gait cycles, a depth signal with a noise-free periodic pattern. From each subject's video sequence, we estimate a map showing the areas of gait irregularity (also called aperiodic noise energy). The HFSE map, or the map visualizing aperiodic noise energy, can be used as a good indicator of possible pathology in an early, fast, and reliable diagnostic tool, or can provide information about the presence and extent of the patient's disease or problems (orthopaedic, muscular, or neurological). Even though the resulting maps are informative and highly discriminative for direct visual classification, even by a non-specialist, the proposed systems can automatically distinguish healthy individuals from those with locomotor problems.
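The HFSE idea (per-pixel temporal spectrum, energy summed above a cutoff) can be sketched as follows. The frame rate and cutoff frequency are illustrative choices, not values from the paper:

```python
import numpy as np

def hf_energy(depth_signal, fps=30.0, cutoff_hz=3.0):
    """High-frequency spectral energy of one pixel's depth signal.

    Sums spectral power above cutoff_hz after removing the mean -- the
    HFSE idea from the abstract. The frame rate and cutoff are
    illustrative choices, not values from the paper.
    """
    sig = np.asarray(depth_signal, dtype=float)
    sig = sig - sig.mean()                        # drop the DC component
    spectrum = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    return float(spectrum[freqs > cutoff_hz].sum())
```

Applying this per pixel over a walking sequence yields the map described above: smooth, periodic regions score near zero, while irregular regions light up.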
Abstract:
Human-machine interaction has evolved significantly in recent years, to the point of enabling solutions suited to supporting people with certain physical or cognitive limitations. The development of natural and intuitive interaction techniques, so-called Natural User Interfaces (NUI), today allows people who are bedridden and/or motor-impaired to carry out a set of actions by means of gestures, thereby increasing their quality of life. The solution implemented in this project is based on image processing and computer vision using the Kinect 3D sensor, and consists of a natural interface for an application that recognizes gestures made by a human hand. The gestures identified by the application trigger a set of actions appropriate for a bedridden person, such as calling for emergency help, turning the TV on or off, or controlling the inclination of the bed. The development process involved several stages. Initially there was intense research into the techniques and technologies considered important for the work; this research stage accompanied practically the entire process. The second stage consisted of configuring the system at the hardware and software levels. After system configuration, the first data were obtained from the Kinect 3D sensor and converted into a format better suited to subsequent processing. Hand segmentation then allowed gestures to be recognized through a matching technique for the six implemented gestures. The results obtained are satisfactory, with about 96% valid results. The health and well-being sector needs applications that improve the quality of life of bedridden people; in that sense, the prototype developed makes perfect sense in today's society, where the population is ageing.
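The matching step for the six gestures can be illustrated with a minimal template-matching sketch. Intersection-over-union on binary hand masks is an assumed stand-in for the matching technique actually used, and hand segmentation from Kinect depth data is taken as already done:

```python
import numpy as np

def best_gesture(mask, templates):
    """Return the name of the template best matching a binary hand mask.

    Intersection-over-union is an assumed stand-in for the matching
    technique used in the project; segmentation of the hand from Kinect
    depth data is taken as already done.
    """
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0
    return max(templates, key=lambda name: iou(mask, templates[name]))
```

A real system would first normalize the segmented hand for position and scale so the comparison is pose-invariant.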
Abstract:
Drosophila suzukii (Diptera: Drosophilidae), known as the spotted-wing drosophila (SWD) or suzuki, is a quarantine pest native to Asia that is currently expanding worldwide. In 2008, SWD was collected in the USA (California) and has since been recorded in other American states (WALSH et al. 2011) and also in Europe (CINI et al. 2012). In Brazil, the pest was detected in 2014, causing losses of around 30% in strawberry crops in the state of Rio Grande do Sul (SANTOS, 2014a). The damage is caused by larval feeding on fruit still attached to the plants and by the introduction of pathogens at the oviposition site. The attacked fruit collapses, showing intense fluid loss. SWD hosts include fruit crops that produce thin-skinned fruit, for example the small fruits: strawberry, raspberry, blackberry, and blueberry. Since the species was only recently introduced to Brazil, little information exists on the efficiency of attractants for monitoring its populations. Apple cider vinegar has been used in several scientific studies and has even been suggested as an attractant for monitoring the species in Brazil (SANTOS, 2014b). Nevertheless, its attractiveness is reported to be short-lived and of low selectivity. Thus, Santos (2016) recommends, in place of apple cider vinegar, an attractant based on biological yeast, sugar, and water, which has proven promising and selective for SWD monitoring. In the USA, after extensive laboratory and field evaluation, essential chemical components of the attractiveness to D. suzukii were isolated; these are being produced and sold in dispenser form under the trade names Pherocon® SWD and Scentry® SWD.
Since no information exists on the efficiency and selectivity of these products for monitoring suzuki in Brazil, the present study was planned, with the objective of evaluating the capture and selectivity of attractants and mixtures for monitoring D. suzukii in a raspberry orchard in the municipality of Vacaria, RS.