804 results for Depth sensor
Abstract:
Physical places are given contextual meaning by the objects and people that make up the space. Presence in physical places can be utilised to support mobile interaction by making access to media and notifications on a smartphone easier and more visible to other people. Smartphone interfaces can be extended into the physical world in a meaningful way by anchoring digital content to artefacts, and interactions situated around physical artefacts can provide contextual meaning to private manipulations with a mobile device. Additionally, places themselves are designed to support a set of tasks, and the logical structure of places can be used to organise content on the smartphone. Menus that adapt the functionality of a smartphone can support the user by presenting the tools most likely to be needed just-in-time, so that information needs can be satisfied quickly and with little cognitive effort. Furthermore, places are often shared with people whom the user knows, and the smartphone can facilitate social situations by providing access to content that stimulates conversation. However, the smartphone can disrupt a collaborative environment by alerting the user with unimportant notifications, or by drawing the user into the digital world with attractive content that is only shown on a private screen. Sharing smartphone content on a situated display creates an inclusive and unobtrusive user experience, and can increase focus on a primary task by allowing content to be read at a glance. Mobile interaction situated around artefacts of personal places is investigated as a way to support users in accessing content from their smartphone while managing their physical presence. A menu that adapts to personal places is evaluated to reduce the time and effort of app navigation, and coordinating smartphone content on a situated display is found to support social engagement and the negotiation of notifications. Improving the sensing of smartphone users in places is a challenge that is outside the scope of this thesis. Instead, interaction designers and developers should be provided with low-cost positioning tools that utilise presence in places and enable quantitative and qualitative data to be collected in user evaluations. Two lightweight positioning tools are developed with the low-cost sensors that are currently available: the Microsoft Kinect depth sensor allows the movements of a smartphone user to be tracked in a limited area of a place, and Bluetooth beacons enable the larger context of a place to be detected. Positioning experiments with each sensor are performed to highlight the capabilities and limitations of current sensing techniques for designing interactions with a smartphone. Both tools enable prototypes to be built with a rapid prototyping approach, and mobile interactions can be tested with more advanced sensing techniques as they become available. Sensing technologies are becoming pervasive, and it will soon be possible to perform reliable place detection in-the-wild. Novel interactions that utilise presence in places can support smartphone users by making access to useful functionality easy and more visible to the people who matter most in everyday life.
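To make the beacon-based positioning tool concrete, here is a minimal, hypothetical sketch of place detection from a single Bluetooth scan; it is an illustration of the general idea, not the thesis's actual tool, and the strongest-beacon-wins rule and all names are assumptions.

```python
def detect_place(scan, beacon_places):
    # scan: dict mapping beacon ID -> RSSI in dBm from one Bluetooth scan.
    # beacon_places: dict mapping beacon ID -> place label (e.g. "kitchen").
    # Minimal heuristic: the strongest visible known beacon decides the place.
    visible = {b: rssi for b, rssi in scan.items() if b in beacon_places}
    if not visible:
        return None  # no known beacon in range: larger place context unknown
    strongest = max(visible, key=visible.get)
    return beacon_places[strongest]

# Example usage with hypothetical beacon IDs:
# detect_place({"b1": -58, "b2": -80}, {"b1": "desk", "b2": "meeting room"})
# -> "desk"
```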
Abstract:
This thesis proposes a generic visual perception architecture for robotic clothes perception and manipulation. The proposed architecture is fully integrated with a stereo vision system and a dual-arm robot, and is able to perform a number of autonomous laundering tasks. Clothes perception and manipulation is a novel research topic in robotics and has experienced rapid development in recent years. Compared to the task of perceiving and manipulating rigid objects, clothes perception and manipulation poses a greater challenge. This can be attributed to two reasons: firstly, deformable clothing requires precise (high-acuity) visual perception and dexterous manipulation; secondly, as clothing approximates a non-rigid 2-manifold in 3-space that can adopt a quasi-infinite configuration space, the potential variability in the appearance of clothing items makes them difficult for a machine to understand, identify uniquely, and interact with. From an applications perspective, and as part of the EU CloPeMa project, the integrated visual perception architecture refines a pre-existing clothing manipulation pipeline by completing pre-wash clothes (category) sorting (using single-shot or interactive perception for garment categorisation and manipulation) and post-wash dual-arm flattening. To the best of the author's knowledge, the autonomous clothing perception and manipulation solutions presented here were first proposed and reported by the author. All of the reported robot demonstrations in this work follow a perception-manipulation methodology, where visual and tactile feedback (in the form of surface wrinkledness captured by the high-accuracy depth sensor, i.e. the CloPeMa stereo head, or the predictive confidence modelled by Gaussian Processes) serves as the halting criterion in the flattening and sorting tasks, respectively. From a scientific perspective, the proposed visual perception architecture addresses the above challenges by parsing and grouping 3D clothing configurations hierarchically, from low-level curvatures, through mid-level surface shape representations (providing topological descriptions and 3D texture representations), to high-level semantic structures and statistical descriptions. A range of visual features such as Shape Index, Surface Topologies Analysis and Local Binary Patterns have been adapted within this work to parse clothing surfaces and textures, and several novel features have been devised, including B-Spline Patches with Locality-Constrained Linear coding, and Topology Spatial Distance to describe and quantify generic landmarks (wrinkles and folds). The essence of the proposed architecture is 3D generic surface parsing and interpretation, which is critical to underpinning a number of laundering tasks and has the potential to be extended to other rigid and non-rigid object perception and manipulation tasks.
The experimental results presented in this thesis demonstrate that: firstly, the proposed grasping approach achieves 84.7% accuracy on average; secondly, the proposed flattening approach is able to flatten towels, t-shirts and pants (shorts) within 9 iterations on average; thirdly, the proposed clothes recognition pipeline can recognise clothes categories from highly wrinkled configurations and advances the state of the art by 36% in terms of classification accuracy, achieving an 83.2% true-positive classification rate when discriminating between five categories of clothes; finally, the Gaussian Process based interactive perception approach exhibits a substantial improvement over single-shot perception. Accordingly, this thesis has advanced the state of the art of robot clothes perception and manipulation.
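As an illustration of the low-level curvature parsing stage, the Shape Index mentioned above has a standard closed form due to Koenderink and van Doorn. The sketch below computes it from principal curvature maps; the sign convention and range ([-1, 1], with k1 >= k2) are one common variant and are assumptions here, not necessarily the exact formulation used in the thesis.

```python
import numpy as np

def shape_index(k1, k2):
    # k1, k2: arrays of principal curvatures at each surface point.
    # One common convention: S = (2/pi) * arctan((k1 + k2) / (k1 - k2)),
    # with k1 >= k2. S ~ +1 for domes, ~ -1 for cups, ~ 0 for saddles.
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    # arctan2 handles the umbilic case k1 == k2 (denominator zero) gracefully;
    # planar points (k1 == k2 == 0) map to 0, though S is undefined there.
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

# A strong ridge-like wrinkle (k1 ~ 0, k2 << 0) yields S ~ -0.5:
# shape_index(np.array([0.0]), np.array([-2.0])) -> array([-0.5])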
Abstract:
This work presents two new, simple systems for analysing human gait using a depth camera (Microsoft Kinect) placed in front of a subject walking on a conventional treadmill, capable of distinguishing healthy from impaired gait. The first system relies on the fact that normal gait typically produces a smooth depth signal at each pixel, with little high-frequency content, which makes it possible to estimate a map indicating the location and amplitude of high-frequency spectral energy (HFSE). The second system analyses the body parts whose motion pattern is irregular, in terms of periodicity, during walking. We assume that the gait of a healthy subject exhibits, everywhere on the body and across gait cycles, a depth signal with a noise-free periodic pattern. From each subject's video sequence, we estimate a map showing the areas of gait irregularity (also called aperiodic noise energy). The HFSE map, or the map visualising aperiodic noise energy, can serve as a good indicator of possible pathology in an early, fast and reliable diagnostic tool, or can provide information about the presence and extent of a patient's disease or problems (orthopaedic, muscular or neurological). Although the resulting maps are informative and discriminative enough for direct visual classification, even by a non-specialist, the proposed systems can also automatically distinguish healthy individuals from those with locomotor problems.
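A minimal sketch of the first system's core idea, per-pixel high-frequency spectral energy over a depth video, might look as follows. The frame rate and cutoff frequency are illustrative assumptions, and the papers' exact spectral estimator may differ.

```python
import numpy as np

def hfse_map(depth_seq, fps=30.0, cutoff_hz=3.0):
    # depth_seq: (T, H, W) array of depth frames of a treadmill walking sequence.
    # For each pixel, take the temporal FFT and sum the spectral power above
    # an assumed cutoff; healthy gait should leave little energy up there.
    T = depth_seq.shape[0]
    detrended = depth_seq - depth_seq.mean(axis=0)   # remove per-pixel mean depth
    spec = np.fft.rfft(detrended, axis=0)            # (T//2+1, H, W) spectrum
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    high = freqs > cutoff_hz                         # high-frequency bins only
    return (np.abs(spec[high]) ** 2).sum(axis=0)     # (H, W) HFSE map

# Pixels with large values in the returned map flag body parts whose depth
# signal is rough rather than smooth, i.e. candidate gait irregularities.
```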
Abstract:
We report in this paper the recent advances we obtained in optimizing a color image sensor based on the laser-scanned-photodiode (LSP) technique. A novel device structure based on an a-SiC:H/a-Si:H pin/pin tandem structure has been tested for a proper color separation process that takes advantage of the different filtering properties arising from the different light penetration depths at different wavelengths in a-Si:H and a-SiC:H. While the green and red images give a weak response in comparison with previously tested structures, this structure shows very good recognition of blue under reverse bias, leaving a good margin for future device optimization towards a complete and satisfactory RGB image mapping. Experimental results on the spectral collection efficiency are presented and discussed from the point of view of color sensor applications. The physics behind the device's operation is explained by means of a numerical simulation of the internal electrical configuration of the device.
Abstract:
Time-sensitive Wireless Sensor Network (WSN) applications require finite delay bounds in critical situations. This paper provides a methodology for the modeling and worst-case dimensioning of cluster-tree WSNs. We provide a detailed model of the worst-case cluster-tree topology characterized by its depth, the maximum number of child routers and the maximum number of child nodes for each parent router. Using Network Calculus, we derive "plug-and-play" expressions for the end-to-end delay bounds, buffering and bandwidth requirements as a function of the WSN cluster-tree characteristics and traffic specifications. The cluster-tree topology has been adopted by many cluster-based solutions for WSNs. We demonstrate how to apply our general results to dimensioning IEEE 802.15.4/ZigBee cluster-tree WSNs. We believe that this paper shows the fundamental performance limits of cluster-tree wireless sensor networks by providing a simple and effective methodology for the design of such WSNs.
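As a hedged illustration of the kind of "plug-and-play" expression Network Calculus yields, the sketch below computes the textbook end-to-end delay bound for a token-bucket arrival curve crossing a chain of rate-latency servers; the paper's cluster-tree-specific expressions are more detailed, and this is only the underlying classic result.

```python
def end_to_end_delay_bound(b, r, servers):
    # b, r: token-bucket arrival curve alpha(t) = b + r*t (burst, sustained rate).
    # servers: list of (R_i, T_i) rate-latency service curves along the path,
    #          beta_i(t) = R_i * max(t - T_i, 0).
    # Classic Network Calculus results: the concatenation of rate-latency
    # servers is rate-latency with R = min(R_i) and T = sum(T_i), and the
    # worst-case delay is the horizontal deviation D = T + b / R.
    R = min(Ri for Ri, _ in servers)
    T = sum(Ti for _, Ti in servers)
    assert r <= R, "unstable: arrival rate exceeds bottleneck service rate"
    return T + b / R

# Example with hypothetical numbers: a 2 kbit burst at 10 kbit/s crossing
# three hops of 50 kbit/s with 10 ms latency each:
# end_to_end_delay_bound(2000, 10000, [(50000, 0.01)] * 3) -> 0.07 s
```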
Abstract:
Dissertation presented to obtain the degree of Master in Biomedical Engineering.
Abstract:
Omnidirectional cameras offer a much wider field of view than perspective ones and alleviate the problems caused by occlusions. However, both types of camera suffer from a lack of depth perception. A practical method for obtaining depth in computer vision is to project a known structured light pattern onto the scene, avoiding the problems and costs involved in stereo vision. This paper focuses on combining omnidirectional vision and structured light with the aim of providing 3D information about the scene. The resulting sensor is formed by a single catadioptric camera and an omnidirectional light projector. It is also discussed how this sensor can be used in robot navigation applications.
Abstract:
We present a computer vision system that combines omnidirectional vision with structured light with the aim of obtaining depth information for a 360-degree field of view. The approach proposed in this article combines an omnidirectional camera with a panoramic laser projector. The article shows how the sensor is modelled, and its accuracy is demonstrated by means of experimental results. The proposed sensor provides useful information for robot navigation applications, pipe inspection, 3D scene modelling, etc.
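The geometric model underlying such structured-light sensors reduces, per pixel, to intersecting a viewing ray with the projected laser plane. The following sketch shows that triangulation step under assumed geometry (camera at the origin, laser plane passing through the emitter position); it is an illustration of the principle, not the calibrated catadioptric model from the article.

```python
import numpy as np

def laser_depth(ray_dir, emitter_pos, plane_normal):
    # ray_dir: unit direction of the back-projected camera ray (camera at origin).
    # emitter_pos: 3-vector position of the laser emitter (the baseline).
    # plane_normal: unit normal of the projected laser plane through emitter_pos.
    # A scene point t * ray_dir on the laser plane satisfies
    #   dot(n, t * d - emitter_pos) = 0  =>  t = dot(n, emitter_pos) / dot(n, d).
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None  # ray (nearly) parallel to the laser plane: no intersection
    t = np.dot(plane_normal, emitter_pos) / denom
    return t * np.asarray(ray_dir) if t > 0 else None  # 3D point, if in front

# Example with hypothetical geometry: emitter 10 cm above the camera,
# laser plane tilted 45 degrees, pixel ray looking straight ahead:
# laser_depth([0, 0, 1], [0, 0.1, 0], [0, 0.7071, -0.7071]) -> ~[0, 0, 0.1]
```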
Abstract:
Catadioptric sensors are combinations of mirrors and lenses designed to obtain a wide field of view. In this paper we propose a new sensor that has omnidirectional viewing ability and also provides depth information about the nearby surroundings. The sensor is based on a conventional camera coupled with a laser emitter and two hyperbolic mirrors. The mathematical formulation and precise specifications of the intrinsic and extrinsic parameters of the sensor are discussed. Our approach overcomes limitations of existing omnidirectional sensors and can ultimately lead to reduced production costs.
Abstract:
Hand Gesture Recognition (HGR) is currently an important research field because of the variety of situations in which it is necessary to communicate through signs, such as communication between people who use sign language and those who do not. This project presents a method for real-time hand gesture recognition using the Microsoft Xbox Kinect sensor, implemented in a Linux (Ubuntu) environment with the Python programming language, using the OpenCV computer vision library to process the data on a conventional laptop. Thanks to the Kinect sensor's ability to capture depth data of a scene, the positions and trajectories of objects can be determined in three dimensions, which makes a complete real-time analysis of an image or a sequence of images possible. The proposed recognition procedure is based on segmenting the image so as to work only with the hand, detecting its contours, then obtaining the convex hull and the convexity defects, which finally serve to determine the number of fingers and interpret the gesture; the end result is the transcription of its meaning in a window that serves as an interface with the interlocutor. The application can recognise the numbers from 0 to 5, since only one hand is analysed, some popular gestures, and some of the letters of the fingerspelling alphabet of Catalan Sign Language. The project is thus a gateway to the field of gesture recognition and the basis for a future sign language recognition system capable of transcribing both dynamic signs and the fingerspelling alphabet.
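The contour / convex hull / convexity-defect pipeline described above maps closely onto OpenCV primitives. Below is a minimal sketch of the finger-counting step, assuming the hand has already been segmented by depth into a binary mask; the angle and defect-depth thresholds are illustrative assumptions, and the findContours signature shown is OpenCV 4's.

```python
import cv2
import numpy as np

def count_fingers(mask):
    # mask: 8-bit binary image with the hand segmented (e.g. by depth range).
    # OpenCV 4 returns (contours, hierarchy); OpenCV 3 prepends the image.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    cnt = max(contours, key=cv2.contourArea)          # largest blob = the hand
    hull = cv2.convexHull(cnt, returnPoints=False)    # hull as contour indices
    defects = cv2.convexityDefects(cnt, hull)         # Nx1x4: start, end, far, depth
    if defects is None:
        return 0
    valleys = 0
    for s, e, f, depth in defects[:, 0]:
        start, end, far = cnt[s][0], cnt[e][0], cnt[f][0]
        u, v = start - far, end - far
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
        # deep, acute defects are the valleys between extended fingers;
        # depth is fixed-point (distance * 256), so 10000 is ~39 px (assumed).
        if np.arccos(np.clip(cos, -1.0, 1.0)) < np.pi / 2 and depth > 10000:
            valleys += 1
    # n extended fingers produce n-1 valleys; a fist or single finger yields
    # none, a classic limitation of this heuristic.
    return valleys + 1 if valleys else 0
```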
Abstract:
The recent emergence of low-cost RGB-D sensors has brought new opportunities for robotics by providing affordable devices that deliver synchronized images with both color and depth information. In this thesis, recent work on pose estimation utilizing RGB-D sensors is reviewed, and a pose recognition system for rigid objects using RGB-D data is implemented. The implementation uses half-edge primitives extracted from the RGB-D images for pose estimation. The system is based on the probabilistic object representation framework of Detry et al., which utilizes Nonparametric Belief Propagation for pose inference. Experiments are performed on household objects to evaluate the performance and robustness of the system.
Abstract:
The emergence of depth sensors has made it possible to track not only monocular cues but also the actual depth values of the environment. This is especially useful in augmented reality solutions, where the position and orientation (pose) of the observer must be accurately determined so that virtual objects can be rendered into the user's view through, for example, the screen of a tablet or augmented reality glasses (e.g. Google Glass). Although early 3D sensors were physically quite large, their size is decreasing, and eventually a 3D sensor could be embedded in, for example, augmented reality glasses. The wider subject area considered in this review is 3D SLAM (Simultaneous Localization and Mapping) methods, which take advantage of the 3D information provided by modern RGB-D sensors such as Microsoft Kinect. A review of SLAM and 3D tracking for augmented reality is therefore timely. We also try to identify the limitations and possibilities of the different tracking methods, and how they should be improved to allow efficient integration into the augmented reality solutions of the future.