2 results for Visual Object Identification Task
in QSpace: Queen's University - Canada
Abstract:
This paper presents a solution to part of the problem of making robotic or semi-robotic digging equipment less dependent on human supervision. A method is described for identifying rocks of a certain size that may affect digging efficiency or require special handling. The process involves three main steps. First, using range and intensity data from a time-of-flight (TOF) camera, a feature descriptor is used to rank points and to segment the regions surrounding high-scoring points. This allows a wide range of rocks to be recognized, because a feature can represent a whole rock or only part of one. Second, these points are filtered to retain only points thought to belong to the large object. Finally, a check is carried out to verify that the resulting point cloud actually represents a rock. Results are presented from field testing on piles of fragmented rock.

Note to Practitioners—This paper presents an algorithm for identifying large boulders in a pile of broken rock as a step towards an autonomous mining dig planner. In mining, piles of broken rock can contain large fragments that may need special handling. To assess rock piles for excavation, we use a TOF camera, which does not rely on external lighting, to generate a point cloud of the rock pile. We then segment large boulders from the pile's surface using a novel feature descriptor and distinguish between real and false boulder candidates. Preliminary field experiments show promising results, with the algorithm performing nearly as well as human test subjects.
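To make the three-step process concrete, the following is a minimal Python sketch of such a rank-filter-verify pipeline. The height-prominence descriptor, the thresholds (score_thresh, radius, min_points, min_extent), and the plausibility check are illustrative assumptions only; they stand in for the paper's actual feature descriptor, filtering, and verification stages.

    # Sketch of the boulder-identification pipeline described in the abstract.
    # The descriptor and all thresholds are illustrative, not the authors' method.
    import numpy as np

    def descriptor_score(points, k=20):
        # Hypothetical per-point feature: height prominence over the k nearest
        # neighbours in the (x, y) plane.
        scores = np.empty(len(points))
        for i, p in enumerate(points):
            d = np.linalg.norm(points[:, :2] - p[:2], axis=1)
            nbrs = points[np.argsort(d)[1:k + 1]]
            scores[i] = p[2] - nbrs[:, 2].mean()
        return scores

    def segment_candidate(points, scores, score_thresh=0.05, radius=0.3):
        # Steps 1-2: keep high-scoring points and grow a region around the best one.
        mask = scores > score_thresh
        if not mask.any():
            return np.empty((0, 3))
        seed = points[mask][np.argmax(scores[mask])]
        return points[np.linalg.norm(points - seed, axis=1) < radius]

    def verify_boulder(candidate, min_points=50, min_extent=0.2):
        # Step 3: crude check that the candidate cloud is large enough to be a rock.
        if len(candidate) < min_points:
            return False
        extent = candidate.max(axis=0) - candidate.min(axis=0)
        return bool(extent[:2].min() > min_extent)

    if __name__ == "__main__":
        # Synthetic rock-pile surface with one protruding "boulder".
        rng = np.random.default_rng(0)
        pile = rng.uniform(0, 2, size=(2000, 3)) * np.array([1.0, 1.0, 0.05])
        boulder = rng.normal([1.0, 1.0, 0.25], 0.08, size=(200, 3))
        cloud = np.vstack([pile, boulder])
        cand = segment_candidate(cloud, descriptor_score(cloud))
        print("boulder detected:", verify_boulder(cand))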
Abstract:
Loss of a limb results in loss of function and a partial loss of freedom. A powered prosthetic device can partially assist an individual with everyday tasks and therefore return some level of independence. Powered upper limb prostheses are often controlled by surface electromyographic (SEMG) signals generated by the user. The goal of this thesis is to develop a virtual environment in which a user can control a virtual hand to safely grasp representations of everyday objects using EMG signals from his/her forearm muscles, while experiencing visual and vibrotactile feedback related to the grasping force. This environment can then be used to train potential wearers of real EMG-controlled prostheses, with or without vibrotactile feedback. To test the system, an experiment was designed and executed involving ten subjects, twelve objects, and three feedback conditions: visual, vibrotactile, and combined visual and vibrotactile. In each experimental exercise the subject attempted to grasp a virtual object on the screen using the virtual hand, controlled through EMG electrodes placed on his/her forearm. Two metrics were used: score, which measured grasp dexterity, and time to task completion. It was hypothesized that with the introduction of vibrotactile feedback, dexterity, and therefore score, would improve and time to task completion would decrease. Results showed that time to task completion increased and score did not improve with vibrotactile feedback. Details of the developed system, the experiment, and the results are presented in this thesis.
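As an illustration of the kind of control loop the thesis describes, here is a minimal Python sketch that maps a raw SEMG signal to a commanded grasp force and to a vibrotactile feedback amplitude. The envelope filter, the proportional gain, and the feedback law are illustrative assumptions, not the thesis's actual signal processing or feedback design.

    # Sketch of an EMG-to-grasp-force loop with force-proportional vibrotactile
    # feedback. Filtering, gains, and the feedback law are assumptions only.
    import numpy as np

    def emg_envelope(raw, fs=1000.0, window_s=0.1):
        # Rectify the raw SEMG signal and smooth it with a moving average.
        win = max(1, int(fs * window_s))
        return np.convolve(np.abs(raw), np.ones(win) / win, mode="same")

    def grasp_force(envelope, gain=5.0, max_force=10.0):
        # Hypothetical proportional mapping from muscle activation to grip force (N).
        return np.clip(gain * envelope, 0.0, max_force)

    def vibrotactile_amplitude(force, max_force=10.0):
        # Feedback amplitude (0-1) scales with the applied grip force.
        return force / max_force

    if __name__ == "__main__":
        # Simulated one-second SEMG trial: baseline noise plus a contraction burst.
        rng = np.random.default_rng(1)
        t = np.linspace(0, 1, 1000)
        raw = 0.05 * rng.standard_normal(1000)
        raw[400:700] += 0.8 * np.sin(2 * np.pi * 80 * t[400:700])
        force = grasp_force(emg_envelope(raw))
        print(f"peak grip force: {force.max():.1f} N, "
              f"peak vibration amplitude: {vibrotactile_amplitude(force.max()):.2f}")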