4 results for sliding contact
at Massachusetts Institute of Technology
Abstract:
This thesis examines a tactile sensor and a thermal sensor for use with the Utah-MIT dexterous four-fingered hand. Sensory feedback is critical for full utilization of its advanced manipulatory capabilities. The hand itself provides tendon tension and joint angle information, but the planned control algorithms require more information than these sources can provide. The tactile sensor uses capacitive transduction with a novel design based entirely on silicone elastomers. It provides an 8 x 8 array of force cells with 1.9 mm center-to-center spacing; a pressure resolution of 8 significant bits is available over a 0 to 200 grams per square mm range. The thermal sensor measures a material's heat conductivity by radiating heat into an object and measuring the resulting temperature variations. This sensor has a 4 x 4 array of temperature cells with 3.5 mm center-to-center spacing. Experiments show that the thermal sensor can discriminate among materials by detecting differences in their thermal conduction properties. Both sensors meet the stringent mounting requirements posed by the Utah-MIT hand, and combining them to form a single sensor with both tactile and thermal capabilities will ultimately be possible. The computational requirements for controlling a sensor-equipped dexterous hand are severe; conventional single-processor computers do not provide adequate performance. To overcome these difficulties, a computational architecture based on interconnected high-performance microcomputers and a set of software primitives tailored for sensor-driven control has been proposed. The system has been implemented and tested on the Utah-MIT hand. The hand, equipped with tactile and thermal sensors and controlled by this computational architecture, is one of the most advanced robotic manipulatory devices available worldwide. Other ongoing projects will exploit these tools and allow the hand to perform tasks that exceed the capabilities of current-generation robots.
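As a concrete illustration of the quoted sensor specifications, the sketch below maps raw 8-bit readings from an 8 x 8 capacitive array onto the stated 0 to 200 grams per square mm pressure range. The function names and the calibration scheme are illustrative assumptions, not the thesis's actual conversion procedure.

```python
import numpy as np

# Hypothetical helper illustrating the numbers quoted in the abstract:
# an 8 x 8 capacitive array whose 8-bit readings span 0-200 g/mm^2.
# Names and calibration scheme are illustrative, not from the thesis.

FULL_SCALE_G_PER_MM2 = 200.0   # quoted pressure range
ADC_MAX = 255                  # 8 significant bits

def counts_to_pressure(raw_counts, zero_offset=None):
    """Map raw 8-bit cell readings to pressure in g/mm^2.

    raw_counts: (8, 8) array of integer ADC counts.
    zero_offset: optional per-cell no-load baseline for calibration.
    """
    counts = raw_counts.astype(float)
    if zero_offset is not None:
        counts = np.clip(counts - zero_offset, 0, ADC_MAX)
    return counts * (FULL_SCALE_G_PER_MM2 / ADC_MAX)

# Example: a uniform load reading ~64 counts maps to ~50.2 g/mm^2.
frame = np.full((8, 8), 64)
print(counts_to_pressure(frame)[0, 0])
```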
Abstract:
Humans can effortlessly manipulate objects in their hands, dexterously sliding and twisting them within their grasp. Robots, however, have none of these capabilities; they simply grasp objects rigidly in their end effectors. To investigate this common form of human manipulation, an analysis of controlled slipping of a grasped object within a robot hand was performed. The Salisbury robot hand demonstrated many of these controlled slipping techniques, illustrating the results of this analysis. First, the possible slipping motions were found as a function of the location, orientation, and types of contact between the hand and the object. Second, for a given grasp, the contact types were determined as a function of the grasping force and the external forces on the object. Finally, by changing the grasping force, the robot modified the constraints on the object and effected controlled slipping motions.
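The dependence of contact type on grasping force can be illustrated with the standard Coulomb friction-cone test: a contact sticks while the tangential load stays inside the cone and slips once it leaves. The sketch below is this textbook model only, with an assumed friction coefficient; it is not the thesis's full contact analysis.

```python
# Minimal Coulomb-friction sketch of the idea that contact type depends
# on the grasp (normal) force and the external (tangential) load. This is
# the standard friction-cone test, not the thesis's analysis; the names
# and friction coefficient are illustrative.

MU = 0.5  # assumed coefficient of friction between fingertip and object

def contact_type(normal_force, tangential_force):
    """Classify a point contact as separated, sticking, or slipping."""
    if normal_force <= 0.0:
        return "separated"          # no contact force at all
    if abs(tangential_force) <= MU * normal_force:
        return "sticking"           # load inside the friction cone
    return "slipping"               # load exceeds what friction can resist

# Increasing the grasp force enlarges the friction cone, turning a
# slipping contact back into a sticking one.
print(contact_type(normal_force=1.0, tangential_force=0.8))  # slipping
print(contact_type(normal_force=2.0, tangential_force=0.8))  # sticking
```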
Abstract:
This paper describes a new statistical, model-based approach to building a contact state observer. The observer uses measurements of the contact force and position, along with prior information about the task encoded in a graph, to determine the current location of the robot in the task configuration space. Each node represents what the measurements will look like in a small region of configuration space by storing a predictive, statistical measurement model. This approach assumes that the measurements are statistically block independent conditioned on knowledge of the model, which is a fairly good model of the actual process. Arcs in the graph represent possible transitions between models. Beam Viterbi search is used to match the measurement history against possible paths through the model graph in order to estimate the most likely path for the robot. The resulting approach provides a new decision process that can be used as an observer for event-driven manipulation programming. The decision procedure is significantly more robust than simple threshold decisions because it bases its decisions on the full measurement history. The approach can be used to enhance the capabilities of autonomous assembly machines and in quality control applications.
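A minimal sketch of this observer structure might look like the following: each node stores a Gaussian measurement model, arcs list the allowed contact-state transitions, and a beam search keeps only the best-scoring partial paths at each step. The graph, model parameters, and beam width below are all illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

# Illustrative beam-Viterbi contact state observer: nodes carry Gaussian
# measurement models, arcs define allowed transitions. All states, model
# parameters, and the beam width are assumed for this sketch.

MODELS = {
    "free_space":   {"mean": np.array([0.0, 0.0]), "var": 0.1},
    "edge_contact": {"mean": np.array([2.0, 0.5]), "var": 0.2},
    "face_contact": {"mean": np.array([3.0, 0.0]), "var": 0.2},
}
ARCS = {  # allowed contact-state transitions
    "free_space":   ["free_space", "edge_contact"],
    "edge_contact": ["edge_contact", "face_contact", "free_space"],
    "face_contact": ["face_contact", "edge_contact"],
}

def log_likelihood(state, z):
    """Log-likelihood of measurement z under a node's Gaussian model."""
    m = MODELS[state]
    diff = z - m["mean"]
    return -0.5 * float(diff @ diff) / m["var"]

def beam_viterbi(measurements, start="free_space", beam_width=2):
    """Return the most likely state path for a measurement sequence."""
    beam = [(0.0, [start])]                      # (log score, path)
    for z in measurements:
        candidates = []
        for score, path in beam:
            for nxt in ARCS[path[-1]]:
                candidates.append((score + log_likelihood(nxt, z),
                                   path + [nxt]))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = candidates[:beam_width]           # prune to the beam
    return beam[0][1]

# Measurements drifting from free space into face contact.
zs = [np.array([0.1, 0.0]), np.array([1.8, 0.4]), np.array([2.9, 0.1])]
print(beam_viterbi(zs))
# ['free_space', 'free_space', 'edge_contact', 'face_contact']
```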
Abstract:
This thesis presents a perceptual system for a humanoid robot that integrates abilities such as object localization and recognition with the deeper developmental machinery required to forge those competences out of raw physical experiences. It shows that a robotic platform can build up and maintain a system for object localization, segmentation, and recognition, starting from very little. What the robot starts with is a direct solution to achieving figure/ground separation: it simply 'pokes around' in a region of visual ambiguity and watches what happens. If the arm passes through an area, that area is recognized as free space. If the arm collides with an object, causing it to move, the robot can use that motion to segment the object from the background. Once the robot can acquire reliable segmented views of objects, it learns from them, and from then on recognizes and segments those objects without further contact. Both low-level and high-level visual features can also be learned in this way, and examples are presented for both: orientation detection and affordance recognition, respectively. The motivation for this work is simple. Training on large corpora of annotated real-world data has proven crucial for creating robust solutions to perceptual problems such as speech recognition and face detection. But the powerful tools used during training of such systems are typically stripped away at deployment. Ideally they should remain, particularly for unstable tasks such as object detection, where the set of objects needed in a task tomorrow might be different from the set of objects needed today. The key limiting factor is access to training data, but as this thesis shows, that need not be a problem on a robotic platform that can actively probe its environment, and carry out experiments to resolve ambiguity. This work is an instance of a general approach to learning a new perceptual judgment: find special situations in which the perceptual judgment is easy and study these situations to find correlated features that can be observed more generally.
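The poke-to-segment step can be sketched with a simple frame difference between images captured before and after the arm disturbs the object: pixels that change mark the figure against the static ground. This stands in for the thesis's actual segmentation machinery, with an assumed threshold and toy images.

```python
import numpy as np

# Illustrative sketch of the poking idea: once the arm nudges an object,
# the induced motion separates figure from ground. A plain frame
# difference stands in for the thesis's segmentation machinery; the
# threshold and toy images are assumptions.

MOTION_THRESHOLD = 25  # assumed grayscale difference threshold

def segment_by_motion(before, after):
    """Return a boolean mask of pixels that moved when the object was poked.

    before, after: (H, W) grayscale frames captured around the poke.
    """
    diff = np.abs(after.astype(int) - before.astype(int))
    return diff > MOTION_THRESHOLD

# Toy example: a bright 'object' shifts two pixels to the right.
before = np.zeros((8, 8), dtype=np.uint8)
before[3:5, 2:4] = 200
after = np.zeros((8, 8), dtype=np.uint8)
after[3:5, 4:6] = 200

mask = segment_by_motion(before, after)
print(mask.sum())  # 8 pixels changed: the object's old and new locations
```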