907 results for Robotic Excavation
Abstract:
New low-cost sensors and free, open libraries for 3D image processing are enabling important advances in robot vision applications, such as three-dimensional object recognition, semantic mapping, robot navigation and localization, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. The method is based on point clouds from range images captured by an RGB-D sensor. It works in real time and requires no visual markers, camera calibration, or prior knowledge of the environment. Moreover, it performs well even when multiple objects appear in the scene or when the ambient light changes. The method was designed as the basis of a human interface for remotely controlling domestic or industrial devices, and in this paper it is tested by operating a robotic hand. First, the human hand is recognized and the fingers are detected; second, the movement of the fingers is analysed and mapped so that it can be imitated by the robotic hand.
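As an illustration of the kind of point-cloud processing this abstract describes, below is a minimal NumPy sketch of one plausible step: picking fingertip candidates as the points of an already-segmented hand cloud that lie farthest from the palm centroid. The function name and the heuristic are hypothetical, not the authors' algorithm.

```python
import numpy as np

def fingertip_candidates(hand_points, n_fingers=5):
    """Return the points of a hand cloud farthest from the palm centroid.

    hand_points: (N, 3) array of 3-D points already segmented as the hand.
    This is only a rough heuristic: real fingertip detection would also
    cluster the candidates so that each finger contributes one point.
    """
    centroid = hand_points.mean(axis=0)           # rough palm centre
    dist = np.linalg.norm(hand_points - centroid, axis=1)
    order = np.argsort(dist)[::-1]                # farthest points first
    return hand_points[order[:n_fingers]]
```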
Abstract:
The use of 3D data in mobile robotics provides valuable information about the robot's environment. Traditionally, stereo cameras have been used as a low-cost 3D sensor, but their lack of precision and of texture on some surfaces suggests that other 3D sensors may be more suitable. In this work, we examine two such sensors: an infrared SR4000 and a Kinect camera. We combine the 3D data obtained by these cameras with features extracted from their 2D images, applying a Growing Neural Gas (GNG) network to the 3D data in order to obtain a robust egomotion technique. The GNG network is used to reduce the camera error. To compute the egomotion, we test two 3D registration methods: one based on the iterative closest point algorithm and the other on random sample consensus. Finally, a simultaneous localization and mapping method is applied to the complete sequence to reduce the global error. The error of each sensor and the mapping results of the proposed method are examined.
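One of the two registration methods named above, iterative closest point, can be sketched with NumPy and SciPy as below. This is a generic point-to-point ICP assuming roughly pre-aligned clouds; it is not the paper's GNG-based pipeline, and the function names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Point-to-point ICP: returns the rigid pose aligning src to dst."""
    tree = cKDTree(dst)
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                   # closest-point matches
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

Chaining the poses returned for consecutive frames gives a simple egomotion estimate, which is the role the registration step plays in the pipeline described above.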
Abstract:
This article presents an interactive Java software platform that enables any user to easily create advanced virtual laboratories (VLs) for Robotics. The tool provides both support for developing applications with a fully interactive 3D graphical interface and a complete functional framework for modelling and simulating arbitrary serial-link manipulators. In addition, its software architecture offers a large number of functionalities as high-level tools, so that any user can develop complex interactive robotic simulations with a minimum of programming. To illustrate the features of the platform, the article describes, step by step, the implementation of a complete VL for Robotics education using the presented approach. Finally, some educational results from the experience of applying this approach are reported.
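For readers unfamiliar with serial-link manipulator modelling, here is a minimal sketch of forward kinematics under the standard Denavit-Hartenberg convention (in Python rather than the platform's Java, purely for illustration; the function names and the DH-table format are assumptions, not the platform's API).

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_table, q):
    """Chain the per-joint transforms of a serial-link arm.

    dh_table: list of (d, a, alpha) per revolute joint; q: joint angles in rad.
    Returns the 4x4 pose of the end-effector in the base frame.
    """
    T = np.eye(4)
    for (d, a, alpha), theta in zip(dh_table, q):
        T = T @ dh_matrix(theta, d, a, alpha)
    return T
```

For example, a two-link planar arm with unit link lengths would use dh_table = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)] and q equal to the two joint angles.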
Abstract:
SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, determine their location within that map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Previous SLAM implementations have computational complexities ranging from O(N log N) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model that fuses the sensor information under the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections it has previously learned. After training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor, thereby perceiving different sections of its world. In addition, so that the system can be used in a low-cost robot, it relies on low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers.
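The O(N) scaling claimed above is the same scaling obtained by fusing each observed feature independently with a Bayesian update, as in the textbook log-odds occupancy update sketched below. The sensor probabilities and function names are assumptions for illustration, not the paper's specific model.

```python
import numpy as np

def logodds(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def update_map(log_odds_map, hit_cells, miss_cells, p_hit=0.7, p_miss=0.4):
    """One Bayesian update of a log-odds occupancy map from a laser scan.

    hit_cells / miss_cells index the cells observed as occupied / free.
    Each observed cell is updated independently in O(1), so a scan touching
    N cells costs O(N) overall.
    """
    log_odds_map[hit_cells] += logodds(p_hit)     # evidence for occupancy
    log_odds_map[miss_cells] += logodds(p_miss)   # evidence for free space
    return log_odds_map
```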
Abstract:
During grasping and intelligent robotic manipulation tasks, the camera position relative to the scene changes dramatically because the camera is mounted on the robot end-effector, which moves to adapt its path and correctly grasp objects. For this reason, in this type of environment a visual recognition system must recognize objects and obtain their positions in the scene automatically and autonomously. Furthermore, in industrial environments all objects manipulated by robots are made of the same material and cannot be differentiated by features such as texture or color. In this work, we first present a study and analysis of 3D recognition descriptors for application in these environments. Second, we propose a visual recognition system built on a specific distributed client-server architecture for recognizing industrial objects that lack such appearance features. The system is designed to overcome recognition problems that arise when objects can only be recognized by their geometric shape and the simplicity of those shapes can create ambiguity. Finally, real tests are performed and illustrated to verify the satisfactory performance of the proposed system.
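When objects differ only in geometric shape, recognition can be driven by a global shape descriptor computed from the point cloud. The sketch below uses a crude histogram of point distances to the centroid and a nearest-descriptor match; it is a hypothetical stand-in for the 3D descriptors analysed in the paper, not the proposed system.

```python
import numpy as np

def shape_distribution(points, bins=32):
    """Global shape descriptor: normalised histogram of distances to the centroid.

    points: (N, 3) object point cloud. Texture and color are ignored on purpose,
    since only geometry is assumed to be discriminative here.
    """
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def match(query_desc, model_descs):
    """Index of the model whose descriptor is closest to the query (L1 distance)."""
    return int(np.argmin([np.abs(query_desc - m).sum() for m in model_descs]))
```

In a distributed setting such as the one described, the client would typically send either the segmented cloud or the computed descriptor to the server, which holds the model database and returns the best match.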
Abstract:
Stroke is a prevalent disorder with an immense socioeconomic impact, and a variety of chronic neurological deficits result from it. In particular, sensorimotor deficits are a significant barrier to achieving post-stroke independence. Unfortunately, the majority of pre-clinical studies that show improved outcomes in animal stroke models have failed in clinical trials. Pre-clinical studies using non-human primate (NHP) stroke models prior to initiating human trials are a potential step toward improving translation from animal studies to clinical trials. Robotic assessment tools represent a quantitative, reliable, and reproducible means of assessing reaching behaviour following stroke in both humans and NHPs. We investigated the use of robotic technology to assess sensorimotor impairments in NHPs following middle cerebral artery occlusion (MCAO). Two cynomolgus macaques underwent transient MCAO for 90 minutes. Approximately 1.5 years after the procedure, these NHPs and two non-stroke control monkeys were trained in a reaching task with both arms in the KINARM exoskeleton, a robot that permits elbow and shoulder movements in the horizontal plane. The task required the NHPs to make reaching movements from a centrally positioned start target to one of eight peripheral targets uniformly distributed around the first target. To characterize sensorimotor deficiencies, we analyzed four movement parameters: reaction time, movement time (MT), initial direction error (IDE), and number of speed maxima. We hypothesized that the paretic limb of NHPs with MCAO would show reduced performance on these attributes during the neurobehavioural task compared to controls. Reaching movements with the non-affected limbs of control and experimental NHPs showed bell-shaped velocity profiles, whereas reaching movements with the affected limbs were highly variable. We found distinctive patterns in MT, IDE, and number of speed peaks between control and experimental monkeys and between the limbs of NHPs with MCAO: NHPs with MCAO demonstrated more speed peaks, longer MTs, and greater IDE in their paretic limb compared to controls. These initial results qualitatively match the performance of human stroke subjects, suggesting that robotic neurobehavioural assessment of NHPs with stroke is feasible and could have translational relevance for subsequent human studies. Further studies will be necessary to replicate and expand on these preliminary findings.
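The four movement parameters named above can be computed from a sampled hand trajectory roughly as in the sketch below. The speed threshold, the assumption that recording starts at the go cue, and the early-movement window are illustrative choices, not the study's exact definitions.

```python
import numpy as np

def reach_metrics(t, xy, target, speed_thresh=0.05):
    """Kinematic reach metrics from a sampled 2-D hand path.

    t: (N,) sample times in s (t[0] assumed to be the go cue),
    xy: (N, 2) hand positions, target: (2,) goal position.
    """
    v = np.gradient(xy, t, axis=0)
    speed = np.linalg.norm(v, axis=1)
    moving = speed > speed_thresh
    onset = int(np.argmax(moving))                 # first sample above threshold
    reaction_time = t[onset] - t[0]
    movement_time = t[-1] - t[onset]
    # Initial direction error: angle between the early movement vector and the
    # straight line from the movement-onset position to the target.
    early = xy[min(onset + 10, len(xy) - 1)] - xy[onset]
    ideal = target - xy[onset]
    cosang = np.dot(early, ideal) / (np.linalg.norm(early) * np.linalg.norm(ideal))
    ide = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    # Number of speed maxima: local peaks in the speed profile.
    peaks = int(np.sum((speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:])))
    return reaction_time, movement_time, ide, peaks
```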
Abstract:
An industrial manipulator equipped with an automatic clay extruder is used to realize a machine that can additively manufacture clay objects. The desired geometries are designed in 3D modelling software and then sliced into a sequence of layers with the same thickness as the extruded clay section. The profile of each layer is transformed into a trajectory for the extruder and therefore for the end-effector of the manipulator. The goal of this thesis is to improve the algorithm for inverse kinematic resolution and to integrate the routine within the development software that controls the machine (Rhino/Grasshopper). The kinematic model is described by homogeneous transformations, adopting the standard Denavit-Hartenberg convention. The function is implemented in C# and was preliminarily tested in Matlab. The outcome of this work is a substantial reduction of the algorithm's computation time, which is halved.
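Since the thesis centres on the inverse-kinematics routine, a generic numeric alternative is sketched below: damped least-squares IK for the end-effector position, driven by a finite-difference Jacobian over any forward-kinematics function (for instance the DH chain sketched earlier, wrapped as fk = lambda q: forward_kinematics(dh_table, q)). This is a hypothetical Python illustration, not the C# routine developed in the thesis.

```python
import numpy as np

def numeric_jacobian(fk, q, eps=1e-6):
    """Finite-difference Jacobian of the end-effector position w.r.t. the joints.

    fk: callable mapping a joint vector q to a 4x4 end-effector pose.
    """
    p0 = fk(q)[:3, 3]
    J = np.zeros((3, len(q)))
    for i in range(len(q)):
        dq = q.copy()
        dq[i] += eps
        J[:, i] = (fk(dq)[:3, 3] - p0) / eps
    return J

def ik_position(fk, q0, target, iters=200, damping=1e-2):
    """Damped least-squares IK for the end-effector position only."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = target - fk(q)[:3, 3]
        if np.linalg.norm(err) < 1e-5:
            break
        J = numeric_jacobian(fk, q)
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(3), err)
    return q
```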
Abstract:
This paper presents a robotic complex for drilling and milling sandwich shells made of polymer composites. The machining of polymer composite materials involves technological problems: when drilling sandwich shells, there is a risk of destroying the drill if the tool strikes an internal partition. A sensor-equipped robotic complex is proposed to increase the reliability of the small-sized cutting tool.