917 results for Robot Arm
Abstract:
This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
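As an illustration of how such a table can be used, below is a minimal sketch (not the authors' firmware, which runs on the STM32 target) of building a coarse 3D RGB look-up table from labeled fruit pixels and applying it as a pixel classifier; the bin count and occupancy threshold are assumptions.

# Illustrative sketch: classifying pixels as "fruit" with a coarse 3D
# look-up table defined over the RGB color space.
import numpy as np

BINS = 32  # assumed quantization: 32 levels per channel -> 32^3 cells

def build_lut_from_histogram(fruit_pixels, threshold=10):
    """Mark a LUT cell as 'fruit' when enough training pixels fall into it.
    fruit_pixels: (N, 3) uint8 array of RGB samples from labeled fruit regions."""
    lut = np.zeros((BINS, BINS, BINS), dtype=bool)
    idx = (fruit_pixels // (256 // BINS)).astype(int)
    counts = np.zeros_like(lut, dtype=int)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    lut[counts >= threshold] = True
    return lut

def classify(image_rgb, lut):
    """Return a boolean fruit mask for an RGB image (H x W x 3, uint8)."""
    q = image_rgb // (256 // BINS)
    return lut[q[..., 0], q[..., 1], q[..., 2]]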
Abstract:
The main objective of this master's thesis is to study robot programming using simulation software, and also how to embed the simulation software into a company's own robot-controlling software. A further goal is to study a new communication interface to the assembly line's components - more precisely, how to connect the robot cell to this new communication system. Conveyor lines where the conveyors use the new communication standard are already available. The robot cell, however, is not yet capable of communicating with other devices using the new communication protocols. The main problem among robot manufacturers is that they all have their own communication systems and programming languages. There was no common programming language for programming the robots of different manufacturers until the RRS (Realistic Robot Simulation) standards were developed. RRS-II makes it possible to create robot programs in the simulation software and provides a common user interface for robots from different manufacturers. This thesis presents the RRS-II standard and the robot manufacturers' current support for it. It also presents how the simulation software can be embedded into a company's own robot-controlling software and how the robot cell can be connected to the CAMX (Computer Aided Manufacturing using XML) communication system.
Abstract:
Recent studies have shown that a fake body part can be incorporated into the human body representation through synchronous multisensory stimulation on the fake and the corresponding real body part - the most famous example being the Rubber Hand Illusion. However, the extent to which gross asymmetries in the fake body can be assimilated remains unknown. Participants experienced, through a head-tracked stereo head-mounted display, a virtual body coincident with their real body. There were 5 conditions in a between-groups experiment, with 10 participants per condition. In all conditions there was visuo-motor congruence between the real and virtual dominant arm. In an Incongruent condition (I), where the virtual arm length was equal to the real length, there was visuo-tactile incongruence. In four Congruent conditions there was visuo-tactile congruence, but the virtual arm lengths were either equal to (C1), double (C2), triple (C3) or quadruple (C4) the real ones. Questionnaire scores and defensive withdrawal movements in response to a threat showed that the overall level of ownership was high in both C1 and I, with no significant difference between these conditions. Additionally, participants experienced ownership over the virtual arm up to three times the length of the real one, and less strongly at four times the length. The illusion did decline, however, with the length of the virtual arm. In the C2-C4 conditions, although a measure of proprioceptive drift positively correlated with virtual arm length, there was no correlation between the drift and ownership of the virtual arm, suggesting different underlying mechanisms for ownership and drift. Overall, these findings extend and enrich previous results showing that multisensory and sensorimotor information can reconstruct our perception of body shape, size and symmetry even when this is not consistent with normal body proportions.
Abstract:
The integration of the human brain with computers is an interesting new area of applied neuroscience, where one application is the replacement of a person's real body by a virtual representation. Here we demonstrate that a virtual limb can be made to feel part of your body if appropriate multisensory correlations are provided. We report an illusion that is invoked through tactile stimulation on a person's hidden real right hand with synchronous virtual visual stimulation on an aligned 3D stereo virtual arm projecting horizontally out of their shoulder. An experiment with 21 male participants showed displacement of ownership towards the virtual hand, as illustrated by questionnaire responses and proprioceptive drift. A control experiment with asynchronous tapping was carried out with a different set of 20 male participants, who did not experience the illusion. After 5 min of stimulation the virtual arm rotated. Evidence suggests that the extent of the illusion was also correlated with the degree of muscle activity onset in the right arm, as measured by EMG during the period in which the arm was rotating, for the synchronous but not the asynchronous condition. A completely virtual object can therefore be experienced as part of one's self, which opens up the possibility that an entire virtual body could be felt as one's own in future virtual reality applications or online games, and be an invaluable tool for the understanding of the brain mechanisms underlying body ownership.
Abstract:
BACKGROUND: The objective measurement of dominant/nondominant arm use proportion in daily life may provide relevant information on healthy and pathologic arm behavior. This prospective case-control study explored the potential of such measurements as indicators of upper limb functional recovery after rotator cuff surgery. METHODS: Data on dominant/nondominant arm usage were acquired with body-worn sensors for 7 hours. The postsurgical arm usage of 21 patients was collected at 3, 6, and 12 months after rotator cuff surgery in the sitting, walking, and standing postures and compared with a reference established with 41 healthy subjects. The results were calculated for the dominant and nondominant surgical side subgroups at all stages. The correlations with clinical scores were calculated. RESULTS: Healthy right-handed and left-handed dominant arm usage was 60.2% (±6.3%) and 53.4% (±6.6%), respectively. Differences in use of the dominant side were significant between the right- and left-handed subgroups for sitting (P = .014) and standing (P = .009) but not for walking (P = .328). The patient group showed a significant underuse of 10.7% (±8.9%) at 3 months after surgery (P < .001). The patients recovered normal arm usage within 12 months, regardless of surgical side. The arm underuse measurement was weakly related to function and pain scores. CONCLUSION: This study provided new information on arm recovery after rotator cuff surgery using an innovative measurement method. It highlighted that objective arm underuse measurement is a valuable indicator of upper limb postsurgical outcome that captures a complementary feature to clinical scores.
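As a rough illustration of how a dominant/nondominant usage proportion can be derived from body-worn sensor data, here is a minimal sketch; the epoch-based activity threshold and input format are assumptions, not the study's actual processing pipeline.

# Illustrative sketch: dominant-arm usage proportion from per-arm activity epochs.
import numpy as np

def usage_proportion(dominant_counts, nondominant_counts, active_threshold=0.05):
    """Percentage of 'active' epochs attributed to the dominant arm.

    Both inputs are per-epoch activity magnitudes (e.g. mean acceleration norm
    per 1-s window); an epoch counts as arm use when it exceeds the threshold."""
    dom_active = np.asarray(dominant_counts) > active_threshold
    nondom_active = np.asarray(nondominant_counts) > active_threshold
    total = dom_active.sum() + nondom_active.sum()
    return 100.0 * dom_active.sum() / total if total else np.nan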
Abstract:
The objective of this work is to virtually emulate the working environment of the Stäubli Tx60 robot located in the robotics laboratory of the UdG (within the possibilities offered by the acquired software). This laboratory aims to reproduce an industrial working environment in which the assembly of a part set is carried out in a fully automated way. In a first phase, the entire working environment available in the laboratory has been designed in three dimensions using the SolidWorks CAD software. Each of the assemblies that make up the workstation has been designed independently. Subsequently, all the designed elements are imported into the Stäubli Robotics Suite 2013 software. Given the above, it should be noted that the main objective of the work consists of two stages. Initially, the 3D model of the working environment is designed with SolidWorks and imported into Stäubli Robotics Suite 2013. In a second stage, a user manual for the new robotics software is produced.
Abstract:
Creation of a system consisting of a genetic algorithm that makes it possible to automatically design the linguistic-value data of a fuzzy controller for a differential-drive robot. The data obtained must give the robot the ability to reach a destination while avoiding the obstacles it encounters along the way.
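A minimal sketch of such a genetic algorithm is shown below; the genome layout, selection scheme and placeholder fitness function are assumptions rather than the thesis implementation, where fitness would come from simulating the fuzzy-controlled robot.

# Illustrative sketch: a tiny genetic algorithm tuning the membership-function
# centres of a fuzzy obstacle-avoidance controller.
import random

GENOME_LEN = 12          # e.g. centres of 12 linguistic values (assumed layout)
POP_SIZE, GENERATIONS = 30, 50

def simulate_run(genome):
    # Placeholder fitness: in the real system this would run the robot (or a
    # simulator) with the fuzzy controller built from `genome`.
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve():
    pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(pop, key=simulate_run, reverse=True)
        parents = scored[: POP_SIZE // 2]               # truncation selection
        children = []
        while len(children) < POP_SIZE - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                   # mutation
                child[random.randrange(GENOME_LEN)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=simulate_run)

best = evolve()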
Abstract:
Throughout the evolution of technology, devices interconnected by cables have been used. Cables limit the user's freedom of movement and can pick up interference from one another when the amount of wiring is large. As wireless technology advanced, it was adapted to electronic equipment while that equipment became ever smaller. This creates the need to use such devices as remote controls, without cables, given the drawbacks that cables entail. The present work aims to bring together three technologies that may have a strong affinity in the future:
· Android-based devices. Since their beginnings they have evolved at a meteoric pace, becoming ever faster and better.
· Wireless systems. Wi-Fi and Bluetooth have steadily become part of our lives and are present in practically every device.
· Robotics. Every production process incorporates a robot. Robots are needed for many jobs that, although a human could perform them, a robot completes in less time and with less risk.
Although the first two technologies go hand in hand (who does not own a phone with Wi-Fi and Bluetooth?), few designs combine these fields with robotics. The final objective of this work is to build an Android application for the remote control of a robot using a wireless communication system. The developed application lets the user control the robot at will through a touch/remote-control interface. Thanks to the use of simulators for both languages (RAPID and Android), it was possible to carry out the programming without being physically present at the robot targeted by this work. As the project progressed, the amount of data sent to the robot and the complexity of its processing grew, while the look of the application was also improved. Finally, the developed application was used with the robot, successfully making it execute the movements sent from the programmed tablet.
Abstract:
Over the last decades, calibration techniques have been widely used to improve the accuracy of robots and machine tools, since they only involve software modification instead of changing the design and manufacture of the hardware. Traditionally, four steps are required for a calibration: error modeling, measurement, parameter identification and compensation. The objective of this thesis is to propose a method for the kinematics analysis and error modeling of a newly developed hybrid redundant robot, the IWR (Intersector Welding Robot), which possesses ten degrees of freedom (DOF): 6 DOF in parallel and an additional 4 DOF in serial. The problems of kinematics modeling and error modeling of the proposed IWR robot are discussed. Based on the vector arithmetic method, the kinematics model and the sensitivity model of the end-effector with respect to the structure parameters are derived and analyzed. The relations between the pose (position and orientation) accuracy and manufacturing tolerances, actuation errors, and connection errors are formulated. Computer simulation is performed to examine the validity and effectiveness of the proposed method.
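To make the idea of a sensitivity model concrete, the following sketch estimates, by finite differences, how small errors in the structural parameters propagate to the end-effector pose; the two-link planar forward kinematics is a hypothetical stand-in, not the vector-arithmetic derivation used for the IWR robot.

# Illustrative sketch: numerical pose sensitivity with respect to structure parameters.
import numpy as np

def forward_kinematics(params):
    """Hypothetical 2-link planar stand-in: params = [l1, l2, q1, q2] -> (x, y)."""
    l1, l2, q1, q2 = params
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.array([x, y])

def sensitivity(params, eps=1e-6):
    """Finite-difference Jacobian d(pose)/d(params)."""
    params = np.asarray(params, dtype=float)
    base = forward_kinematics(params)
    J = np.zeros((base.size, params.size))
    for i in range(params.size):
        p = params.copy()
        p[i] += eps
        J[:, i] = (forward_kinematics(p) - base) / eps
    return J

# Pose error caused by small structural/actuation errors d_params:
# d_pose ~= sensitivity(params) @ d_params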
Abstract:
Robotic platforms have advanced greatly in terms of their remote sensing capabilities, including obtaining optical information using cameras. Alongside these advances, visual mapping has become a very active research area, as it enables the mapping of areas inaccessible to humans. This requires efficient processing of the data to increase the final mosaic quality and the computational efficiency. In this paper, we propose an efficient image mosaicing algorithm for large-area visual mapping in underwater environments using multiple underwater robots. Our method identifies overlapping image pairs in the trajectories carried out by the different robots during the topology estimation process, this being a cornerstone for efficiently mapping large areas of the seafloor. We present comparative results based on challenging real underwater datasets that simulate multi-robot mapping.
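A minimal sketch of one way to test whether two images form an overlapping pair (a feature-matching check with ORB descriptors) is given below; the detector choice, match-distance gate and threshold are assumptions, not the paper's topology-estimation algorithm.

# Illustrative sketch: overlap test between two underwater images via feature matching.
import cv2

def overlaps(img_a, img_b, min_matches=25):
    """Return True when img_a and img_b share enough matched features
    to be considered an overlapping pair in the mosaic topology."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    good = [m for m in matches if m.distance < 50]  # crude descriptor-distance gate
    return len(good) >= min_matches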
Abstract:
This paper proposes the use of an autonomous assistant mobile robot to monitor the environmental conditions of a large indoor area and develop an ambient intelligence application. The mobile robot uses single high-performance embedded sensors to collect and geo-reference environmental information such as ambient temperature, air velocity and orientation, and gas concentration. The data collected with the assistant mobile robot are analyzed in order to detect unusual measurements or discrepancies and to develop focused corrective ambient actions. This paper shows an example of the measurements performed in a research facility, which enabled the detection and location of an uncomfortable temperature profile inside an office of the facility. The ambient intelligence application has been developed by performing localized ambient measurements, which were analyzed in order to propose ambient actuations to correct the uncomfortable temperature profile.
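The following sketch illustrates one simple way to flag discrepancies in geo-referenced temperature readings; the comfort band and outlier threshold are assumptions, not the analysis reported in the paper.

# Illustrative sketch: flagging geo-referenced temperature readings that deviate
# from a building-wide comfort band or from the overall mean.
import numpy as np

def flag_discrepancies(samples, comfort=(20.0, 24.0), z_thresh=2.5):
    """samples: list of (x, y, temperature_degC) taken by the mobile robot.
    Returns the samples outside the comfort band or far from the overall mean."""
    temps = np.array([t for _, _, t in samples])
    mean, std = temps.mean(), temps.std() or 1.0
    flagged = []
    for (x, y, t) in samples:
        out_of_band = not (comfort[0] <= t <= comfort[1])
        outlier = abs(t - mean) / std > z_thresh
        if out_of_band or outlier:
            flagged.append((x, y, t))
    return flagged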
Abstract:
Sensor-based robot control allows manipulation in dynamic environments with uncertainties. Vision is a versatile, low-cost sensory modality, but its low sample rate, high sensor delay and uncertain measurements limit its usability, especially in strongly dynamic environments. Force is a complementary sensory modality that allows accurate measurement of local object shape when a tooltip is in contact with the object. In multimodal sensor fusion, several sensors measuring different modalities are combined to give a more accurate estimate of the environment. As force and vision are fundamentally different sensory modalities that do not share a common representation, combining the information from these sensors is not straightforward. In this thesis, methods for fusing proprioception, force and vision together are proposed. By making assumptions about the object shape and modeling the uncertainties of the sensors, the measurements can be fused in an extended Kalman filter. The fusion of force and visual measurements makes it possible to estimate the pose of a moving target with an end-effector-mounted moving camera at high rate and accuracy. The proposed approach takes the latency of the vision system into account explicitly in order to provide high-sample-rate estimates. The estimates also allow a smooth transition from vision-based motion control to force control. The velocity of the end-effector can be controlled by estimating the distance to the target by vision and determining the velocity profile that gives a rapid approach and minimal force overshoot. Experiments with a 5-degree-of-freedom parallel hydraulic manipulator and a 6-degree-of-freedom serial manipulator show that integration of several sensor modalities can increase the accuracy of the measurements significantly.
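As a concrete illustration of fusing two such modalities, here is a minimal sketch of one predict/update cycle of an extended Kalman filter combining a noisy, low-rate vision position measurement with an accurate force-derived position measurement for a one-dimensional target state; the models and noise values are assumptions, not the thesis implementation.

# Illustrative sketch: EKF-style fusion of vision and force position measurements.
import numpy as np

def ekf_step(x, P, z_vision, z_force, dt=0.02,
             q=1e-3, r_vision=4e-2, r_force=1e-3):
    """x = [position, velocity], P = 2x2 covariance.
    Vision observes position (noisy, low rate); force observes position at
    contact (accurate, local). Either measurement may be None (not available)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])           # constant-velocity prediction
    x = F @ x
    P = F @ P @ F.T + q * np.eye(2)

    for z, r in ((z_vision, r_vision), (z_force, r_force)):
        if z is None:
            continue
        H = np.array([[1.0, 0.0]])                  # both sensors observe position
        S = H @ P @ H.T + r
        K = P @ H.T / S                             # Kalman gain (scalar innovation)
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, z_vision=0.12, z_force=None)  # vision-only update this cycle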
Abstract:
Deflection compensation of flexible boom structures in robot positioning is usually done using tables containing the magnitude of the deflection with inverse kinematics solutions of a rigid structure. The number of table values increases greatly if the working area of the boom is large and the required positioning accuracy is high. The inverse kinematics problems are very nonlinear, and if the structure is redundant, in some cases it cannot be solved in a closed form. If the structural flexibility of the manipulator arms is taken into account, the problem is almost impossible to solve using analytical methods. Neural networks offer a possibility to approximate any linear or nonlinear function. This study presents four different methods of using neural networks in the static deflection compensation and inverse kinematics solution of a flexible hydraulically driven manipulator. The training information required for training neural networks is obtained by employing a simulation model that includes elasticity characteristics. The functionality of the presented methods is tested based on the simulated and measured results of positioning accuracy. The simulated positioning accuracy is tested in 25 separate coordinate points. For each point, the positioning is tested with five different mass loads. The mean positioning error of a manipulator decreased from 31.9 mm to 4.1 mm in the test points. This accuracy enables the use of flexible manipulators in the positioning of larger objects. The measured positioning accuracy is tested in 9 separate points using three different mass loads. The mean positioning error decreased from 10.6 mm to 4.7 mm and the maximum error from 27.5 mm to 11.0 mm.
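The sketch below illustrates the general idea of learning a static deflection correction from simulated data with a small neural network; the feature set, toy deflection law and network size are assumptions, not the four methods studied here.

# Illustrative sketch: learning a static deflection correction for a flexible
# manipulator from simulated training data.
# Inputs: desired tool position and payload mass; output: joint-space correction.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set from an elastic simulation model:
# features = [x_target, y_target, payload_kg], targets = [dq1, dq2] corrections.
X = rng.uniform([-2.0, 0.0, 0.0], [2.0, 3.0, 500.0], size=(2000, 3))
y = np.column_stack([                   # toy deflection law, standing in for the
    1e-4 * X[:, 2] * X[:, 0],           # simulated elasticity characteristics
    -5e-5 * X[:, 2] * X[:, 1],
])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, y)

# At run time: command the rigid inverse kinematics, then add the predicted
# joint correction for the requested position and payload.
correction = net.predict([[1.2, 1.8, 250.0]])[0]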