957 results for Robot control
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Pós-graduação em Educação para a Ciência - FC
Abstract:
This master's thesis presents the design and construction of a terrestrial mobile robot named LOGBOT, with a differential drive system: two driven wheels plus one free wheel that keeps the structure stable with respect to the surface. The robot offers two control modes, telemetry and autonomous. In telemetry (ROV) mode, the robot communicates with the control station over a radio-frequency link with a range of up to one kilometer outdoors and up to one hundred meters indoors. In autonomous (AGV) mode, the robot is able to navigate unknown indoor environments, always using the wall on its left as the reference for its trajectory. The sequence of movements executed along the trajectory is sent to the control station, which analyzes the robot's performance. To carry out its tasks in autonomous mode, the robot's software relies on a reactive intelligent agent that detects features of the environment (obstacles, the end of a wall, etc.) and decides which action the robot should take in order to avoid obstacles and control its wheel speeds. Odometric error and its correction based on external sensory information are treated in detail. Hierarchical control of the robot as a whole and closed-loop control of the robot's wheel speeds are employed. The results showed that the LOGBOT mobile robot is able to navigate corridor-shaped indoor environments (wall following) with stability and precision.
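The thesis itself publishes no code; the following minimal sketch, in Python for illustration, shows one way the left-wall-following reactive rule described above could look. All sensor names, thresholds, and speeds are hypothetical.

```python
# Minimal sketch of a left-wall-following reactive rule for a differential
# drive robot, as described in the abstract. Parameters are hypothetical.

def wall_following_step(left_dist, front_dist,
                        target=0.30, front_limit=0.25,
                        base_speed=0.2, gain=0.8):
    """Return (left_wheel, right_wheel) speeds for one control step."""
    if front_dist < front_limit:
        # Obstacle (or wall corner) ahead: rotate right in place.
        return base_speed, -base_speed
    # Steer to keep the left wall at the target distance:
    # too far -> slow the left wheel to turn toward the wall, and vice versa.
    correction = gain * (left_dist - target)
    return base_speed - correction, base_speed + correction

print(wall_following_step(left_dist=0.45, front_dist=1.0))  # veers back left
```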
Abstract:
Mobile robots are of particular interest in activities where the work of a human specialist is difficult or dangerous. They are often used for exploration in areas of difficult access, such as rescue operations and space missions, to avoid exposing human experts to risky situations. Mobile robots are also used in agriculture, both for planting tasks and for keeping pesticide application to minimal amounts in order to mitigate environmental pollution. In this paper we present the development of a system to control the navigation of an autonomous mobile robot through tracks in plantations. Track images are used to control the robot's direction: they are pre-processed to extract image features, which are then submitted to a support vector machine and to an artificial neural network in order to find the most appropriate route. The two approaches were compared to determine which one yields the best outcome. The overall goal of the project to which this work belongs is to develop a real-time robot control system embedded in a hardware platform. In this paper we report the software implementation of the support vector machine and of the artificial neural network, which so far achieved around 93% and 90% accuracy, respectively, in predicting the appropriate route.
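As a rough illustration of the comparison described above, the sketch below trains a support vector machine and a small neural network on synthetic feature vectors using scikit-learn. The data, labels, and model settings are placeholders, not the paper's actual features or architectures.

```python
# Sketch of an SVM-vs-ANN comparison on image-derived feature vectors.
# The feature vectors and route labels here are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 16))      # one feature vector per track image
y = rng.integers(0, 3, size=600)    # route classes: left / straight / right

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```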
Abstract:
The use of mobile robots in agriculture is of particular interest for cultivation tasks and for applying pesticides in minute quantities so as to reduce environmental pollution. In this paper we present the development of a system to control the navigation of an autonomous mobile robot through tracks in plantations. Track images are used to control the robot's direction: they are pre-processed to extract image features, which are then submitted to a support vector machine to find the most appropriate route. Since the overall goal of the project to which this work belongs is robot control in real time, the system will be embedded in a hardware platform. In this paper, however, we report the software implementation of the support vector machine, which so far achieved around 93% accuracy in predicting the appropriate route.
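The abstract does not specify which image features are extracted; the sketch below shows one plausible pre-processing step, assuming column-wise intensity statistics of a blurred, downsampled grayscale image, with a synthetic frame standing in for a camera image.

```python
# One plausible reading of the pre-processing step described above: reduce a
# track image to a small feature vector that an SVM can classify. The exact
# features used in the paper are not specified; these are illustrative.
import numpy as np
import cv2

def track_features(image_bgr, cols=16):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress leaf/soil noise
    small = cv2.resize(gray, (cols, cols))
    # Column-wise mean intensity roughly encodes where the track lies.
    return small.mean(axis=0) / 255.0

# Synthetic stand-in for a camera frame:
frame = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
print(track_features(frame))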
Abstract:
Traditional visual servoing systems have been widely studied in recent years. These systems control the position of a camera attached to the robot end-effector, guiding it from any position to the desired one. Such controllers can be improved by using the event-based control paradigm. The system proposed in this paper is based on the idea of activating the visual controller only when something significant has occurred in the system (e.g. when a visual feature may be lost because it is leaving the frame). Different event triggers have been defined in image space in order to activate or deactivate the visual controller. The tests implemented to validate the proposal show that this new scheme prevents visual features from leaving the image while reducing the system complexity considerably. In the future, events could also be used to change different parameters of the visual servoing system.
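The paper defines its event triggers in image space; the minimal sketch below assumes one such trigger, a safety margin near the image border, with hypothetical image size, margin, and feature stream.

```python
# Sketch of an image-space event trigger: the visual controller is activated
# when any tracked feature drifts into a margin near the image border.

def near_border(features, width=640, height=480, margin=40):
    """features: iterable of (u, v) pixel coordinates of tracked points."""
    return any(u < margin or v < margin or
               u > width - margin or v > height - margin
               for u, v in features)

feature_stream = [
    [(320, 240), (400, 260)],   # all features well inside the frame
    [(610, 240), (400, 260)],   # one feature entering the border margin
]
controller_active = False
for features in feature_stream:
    if near_border(features):
        controller_active = True   # event fires: run the visual control law
    print(controller_active)
```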
Abstract:
Event-based visual servoing is a recently presented approach that performs the positioning of a robot using visual information only when it is required. Starting from the classical image-based visual servoing control law, the scheme proposed in this paper can reduce the processing time of each loop iteration under certain conditions. The proposed control method comes into action when an event deactivates the classical image-based controller (i.e. when no image is available to track the visual features). A virtual camera is then moved along a straight-line path towards the desired position. The virtual path used to guide the robot improves on the behavior of the previous event-based visual servoing proposal.
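As a minimal sketch of the virtual-camera fallback, the code below advances a 3-D position along a straight line toward the desired pose while the image-based controller is inactive; the step size and poses are hypothetical, and full 6-DOF pose handling is omitted for brevity.

```python
# Sketch of the straight-line virtual-camera motion described above.
import numpy as np

def virtual_camera_step(current, desired, step=0.01):
    """Move `current` at most `step` metres along the line to `desired`."""
    direction = desired - current
    dist = np.linalg.norm(direction)
    if dist <= step:
        return desired.copy()
    return current + (step / dist) * direction

pose = np.array([0.0, 0.0, 0.5])
goal = np.array([0.2, 0.1, 0.3])
while not np.allclose(pose, goal):
    pose = virtual_camera_step(pose, goal)  # used while no image is available
print(pose)
```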
Abstract:
This thesis deals with the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research in robotics has advanced the state of the art in the intervention capabilities of autonomous systems. The state of the art in fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation, applied to ground environments (both indoor and outdoor), has now reached a readiness level that allows high-level autonomous operations. By contrast, the underwater environment remains a very difficult one for autonomous robots. Water influences the mechanical and electrical design of systems, interferes with sensors by limiting their capabilities, heavily impacts data transmission, and generally demands systems with low power consumption in order to enable reasonable mission durations. Interest in underwater applications is driven by the need to explore and intervene in environments where human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles, deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field throughout the mission, within a hostile environment. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be remotely operated for long periods, but they need the support of an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed in the last ten years and used as observers of underwater fauna, the seabed, shipwrecks, and so on. On the other hand, underwater operations such as object recovery and equipment maintenance are still challenging to conduct without human supervision, since they require object perception and localization with much higher accuracy and robustness, to a degree seldom available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose, configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects related to the peculiar characteristics of the environment have been taken into account during all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily impact perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet network. The adopted design philosophy aims at high flexibility in terms of feasible perception applications, which should not be as limited as with special-purpose, dedicated hardware. Flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering varying with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the proposed modular design makes the system easier to maintain and update over time.
The performance of the proposed system, in terms of perception capabilities, has been evaluated in several underwater contexts, taking advantage of the opportunity offered by the MARIS national project. Design issues such as power consumption, heat dissipation, and network capacity have been evaluated in different scenarios. Finally, real-world experiments, conducted in multiple and variable underwater contexts, including open sea waters, have led to the collection of several datasets that have been publicly released to the scientific community. The vision system has been integrated into a state-of-the-art AUV equipped with a robotic arm and gripper, and has been used in the robot control loop to successfully perform underwater grasping operations.
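As background to the stereo-vision platform described above, the sketch below illustrates the basic rectified-stereo geometry, Z = f * B / d; the focal length, baseline, and disparities are hypothetical values, not parameters of the MARIS system.

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
import numpy as np

f_px = 800.0        # focal length in pixels (illustrative)
baseline_m = 0.20   # distance between the two cameras, in metres

disparities = np.array([40.0, 16.0, 8.0])    # pixel disparities
depths = f_px * baseline_m / disparities     # metres: 4.0, 10.0, 20.0
print(depths)
```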
Abstract:
A survey of the existing state of the art in turbine blade manufacture highlights two operations that have not been automated: loading a turbine blade into an encapsulation die, and removing a machined blade from the encapsulation block. The automation of blade decapsulation has not been pursued. In order to develop a system to automate the loading of an encapsulation die, a prototype mechanical handling robot has been designed together with a computer-controlled encapsulation die. The robot has been designed as a mechanical handling robot of cylindrical geometry, suitable for use in a circular work cell. It is the prototype for a production model to be called 'The Cybermate'. The prototype robot is mechanically complete, but due to unforeseen circumstances the robot control system is not available (the development of the control system did not form part of this project); hence it has not been possible to fully test and assess the robot's mechanical design. Robot loading of the encapsulation die has therefore been simulated. The research work on the encapsulation die has focused on the development of computer-controlled, hydraulically actuated location pins. Such pins compensate for the inherent positional inaccuracy of the loading robot and reproduce the dexterity of the human operator. Each pin comprises a miniature hydraulic cylinder controlled by a standard bidirectional flow control valve. Precise positional control is obtained by pulsing the valves under software control, with positional feedback from an 8-bit transducer. A test rig comprising one hydraulic location pin together with an opposing spring-loaded pin has demonstrated that such a pin arrangement can be controlled with a repeatability of ±0.00045 in. In addition, this test rig has demonstrated that such a pin arrangement can be used to gauge and compensate for the dimensional error of the component held between the pins, by offsetting the pin datum positions to allow for the component error. A gauging repeatability of ±0.00015 in was demonstrated. This work has led to the design and manufacture of an encapsulation die comprising ten such pins and the associated computer software. All aspects of the control software except blade gauging and positional data storage have been demonstrated. Work is now required to achieve the accuracy of control demonstrated by the single-pin test rig with each of the ten pins in the encapsulation die. This would allow trials of the complete loading cycle to take place.
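The thesis does not include its control code; the sketch below illustrates the pulsed-valve positioning idea with feedback from an 8-bit transducer (0-255 counts). The one-count-per-pulse plant model, deadband, and pulse budget are hypothetical.

```python
# Sketch of pulsed-valve positioning of a location pin with 8-bit feedback.
def pulse_valve(direction, position):
    """Crude stand-in for the pin moving during one timed valve pulse."""
    return position + direction * 1           # one transducer count per pulse

def settle_pin(target, position, deadband=1, max_pulses=500):
    for _ in range(max_pulses):
        error = target - position
        if abs(error) <= deadband:            # within tolerance: stop pulsing
            break
        direction = 1 if error > 0 else -1    # extend or retract the cylinder
        position = pulse_valve(direction, position)
    return position

print(settle_pin(target=180, position=40))   # converges to ~180 counts
```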
Abstract:
The main objective of this work was to enable the recognition of human gestures through the development of a computer program. The program captures the gestures executed by the user through a camera attached to the computer and sends the robot the command corresponding to the gesture. In total, five gestures made by the human hand were interpreted. The software (developed in C++) makes extensive use of computer vision concepts and of the open-source library OpenCV, which directly affect the overall efficiency of the control of mobile robots. The computer vision concepts include the use of filters to smooth/blur the image for noise reduction, color spaces suited to the developer's working environment, and other information useful for manipulating digital images. The OpenCV library was essential to the project, since it provides functions/procedures for filters, image borders, image area, the geometric center of borders, conversion between color spaces, convex hull and convexity defects, plus all the means necessary for characterizing image features. Several problems appeared during the development of the software, such as false positives (noise), poor performance caused by the insertion of several filters with oversized masks, and problems arising from the choice of color space for processing human skin tones. However, after the development of seven versions of the control software, it was possible to minimize the occurrence of false positives through better use of filters combined with a well-dimensioned mask size (tested at run time), all supported by a programming logic that was refined over the construction of the seven versions. At the end of the development, the software met the established requirements. The overall effectiveness of the successive versions, in particular version V with 84.75%, version VI with 93.00%, and version VII with 94.67%, showed that the final program performed well in interpreting gestures, proving that it was possible to control the mobile robot through human gestures without the need for external accessories, giving it better mobility and lower maintenance costs. The great merit of the program is its capacity to help demystify the man/machine relationship, since it uses an easy and intuitive interface for the control of mobile robots. Another important feature is that it is not necessary to be close to the mobile robot in order to control it: it is only necessary to receive the address that the Robotino passes to the program via the network or Wi-Fi.
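As an illustration of the OpenCV pipeline the abstract enumerates (smoothing filter, color-space conversion, contours, convex hull, convexity defects), the sketch below runs those steps on a synthetic skin-colored blob. The HSV range, mask sizes, and defect threshold are assumptions, not the thesis's tuned values, and the original software was written in C++ rather than Python.

```python
# Sketch: blur, skin-color threshold, contour, convex hull, convexity defects.
import numpy as np
import cv2

frame = np.zeros((240, 320, 3), np.uint8)
# Hypothetical stand-in for a hand: a star-shaped, skin-colored blob.
pts = np.array([[160, 30], [185, 100], [250, 110], [195, 150],
                [215, 215], [160, 170], [105, 215], [125, 150],
                [70, 110], [135, 100]], np.int32)
cv2.fillPoly(frame, [pts], (120, 160, 220))            # BGR skin-like color

blur = cv2.GaussianBlur(frame, (5, 5), 0)              # noise reduction
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 30, 60), (25, 255, 255))   # rough skin range
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)
hull = cv2.convexHull(cnt, returnPoints=False)
defects = cv2.convexityDefects(cnt, hull)
# The number of deep convexity defects roughly counts spread fingers, which
# is one way to map a hand pose to one of five robot commands.
deep = 0 if defects is None else int((defects[:, 0, 3] > 2000).sum())
print("deep defects:", deep)
```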
Abstract:
Dragonflies demonstrate unique flight performance, superior to that of most other insect species and birds. They are equipped with two pairs of independently controlled wings, granting an unmatched flying performance and robustness. In this paper, the dynamics of a dragonfly-inspired robot is studied. The system performance is analyzed in terms of time response and robustness. A computational simulation based on the dynamics of the robotic dragonfly allows different control algorithms to be tested. We study different movements, the dynamics, and the level of dexterity in the wing motion of the dragonfly. The results support the construction of flying platforms that effectively mimic the kinematics and dynamics of dragonflies and potentially exhibit flight performance superior to that of existing flying platforms.
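The paper's simulation model is not reproduced here; as a generic illustration of simulating flapping-wing dynamics, the sketch below integrates one-dimensional body dynamics under a sinusoidal thrust model with explicit Euler. All parameters are hypothetical and not taken from the paper.

```python
# Toy vertical dynamics of a flapping-wing body, integrated with Euler steps.
import math

m, g, c = 0.003, 9.81, 0.02        # mass (kg), gravity, linear drag
freq, amp = 30.0, 0.1              # flapping frequency (Hz), peak thrust (N)

z, vz, dt = 0.0, 0.0, 1e-4
for step in range(int(2.0 / dt)):  # simulate 2 s of flight
    t = step * dt
    # Only the downstroke produces thrust in this toy model.
    thrust = amp * max(0.0, math.sin(2 * math.pi * freq * t))
    az = (thrust - c * vz) / m - g
    vz += az * dt
    z += vz * dt
print(f"height after 2 s: {z:.3f} m")
```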
Abstract:
This paper studies the performance of integer-order and fractional-order controllers in a hexapod robot whose leg joints exhibit viscous friction and flexibility. For that purpose, the robot's prescribed motion is characterized in terms of several locomotion variables. Controller performance is analyzed through the Nyquist stability criterion. A set of model-based experiments reveals the influence of the different controller implementations on the proposed metrics.
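For readers unfamiliar with fractional-order control, the sketch below shows what a fractional derivative means computationally, approximating the Grünwald-Letnikov derivative of order mu of a sampled signal; the test signal and order are illustrative and unrelated to the paper's controllers.

```python
# Grünwald-Letnikov approximation of the fractional derivative D^mu.
import math

def gl_fractional_derivative(signal, mu, h):
    """Approximate D^mu at the last sample of `signal` (sample time h)."""
    n = len(signal)
    coeff, total = 1.0, 0.0
    for k in range(n):
        total += coeff * signal[n - 1 - k]
        coeff *= (k - mu) / (k + 1)      # recurrence for (-1)^k * C(mu, k)
    return total / h**mu

h = 0.01
t = [k * h for k in range(200)]
x = [ti**2 for ti in t]                  # test signal x(t) = t^2
# Exact half derivative of t^2 is 2 t^1.5 / Gamma(2.5):
exact = 2 * t[-1]**1.5 / math.gamma(2.5)
print(gl_fractional_derivative(x, 0.5, h), exact)
```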
Abstract:
6th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, Catania, Italy, 17-19 September
Abstract:
This article applies concepts from artificial neural networks (ANN) to the control of an autonomous robot system intended for robot competitions. The robot was tested on several arbitrary paths in order to verify its effectiveness. The results show that the robot performed the tasks successfully. Moreover, on arbitrary paths the ANN control outperforms other methodologies, such as fuzzy logic control (FLC).
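The article does not describe its network; the sketch below assumes a toy setup in which a small scikit-learn network maps five simulated line sensors to a steering offset, just to make the ANN-control idea concrete. The sensor model, training data, and architecture are all hypothetical.

```python
# Toy ANN path controller: five binary line sensors -> steering offset.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# The target steering is the offset of the line under the sensor bar
# (negative = steer left, positive = steer right).
offsets = rng.uniform(-2, 2, size=1000)
X = np.array([[1.0 if abs(i - 2 - off) < 0.5 else 0.0 for i in range(5)]
              for off in offsets])
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
mlp.fit(X, offsets)
print(mlp.predict([[0, 0, 1, 0, 0]]))    # line centred -> steering near 0
```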