931 results for Visual robot control
Abstract:
This paper presents a model of a control system for robots inspired by the functionality and organisation of the human neuroregulatory system. The model was specified using software agents within a formal framework and implemented through Web Services. This approach allows the control logic of a robot system to be implemented with relative ease and incrementally, by adding new control centres to the system as its behaviour is observed or needs to be specified with greater precision, without modifying existing functionality. The tests performed verify that the proposed model exhibits the general characteristics of biological systems together with desirable software qualities such as robustness, flexibility, reuse and decoupling.
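As a rough, hypothetical illustration of this incremental style (the class and method names below are not from the paper, which uses software agents and Web Services), new control centres can be registered without modifying existing ones:

```python
# A minimal sketch of incremental composition of control centres;
# names are illustrative, not the paper's agent/Web Service API.

class ControlSystem:
    def __init__(self):
        self.centres = []            # ordered list of control centres

    def add_centre(self, centre):
        """New behaviour is added without touching existing centres."""
        self.centres.append(centre)

    def step(self, stimulus):
        # each centre reacts to the stimulus and contributes commands
        command = {}
        for centre in self.centres:
            command.update(centre.react(stimulus))
        return command

class ReflexCentre:
    def react(self, stimulus):
        return {"stop": stimulus.get("obstacle", False)}

system = ControlSystem()
system.add_centre(ReflexCentre())        # incremental refinement
print(system.step({"obstacle": True}))   # {'stop': True}
```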
Abstract:
Traditional visual servoing systems have been widely studied in recent years. These systems control the position of a camera attached to the robot end-effector, guiding it from any position to the desired one. Such controllers can be improved by using the event-based control paradigm. The system proposed in this paper is based on the idea of activating the visual controller only when something significant has occurred in the system (e.g. when a visual feature may be lost because it is leaving the image frame). Different event triggers have been defined in the image space to activate or deactivate the visual controller. The tests implemented to validate the proposal show that this new scheme prevents visual features from leaving the image while considerably reducing system complexity. Events could also be used in the future to change other parameters of visual servoing systems.
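A minimal sketch of one such image-space trigger, assuming features are tracked as pixel coordinates and that the event fires when any feature enters a safety margin near the image border (the margin width and function names are illustrative, not the paper's exact triggers):

```python
import numpy as np

def border_event(features, width, height, margin=20):
    """Return True when any feature point enters the border margin,
    i.e. when it risks leaving the image and the visual controller
    should be activated."""
    u, v = features[:, 0], features[:, 1]
    return bool(np.any((u < margin) | (u > width - margin) |
                       (v < margin) | (v > height - margin)))

# Example: a 640x480 image with one feature drifting toward the edge.
features = np.array([[320.0, 240.0], [630.0, 200.0]])
if border_event(features, 640, 480):
    print("event: activate visual controller")
```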
Abstract:
Event-based visual servoing is a recently proposed approach that performs the positioning of a robot using visual information only when it is required. Starting from the classical image-based visual servoing control law, the scheme proposed in this paper can reduce the processing time of each loop iteration under specific conditions. The proposed control method comes into action when an event deactivates the classical image-based controller (i.e. when no image is available to track the visual features). A virtual camera is then moved along a straight-line path towards the desired position. The virtual path used to guide the robot improves the behavior of the previous event-based visual servoing proposal.
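The virtual-camera idea can be sketched as a simple interpolation of the camera pose while no image is available; here only the translation is interpolated, and the names and step counts are illustrative rather than the paper's formulation:

```python
import numpy as np

def virtual_camera_path(p_start, p_goal, steps):
    """Linearly interpolate the virtual camera position from the pose
    held when the image was lost (p_start) to the desired pose (p_goal).
    Orientation could be interpolated analogously (e.g. with slerp)."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * p_start + t * p_goal

# While no image is available, the controller tracks the virtual poses.
for p in virtual_camera_path(np.array([0.2, 0.0, 0.5]),
                             np.array([0.0, 0.1, 0.3]), steps=5):
    print(p)   # each virtual pose feeds the image-based control law
```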
Abstract:
The control and coordination of multiple mobile robots is a challenging task, particularly in environments with multiple, rapidly moving obstacles and agents. This paper describes a robust approach to multi-robot control, where robustness is gained from competency at every layer of robot control. The layers are: (i) a central coordination system (MAPS), (ii) an action execution system (AES), (iii) a navigation module, and (iv) a low-level dynamic motion control system. The multi-robot coordination system assigns each robot a role and a sub-goal. Each robot's action execution system then assumes the assigned role and attempts to achieve the specified sub-goal. The robot's navigation system directs the robot to specific goal locations while ensuring that it avoids any obstacles. The motion system maps the heading and speed commands from the navigation system to force-constrained motion. This multi-robot system has been extensively tested and applied in the robot soccer domain using both centralized and distributed coordination.
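A minimal sketch of how the four layers could fit together, assuming a simple data flow from role assignment down to force-constrained motion; the function names and numbers are illustrative, not the MAPS/AES interfaces:

```python
import math
from dataclasses import dataclass

@dataclass
class Assignment:
    role: str
    subgoal: tuple          # (x, y) target location

def coordinate(robot_ids, ball):            # (i) central coordination
    return {r: Assignment("attacker" if r == 0 else "defender", ball)
            for r in robot_ids}

def navigate(pose, goal):                   # (iii) navigation
    # head toward the sub-goal; obstacle avoidance omitted in this sketch
    heading = math.atan2(goal[1] - pose[1], goal[0] - pose[0])
    return heading, 1.0                     # heading [rad], speed [m/s]

def motion_control(heading, speed, max_force=5.0):   # (iv) motion
    # map the commanded heading/speed to force-constrained motion
    f = min(max_force, speed)
    return f * math.cos(heading), f * math.sin(heading)

# (ii) each robot's action execution system pursues its assigned sub-goal:
for rid, a in coordinate([0, 1], ball=(2.0, 1.0)).items():
    h, s = navigate((0.0, 0.0), a.subgoal)
    print(rid, a.role, motion_control(h, s))
```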
Abstract:
The paper addresses the problem of developing autonomous intelligent robots for complex environments. Specifically, it outlines a knowledge-based robot control architecture that combines several techniques in order to provide the ability to adapt and act autonomously in complex environments. The described architecture has been implemented as a robotic system that demonstrates its operation in a dynamic environment. Although the robotic system demonstrates a certain level of autonomy, the experiments show that there are situations in which the base architecture should be complemented with additional modules. The final sections of the paper describe the experimental results and the current state of further research on the architecture.
Abstract:
The paper deals with the problem of designing intelligent systems for complex environments. It discusses the possibility of integrating several technologies into one basic structure that could form the kernel of an autonomous intelligent robotic system. An alternative structure is proposed to form the basis of an intelligent system able to operate in complex environments. The proposed structure is very flexible because its features allow adaptation through learning and adjustment of the knowledge used. It may therefore be applied in environments with stochastic features, such as hard-to-predict events or elements. The basic elements of the proposed structure have been implemented in a software system and an experimental robotic system, both of which are presented in the paper along with their basic features. Both systems have been used in experiments to validate the proposed structure, namely its functionality, flexibility and reliability. The most important experimental results are outlined and discussed at the end of the paper, where some possible directions of further research are also sketched.
Abstract:
Jerne's idiotypic network theory postulates that the immune response involves inter-antibody stimulation and suppression as well as matching to antigens. The theory has proved the most popular Artificial Immune System (AIS) model for incorporation into behavior-based robotics, but guidelines for implementing idiotypic selection are scarce. Furthermore, the direct effects of employing the technique have not been demonstrated in the form of a comparison with non-idiotypic systems. This paper aims to address these issues. A method for integrating an idiotypic AIS network with a Reinforcement Learning (RL) based control system is described, and the mechanisms underlying antibody stimulation and suppression are explained in detail. Some hypotheses that account for the network advantage are put forward and tested using three systems of increasing idiotypic complexity: the basic RL system, a simplified hybrid AIS-RL that implements idiotypic selection independently of derived concentration levels, and a full hybrid AIS-RL scheme. The test bed takes the form of a simulated Pioneer robot that is required to navigate through maze worlds, detecting and tracking door markers.
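A minimal sketch of Farmer-style idiotypic concentration dynamics of the kind such hybrid schemes build on, with stimulation, suppression, antigen affinity and death terms; the parameter names and values are illustrative, not the paper's exact equations:

```python
import numpy as np

def idiotypic_update(conc, match, antigen_affinity,
                     b=0.1, k1=0.5, k2=0.05):
    """One Euler step of an idiotypic network: each antibody's
    concentration grows with stimulation (being matched by others,
    matching the antigen) and shrinks with suppression and death."""
    stimulation = match.T @ conc      # antibodies that recognise i
    suppression = match @ conc        # antibodies that i recognises
    dconc = conc * (stimulation - k1 * suppression
                    + antigen_affinity - k2)
    return np.clip(conc + b * dconc, 0.0, None)

def select_antibody(conc, antigen_affinity):
    """Pick the behaviour whose concentration-weighted match to the
    current antigen (environmental state) is strongest."""
    return int(np.argmax(conc * antigen_affinity))
```

Selecting with and without the concentration weighting loosely mirrors the contrast between the full hybrid scheme and the simplified one that applies idiotypic selection independently of derived concentration levels.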
Abstract:
This article describes new visual servo control strategies used to carry out dynamic tasks on the Robotenis platform. The platform is essentially a parallel robot equipped with a system for acquiring and processing visual information; its main feature is a completely open control architecture, planned for designing, implementing, testing and comparing control strategies and algorithms (visual and actuated-joint controllers). The following sections describe a new visual control strategy specially designed to track and intercept objects in 3D space. The results are compared with a controller presented in previous works, where the end-effector of the robot keeps a constant distance from the tracked object. In this work, the controller is designed to allow changes in the tracking reference. Such changes can be used to grasp a moving object or, as in this case, to hit a hanging ping-pong ball. Lyapunov stability is taken into account in the controller design.
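A minimal sketch of a tracking law with a time-varying reference distance, assuming position and velocity estimates of the object are available; this is an illustrative proportional-plus-feedforward scheme, not the article's Lyapunov-designed controller:

```python
import numpy as np

def tracking_control(p_obj, v_obj, p_ee, d_ref, kp=2.0):
    """Velocity command keeping the end-effector at a (possibly
    time-varying) reference distance d_ref from the tracked object
    along the current line of sight."""
    direction = p_ee - p_obj
    direction = direction / np.linalg.norm(direction)
    p_des = p_obj + d_ref * direction    # desired end-effector position
    return v_obj + kp * (p_des - p_ee)   # feedforward + proportional

# Shrinking d_ref to zero drives the end-effector into the object,
# e.g. to hit the hanging ball instead of merely following it.
v_cmd = tracking_control(np.array([1.0, 0.0, 0.5]), np.zeros(3),
                         np.array([0.7, 0.0, 0.5]), d_ref=0.1)
```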
Abstract:
This paper presents a new dynamic visual control system for redundant robots with chaos compensation. To implement the visual servoing system, a new architecture is proposed that improves the system's maintainability and traceability. Furthermore, high performance is obtained as a result of the parallel execution of the different tasks that compose the architecture. The control component of the architecture implements a new visual servoing technique that resolves the redundancy at the acceleration level in order to guarantee the correct motion of both the end-effector and the joints. The controller generates the torques required for tracking image trajectories. However, to guarantee the applicability of this technique, a repetitive path tracked by the robot end-effector must produce a periodic joint motion. A chaos controller is therefore integrated into the visual servoing system, and correct performance is observed at both low and high velocities. Furthermore, a method to adjust the chaos controller is proposed and validated using a real three-link robot.
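Acceleration-level redundancy resolution is commonly written with the task Jacobian pseudoinverse and a null-space projection; the sketch below shows this textbook form, not necessarily the paper's exact law, which also embeds the chaos controller:

```python
import numpy as np

def redundant_accel(J, Jdot, qdot, xddot_des, qddot_null):
    """Joint accelerations that track the desired task-space
    acceleration while projecting a secondary objective (e.g. one
    enforcing periodic joint motion) into the Jacobian null space."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J            # null-space projector
    return J_pinv @ (xddot_des - Jdot @ qdot) + N @ qddot_null
```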
Abstract:
The thesis covers the comparison and selection of control hardware for a parallel-structure robot, together with the formulation of the principles of a condition monitoring system for that robot. The control hardware comprises an industrial computer and a fieldbus. Both the computer and the bus are covered by a theory section and a more detailed selection section: the theory section explains the operating principles of the devices, and the selection section explains why a particular device was chosen for controlling the robot. Condition monitoring theory and the condition monitoring methods for the parallel robot form the second part of the thesis. The theory part contains a general account of failure mechanisms and monitoring. The condition monitoring methods for this special robot are presented in a specific order: first the possible fault conditions are described, and then the detection of the faults is illustrated.
Abstract:
Sensor-based robot control allows manipulation in dynamic environments with uncertainties. Vision is a versatile, low-cost sensory modality, but its low sample rate, high sensor delay and uncertain measurements limit its usability, especially in strongly dynamic environments. Force is a complementary sensory modality that allows accurate measurement of local object shape when a tooltip is in contact with the object. In multimodal sensor fusion, several sensors measuring different modalities are combined to give a more accurate estimate of the environment. As force and vision are fundamentally different sensory modalities that do not share a common representation, combining the information from these sensors is not straightforward. In this thesis, methods for fusing proprioception, force and vision are proposed. By making assumptions about object shape and modeling the uncertainties of the sensors, the measurements can be fused in an extended Kalman filter. The fusion of force and visual measurements makes it possible to estimate the pose of a moving target with an end-effector-mounted moving camera at a high rate and with high accuracy. The proposed approach takes the latency of the vision system into account explicitly, providing high-sample-rate estimates. The estimates also allow a smooth transition from vision-based motion control to force control. The velocity of the end-effector can be controlled by estimating the distance to the target by vision and determining a velocity profile that gives rapid approach and minimal force overshoot. Experiments with a 5-degree-of-freedom parallel hydraulic manipulator and a 6-degree-of-freedom serial manipulator show that integrating several sensor modalities can increase the accuracy of the measurements significantly.
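A minimal sketch of the kind of Kalman-filter fusion described, with a generic predict/update pair; the constant-velocity model, matrix sizes and noise values are illustrative, and the thesis's explicit latency handling (e.g. re-filtering from the image capture time) is only noted in a comment:

```python
import numpy as np

def predict(x, P, F, Q):
    """Propagate the state estimate and covariance one step ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Fuse one measurement z with measurement model z = H x + noise(R)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    return x, (np.eye(len(x)) - K @ H) @ P

# 1-D position/velocity example at 100 Hz; high-rate force/proprioceptive
# updates keep the estimate fresh, while the slower, delayed vision
# update corrects drift when it arrives.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
x, P = np.zeros(2), np.eye(2)
x, P = predict(x, P, F, 1e-5 * np.eye(2))
x, P = update(x, P, np.array([0.02]),            # force/contact reading
              np.array([[1.0, 0.0]]), np.array([[1e-4]]))
x, P = update(x, P, np.array([0.018]),           # delayed vision reading
              np.array([[1.0, 0.0]]), np.array([[1e-2]]))
```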
Abstract:
A brain-computer interface (BCI) is a kind of human-machine interface that provides a new way for humans to interact with computers or other equipment. The most significant characteristic of a BCI system is that its control input is electrical brain activity rather than traditional input from the hands or eyes. BCI technology has developed rapidly during the last two decades, mainly as an assistive technique that helps disabled people improve their quality of life. With the appearance of novel low-cost devices such as EMOTIV, BCI techniques have reached the general public through applications including video gaming, virtual reality and virtual keyboards. The purpose of this research is to become familiar with the EMOTIV EPOC system and use it to build an EEG-based BCI system for controlling an industrial manipulator by means of human thought. To build the BCI system, an acquisition program based on the EMOTIV EPOC system is designed, and an MFC-based dialog that serves as an operation panel is presented. Furthermore, the inverse kinematics of the RV-3SB industrial robot is solved. In the last part of this research, the designed BCI system is tested with thought input; the results indicate that the system runs smoothly and clearly displays the motion type and the incremental displacement of the motion.
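As an illustration of thought-driven incremental motion, the sketch below maps classified mental commands to small Cartesian displacements that would then be fed to an inverse kinematics solver; the command names, step size and interface are hypothetical, not the thesis's EMOTIV or RV-3SB code:

```python
import numpy as np

# Hypothetical mapping from classified mental commands to incremental
# Cartesian displacements of the manipulator's end-effector.
STEP = 0.01  # metres per recognised command (illustrative value)
COMMANDS = {
    "push":  np.array([STEP, 0.0, 0.0]),
    "pull":  np.array([-STEP, 0.0, 0.0]),
    "left":  np.array([0.0, STEP, 0.0]),
    "right": np.array([0.0, -STEP, 0.0]),
}

def apply_command(p_ee, command):
    """Translate one classified EEG command into a new target position;
    unknown commands leave the target unchanged. The target would then
    be passed to the inverse kinematics routine."""
    return p_ee + COMMANDS.get(command, np.zeros(3))

p = np.array([0.4, 0.0, 0.3])
p = apply_command(p, "push")
print(p)   # new target for the RV-3SB IK solver
```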