956 results for Robot vision systems
Abstract:
This paper studies periodic gaits of multi-legged locomotion systems based on dynamic models. The purpose is to determine the system performance during walking and the best set of locomotion variables. To that end, the prescribed motion of the robot is completely characterized in terms of several locomotion variables, such as gait, duty factor, body height, step length, stroke pitch, foot clearance, leg link lengths, foot-hip offset, body and leg masses, and cycle time. In this perspective, we formulate three performance measures of the walking robot, namely the mean absolute energy, the mean power dispersion and the mean power lost in the joint actuators, per walking distance. A set of model-based experiments reveals the influence of the locomotion variables on the proposed indices.
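The abstract does not define the three indices; a plausible formulation, consistent with per-distance energy measures commonly used for legged robots, is sketched below. The symbols (joint torques, joint velocities, cycle time, distance per cycle) and the quadratic-torque model for actuator losses are assumptions, not taken from the paper.

```latex
% Hedged sketch: per-distance performance indices for an n-joint legged robot,
% with joint torques \tau_i, joint velocities \dot{\theta}_i, cycle time T and
% distance d travelled per cycle (the paper's exact definitions may differ).
\begin{align*}
E_{av} &= \frac{1}{d}\int_{0}^{T}\sum_{i=1}^{n}\bigl|\tau_i(t)\,\dot{\theta}_i(t)\bigr|\,dt
  && \text{mean absolute energy per distance}\\
D_{av} &= \frac{1}{d}\sqrt{\frac{1}{T}\int_{0}^{T}\bigl(P(t)-\bar{P}\bigr)^{2}\,dt},
  \qquad P(t)=\sum_{i=1}^{n}\bigl|\tau_i(t)\,\dot{\theta}_i(t)\bigr|
  && \text{mean power dispersion per distance}\\
P_{L} &= \frac{1}{d}\int_{0}^{T}\sum_{i=1}^{n}\tau_i^{2}(t)\,dt
  && \text{mean power lost in the actuators (Joule-type losses) per distance}
\end{align*}
```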
Abstract:
Many tasks involving manipulation require cooperation between robots. At the same time, it is necessary to determine adequate values for the robot parameters in order to obtain good performance. This paper discusses several aspects related to the manipulability of two cooperative robots when handling objects with different lengths and orientations. In this line of thought, a numerical tool is developed for the calculation and graphical visualization of the manipulability measure.
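The abstract does not state which manipulability measure is used; a common choice is Yoshikawa's measure w = sqrt(det(J J^T)). The sketch below computes it only for a single planar two-link arm, not for the two-robot cooperative case, and the link lengths and joint angles are illustrative.

```python
import numpy as np

def jacobian_2link(theta1, theta2, l1, l2):
    """Planar 2-link arm Jacobian (end-effector x, y w.r.t. the joint angles)."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(J):
    """Yoshikawa's manipulability measure w = sqrt(det(J J^T))."""
    return np.sqrt(np.linalg.det(J @ J.T))

# Sweep one joint to see how the measure varies with the arm configuration.
for theta2 in np.linspace(0.1, np.pi - 0.1, 5):
    J = jacobian_2link(0.3, theta2, l1=0.5, l2=0.4)
    print(f"theta2={theta2:.2f} rad  w={manipulability(J):.4f}")
```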
Abstract:
4th International Conference on Climbing and Walking Robots - From Biology to Industrial Applications
Abstract:
The trend towards more cooperative play and the increase of game dynamics in the RoboCup MSL League motivate the improvement of skills for ball passing and reception. Currently, the majority of MSL teams use ball-handling devices with rollers to achieve more precise kicks, but this limits the capability to kick a moving ball without stopping and grabbing it. This paper addresses the problem of receiving and kicking a fast-moving ball without having to grab it with a roller-based ball-handling device. Here, the main difficulty is the high latency and low rate of the measurements of the ball-sensing systems, based on vision or laser scanner sensors. Our robots use a geared leg coupled to a motor that acts simultaneously as the kicking device and as a low-level ball sensor. This paper proposes a new method to improve the ball-sensing capability of the kicker by combining high-rate measurements of the torque and energy in the motor with the angular position of the kicker leg. The developed method endows the kicker device with an effective ball-detection ability, validated in several game situations, such as intercepting a fast pass, or chasing the ball when the relative speed between robot and ball is low. This can be used to optimize the kick instant or by the embedded kicker control system to absorb the ball energy.
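As an illustration of the idea of sensing the ball through the kicker motor rather than through vision, the sketch below fuses excess torque, short-term mechanical energy and the leg angle into a contact decision. Every name, threshold and the free-swing torque model are hypothetical, not taken from the paper.

```python
import numpy as np

# Hedged sketch (names and thresholds are hypothetical): detect ball contact on the
# kicker leg from high-rate motor measurements instead of slow vision/laser updates.
TORQUE_THRESHOLD = 0.8   # N*m above the free-swing profile (assumed value)
ENERGY_WINDOW = 10       # number of samples used for the short-term energy estimate

def ball_contact(torque, omega, leg_angle, dt, free_swing_torque):
    """Return True when the torque/energy signature indicates ball contact.

    torque, omega: recent motor torque [N*m] and angular velocity [rad/s] samples
    leg_angle: current kicker leg angle [rad]; contact is only expected in a sector
    free_swing_torque: expected torque at this angle when no ball is present
    """
    torque = np.asarray(torque)
    omega = np.asarray(omega)
    # Excess torque relative to the model of the unloaded (free-swing) leg.
    excess = torque[-1] - free_swing_torque
    # Short-term mechanical energy absorbed by the leg, used as a second cue.
    energy = np.sum(np.abs(torque[-ENERGY_WINDOW:] * omega[-ENERGY_WINDOW:])) * dt
    in_contact_sector = 0.0 < leg_angle < 0.6   # assumed geometric gate
    return in_contact_sector and excess > TORQUE_THRESHOLD and energy > 0.05
```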
Abstract:
13th International Conference on Autonomous Robot Systems (Robotica), 2013, Lisboa
Abstract:
13th International Conference on Autonomous Robot Systems (Robotica), 2013
Abstract:
The paper presents a multi-robot cooperative framework to estimate the 3D position of dynamic targets, based on bearing-only vision measurements. The uncertainty of the observation provided by each robot equipped with a bearing-only vision system is effectively addressed for cooperative triangulation purposes by weighting the contribution of each monocular bearing ray in a probabilistic manner. The envisioned framework is evaluated in an outdoor scenario with a team of heterogeneous robots composed of an unmanned ground vehicle and an unmanned aerial vehicle.
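A minimal sketch of cooperative bearing-only triangulation, assuming each robot contributes a ray (camera position plus unit bearing direction) and a scalar confidence weight. The paper's probabilistic weighting scheme is not reproduced here, only the weighted least-squares intersection of the rays; all numbers are illustrative.

```python
import numpy as np

def weighted_triangulation(origins, directions, weights):
    """Least-squares 3D point from weighted bearing rays.

    Each ray i starts at origins[i] with unit direction directions[i]; weights[i]
    encodes the confidence of that monocular bearing. Minimizes
    sum_i w_i * || (I - d_i d_i^T)(x - o_i) ||^2.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d, w in zip(origins, directions, weights):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to the ray
        A += w * P
        b += w * P @ o
    return np.linalg.solve(A, b)

# Two robots observing the same target from different poses (illustrative numbers).
origins = [np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 1.5])]
directions = [np.array([1.0, 1.0, 0.2]), np.array([-0.5, 1.0, 0.1])]
print(weighted_triangulation(origins, directions, weights=[1.0, 0.4]))
```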
Abstract:
Visual perception systems are among the main sources of sensory information used by autonomous robots for localization and navigation in different operating environments. The goal is to obtain a large amount of information about the environment the camera is viewing, and to process it and extract the information needed to carry out tasks efficiently. One particular piece of information that vision systems can provide is three-dimensional information about the surrounding environment. This information can be acquired using monocular or multi-camera vision systems. In such systems, three-dimensional information can be obtained by triangulation, taking advantage of the known relative position between the cameras. However, to compute the coordinates of a three-dimensional point in the camera reference frame, correspondences must exist between points common to the images acquired by the system. In the presence of bad correspondences, the 3D information is computed incorrectly. The point correspondence problem can be aggravated when the cameras of the system have different intrinsic characteristics, namely resolution, lens aperture and distortion. Other factors, such as the orientations and positions of the cameras, can also hinder point matching. This work addresses the point correspondence problem in the computation of three-dimensional information. This dissertation aims to develop a point correspondence approach for vision systems in which the relative position between cameras is known.
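The dissertation's matching strategy is not detailed in the abstract; one standard way to exploit a known relative pose between cameras is to filter candidate correspondences with the epipolar constraint, as in the sketch below. The function name, the ORB/BFMatcher pipeline in the comments and the pixel threshold are assumptions.

```python
import cv2   # keypoints and matches are assumed to come from an OpenCV pipeline
import numpy as np

def epipolar_filter(kp1, kp2, matches, K1, K2, R, t, max_dist=2.0):
    """Keep only matches consistent with the known relative pose between cameras.

    K1, K2: intrinsic matrices; R, t: rotation/translation from camera 1 to camera 2.
    A point x2 must lie close (in pixels) to the epipolar line F @ x1.
    """
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    F = np.linalg.inv(K2).T @ tx @ R @ np.linalg.inv(K1)   # fundamental matrix
    good = []
    for m in matches:
        x1 = np.array([*kp1[m.queryIdx].pt, 1.0])
        x2 = np.array([*kp2[m.trainIdx].pt, 1.0])
        line = F @ x1                                       # epipolar line a*x + b*y + c = 0
        dist = abs(x2 @ line) / np.hypot(line[0], line[1])
        if dist < max_dist:
            good.append(m)
    return good

# Typical usage (assumed pipeline): ORB features + brute-force matching.
# orb = cv2.ORB_create()
# kp1, des1 = orb.detectAndCompute(img1, None)
# kp2, des2 = orb.detectAndCompute(img2, None)
# matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
# inliers = epipolar_filter(kp1, kp2, matches, K1, K2, R, t)
```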
Abstract:
The design of work organisation systems with automated equipment is facing new challenges and the emergence of new concepts. The social aspects related to new concepts in complex work environments (CWE) are becoming more relevant to that design. Working with autonomous systems implies options in the design of workplaces, especially in such complex environments. The concepts of “agents”, “co-working” or “human-centred technical systems” reveal new dimensions of human-computer interaction (HCI). With an increase in the number and complexity of those human-technology interfaces, the capacity for human intervention can become limited, giving rise to further problems. The case of robotics is used to exemplify the issues related to automation in working environments and the emergence of new HCI approaches that include social implications. We conclude that studies on technology assessment of industrial robotics and autonomous agents in manufacturing environments should also focus on human involvement strategies in organisations. The needed participatory strategy implies a new approach to workplace design. This means that the research focus must be on the relation between the technological and social dimensions, not as separate entities but integrated in the design of an interaction system.
Abstract:
Nowadays, several sensors and mechanisms are available to estimate a mobile robot's trajectory and location with respect to its surroundings. Absolute positioning mechanisms are usually the most accurate, but they are also the most expensive ones and require pre-installed equipment in the environment. Therefore, a system capable of measuring its motion and location within the environment (relative positioning) has been a research goal since the beginning of autonomous vehicles. With the increase in computational performance, computer vision has become faster, and it has therefore become possible to incorporate it in a mobile robot. In feature-based visual odometry approaches, accurate motion estimation requires the absence of feature-association outliers. Outlier rejection is a delicate process, considering there is always a trade-off between the speed and the reliability of the system. This dissertation proposes an indoor 2D positioning system using visual odometry. The mobile robot has a camera pointed at the ceiling for image analysis. As requirements, the ceiling and the floor (where the robot moves) must be planar. In the literature, RANSAC is a widely used method for outlier rejection; however, it might be slow in critical circumstances. Therefore, a new algorithm is proposed that accelerates RANSAC while maintaining its reliability. The algorithm, called FMBF, consists of comparing image texture patterns between pictures and preserving the most similar ones. There are several types of comparisons, with different computational costs and reliability. FMBF manages those comparisons in order to optimize the trade-off between speed and reliability.
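FMBF itself is not described in enough detail here to reproduce; the baseline it accelerates, RANSAC over feature correspondences between consecutive ceiling images, can be sketched as follows for a planar (2D rotation plus translation) motion model. Sample counts and thresholds are illustrative.

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares 2D rotation + translation mapping src points onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_rigid_2d(src, dst, iters=200, tol=2.0):
    """Classic RANSAC: sample 2 correspondences, fit a rigid motion, keep the best consensus."""
    best_R, best_t, best_inliers = np.eye(2), np.zeros(2), np.zeros(len(src), bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)
        R, t = estimate_rigid_2d(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_R, best_t, best_inliers = R, t, inliers
    # Refit on all inliers for the final estimate.
    if best_inliers.sum() >= 2:
        best_R, best_t = estimate_rigid_2d(src[best_inliers], dst[best_inliers])
    return best_R, best_t, best_inliers
```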
Abstract:
Teleoperation is a concept born with the rapid evolution of technology, with the intuitive meaning of "operating at a distance". The first teleoperation system was created in the mid-1950s for handling chemicals. Remote-controlled systems are present nowadays in various types of applications. This dissertation presents the development of a mobile application to perform the teleoperation of a mobile service robot. The application integrates with a distributed surveillance system (the result of a QREN research project) and led to the development of a communication interface between the robot (the result of another QREN project) and the surveillance system. It was necessary to specify a communication protocol between the two systems, which was implemented over the 0MQ (Zero Message Queue) communication framework. For testing, three prototype applications were developed before performing the tests on the robot.
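The protocol itself is not specified in the abstract; a minimal sketch of a request-reply exchange over 0MQ (using pyzmq) could look like the following, where the endpoint, command names and message fields are hypothetical.

```python
import zmq

# Hedged sketch: surveillance-side client sending JSON teleoperation commands to
# the robot over a 0MQ request-reply socket. Endpoint and fields are assumptions.
context = zmq.Context()
socket = context.socket(zmq.REQ)          # surveillance side acts as the requester
socket.connect("tcp://robot.local:5555")  # assumed robot endpoint

def send_command(command, **params):
    """Send a JSON-encoded teleoperation command and wait for the robot's reply."""
    socket.send_json({"cmd": command, "params": params})
    return socket.recv_json()

# Example teleoperation commands (illustrative names only).
print(send_command("set_velocity", linear=0.3, angular=0.0))
print(send_command("get_pose"))
```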
Abstract:
This dissertation aims to guarantee the integration of a mobile autonomous robot equipped with many sensors into a multi-agent, distributed and georeferenced surveillance system. The integration of a mobile autonomous robot into this system leads to new features that clients of the surveillance system may use. These features may be of two types: using the robot as an agent that acts in the environment, or using the robot as a mobile set of sensors. As an agent in the system, the robot can move to certain locations when alerts are received, in order to acknowledge the underlying events or to take action to assist in resolving them. As a sensor platform in the system, it is possible to access the information read from the robot's sensors and to obtain measurements complementary to the ones taken by the other sensors in the multi-agent system. To integrate this mobile robot in an effective way, it is necessary to extend the current multi-agent system architecture to make the connection between the two systems and to integrate the functionalities provided by the robot into the multi-agent system.
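As a sketch of the two roles described above, the robot could be wrapped as one more agent of the surveillance system roughly as follows; the class, its methods and the underlying robot driver interface are hypothetical, not taken from the dissertation.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor: str
    value: float
    latitude: float
    longitude: float

class RobotAgent:
    """Hedged sketch: exposes the robot both as an acting agent and as a mobile sensor set."""

    def __init__(self, robot_driver):
        self.robot = robot_driver          # assumed low-level robot interface

    def handle_alert(self, alert):
        """Agent role: move to the alert location to acknowledge or assist."""
        self.robot.go_to(alert["latitude"], alert["longitude"])

    def poll_sensors(self):
        """Sensor-platform role: return the robot's readings, georeferenced."""
        lat, lon = self.robot.position()
        return [SensorReading(name, value, lat, lon)
                for name, value in self.robot.read_sensors().items()]
```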
Abstract:
In the current global and competitive business context, it is essential that enterprises adapt their knowledge resources in order to smoothly interact and collaborate with others. However, due to the existing multiculturalism of people and enterprises, there are different representation views of business processes or products, even within the same domain. Consequently, one of the main problems found in the interoperability between enterprise systems and applications is related to semantics. The integration and sharing of enterprise knowledge to build a common lexicon plays an important role in the semantic adaptability of information systems. The author proposes a framework to support the development of systems that manage dynamic semantic adaptability resolution. It allows different organisations to participate in building a common knowledge base while maintaining their own views of the domain, without compromising the integration between them. Thus, systems become aware of new knowledge and have the capacity to learn from it and to manage their semantic interoperability in a dynamic and adaptable way. The author endorses the vision that, in the near future, the semantic adaptability skills of enterprise systems will be the booster for enterprise collaboration and the appearance of new business opportunities.
Abstract:
Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. Being a natural way of human interaction, it is an area in which many researchers are working, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them, for example, to convey information. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection, and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are not standard and universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and an accuracy of 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to recognize the rest of the alphabet, being a solid foundation for the development of any vision-based sign language recognition user interface system.
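The paper's actual features and classifier are not given in the abstract; the generic recipe, hand-shape feature vectors labelled with the five vowels fed to a standard classifier with a train/test split, can be sketched as below. Random data stands in for the real features, and the SVM choice is an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hedged sketch: placeholder data in place of the paper's hand-shape descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))        # 20-D feature vectors (placeholder)
y = rng.integers(0, 5, size=500)      # labels for the five vowels A, E, I, O, U

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```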
Abstract:
PhD Thesis, Doctoral Programme in Electronics and Computer Engineering