984 results for Mobile robotics


Relevance: 40.00%

Abstract:

This paper presents a novel mobile sink area allocation scheme for consumer-based mobile robotic devices, with a proven application to robotic vacuum cleaners. In a home or office environment, rooms are physically separated by walls, and an automated robotic cleaner cannot decide which room to move to in order to perform its cleaning task. Likewise, state-of-the-art cleaning robots do not move to other rooms without direct human intervention. In a smart home monitoring system, sensor nodes may be deployed to monitor each separate room. In this work, a quad-tree-based data-gathering scheme is proposed whereby the mobile sink physically moves through every room and logically links all the separated sub-networks together. The proposed scheme sequentially collects data from the monitored environment and transmits the information back to a base station. Based on the sensor nodes' information, the base station can command a cleaning robot to move to a specific location in the home environment. The quad-tree-based data-gathering scheme minimizes the data-gathering tour length and time through the efficient allocation of data-gathering areas. A calculated shortest-path data-gathering tour can then be allocated to the robotic cleaner so that it completes the cleaning task within a minimum time period. Simulation results show that the proposed scheme can effectively allocate and control the cleaning area of the robotic vacuum cleaner without any direct intervention from the consumer. The performance of the proposed scheme is then validated with a set of practical sequential data-gathering tours in a typical office/home environment.
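
The abstract does not give implementation details; purely as an illustration of the general idea (not the authors' scheme), a quad-tree decomposition of the cleaning area followed by a greedy nearest-neighbour data-gathering tour could be sketched in Python as follows, with the splitting rule, cell sizes and starting point all invented for the example:

```python
import math

def quadtree_cells(x, y, size, needs_split, min_size):
    """Recursively split a square region into quad-tree cells.
    needs_split(x, y, size) -> True if the cell should be subdivided
    (e.g. it straddles a wall or contains many sensor nodes)."""
    if size <= min_size or not needs_split(x, y, size):
        return [(x + size / 2, y + size / 2)]  # keep the cell centre
    half = size / 2
    cells = []
    for dx in (0, half):
        for dy in (0, half):
            cells += quadtree_cells(x + dx, y + dy, half, needs_split, min_size)
    return cells

def greedy_tour(start, cells):
    """Order data-gathering points with a nearest-neighbour heuristic."""
    tour, current, remaining = [], start, list(cells)
    while remaining:
        nxt = min(remaining, key=lambda c: math.dist(current, c))
        remaining.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour

# Example: split any cell larger than 2 m, then plan a tour from the dock at (0, 0).
cells = quadtree_cells(0, 0, 8, lambda x, y, s: s > 2, min_size=1)
print(greedy_tour((0, 0), cells))
```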

Relevance: 30.00%

Abstract:

This paper describes an autonomous docking system and web interface that allow long-term, unaided use of a sophisticated robot by untrained web users. These systems have been applied to the biologically inspired RatSLAM system as a foundation for testing both its long-term stability and its practicality. While docking and web interface systems already exist, this system tolerates a significantly larger margin of error in docking accuracy due to its mechanical design, thereby increasing robustness against navigational errors. In addition, a standard vision sensor is used for both long-range and short-range docking, in contrast to the many systems that require both omni-directional cameras and high-resolution laser range finders for navigation. The web interface has been designed to accommodate the significant delays experienced on the Internet and to facilitate the non-Cartesian operation of the RatSLAM system.
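
The docking control law itself is not given in the abstract; a minimal single-camera docking behaviour in that spirit might look like the sketch below, where the marker measurements, gains and slow-down rule are assumptions rather than the paper's method:

```python
def docking_command(marker_u, image_width, marker_area,
                    target_area, k_turn=0.005, v_max=0.2):
    """Toy visual docking law: steer to centre the dock marker in the image
    and slow down as the marker (and hence the dock) grows in the view.
    All gains and thresholds are illustrative."""
    error_px = marker_u - image_width / 2           # horizontal pixel error
    omega = -k_turn * error_px                      # turn towards the marker
    progress = min(marker_area / target_area, 1.0)  # 0 far away, 1 at the dock
    v = v_max * (1.0 - progress)                    # creep in, stop when docked
    return v, omega

print(docking_command(marker_u=400, image_width=640,
                      marker_area=1500, target_area=6000))
```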

Relevance: 30.00%

Abstract:

This paper demonstrates some interesting connections between the hitherto disparate fields of mobile robot navigation and image-based visual servoing. A planar formulation of the well-known image-based visual servoing method leads to a bearing-only navigation system that requires no explicit localization and directly yields the desired velocity. The well-known benefits of image-based visual servoing, such as robustness, also apply to the planar case. Simulation results are presented.
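
The planar formulation is not reproduced in the abstract; as a loose sketch only, a bearing-only controller in this spirit could regulate heading directly from feature bearing errors while holding a constant forward speed (the gain, the averaging of errors and the speed are assumptions):

```python
import numpy as np

def bearing_only_control(bearings, desired_bearings, k_omega=0.5, v=0.3):
    """Drive bearing errors to zero: regulate heading from feature bearings
    (radians, robot frame) while moving forward at constant speed."""
    error = np.asarray(bearings) - np.asarray(desired_bearings)
    error = (error + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    omega = -k_omega * error.mean()                 # turn-rate command
    return v, omega

print(bearing_only_control([0.3, -0.1, 0.2], [0.1, -0.2, 0.0]))
```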

Relevance: 30.00%

Abstract:

The practice of robotics and computer vision both involve the application of computational algorithms to data. The research community has developed a very large body of algorithms, but for a newcomer to the field this can be quite daunting. For more than ten years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This new book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes, the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and computer vision. Written in a light but informative style, the book is easy to read and absorb, and it includes over 1000 MATLAB® and Simulink® examples and figures. It is a walk through the fundamentals of mobile robots, navigation, localization, arm-robot kinematics, dynamics and joint-level control, then camera models, image processing, feature extraction and multi-view geometry, finally bringing it all together with an extensive discussion of visual servo systems.

Relevance: 30.00%

Abstract:

The ninth release of the Toolbox represents over fifteen years of development and a substantial level of maturity. This version captures a large number of changes and extensions generated over the last two years, which support my new book “Robotics, Vision & Control”. The Toolbox has always provided many functions that are useful for the study and simulation of classical arm-type robotics, for example kinematics, dynamics, and trajectory generation. The Toolbox is based on a very general method of representing the kinematics and dynamics of serial-link manipulators. These parameters are encapsulated in MATLAB® objects: robot objects can be created by the user for any serial-link manipulator, and a number of examples are provided for well-known robots such as the Puma 560 and the Stanford arm, among others. The Toolbox also provides functions for manipulating and converting between data types such as vectors, homogeneous transformations and unit quaternions, which are necessary to represent 3-dimensional position and orientation. This ninth release of the Toolbox has been significantly extended to support mobile robots. For ground robots the Toolbox includes standard path-planning algorithms (bug, distance transform, D*, PRM), kinodynamic planning (RRT), localization (EKF, particle filter), map building (EKF), simultaneous localization and mapping (EKF), and a Simulink® model of a non-holonomic vehicle. The Toolbox also includes a detailed Simulink® model of a quadcopter flying robot.
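
The Toolbox itself is MATLAB® code and its API is not reproduced here; purely as a language-neutral illustration of the serial-link representation it builds on, forward kinematics with standard Denavit-Hartenberg parameters can be chained as in the Python sketch below (the two-link parameters are made up):

```python
import numpy as np

def dh(theta, d, a, alpha):
    """Homogeneous transform for one standard Denavit-Hartenberg link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [ 0,       sa,       ca,      d],
                     [ 0,        0,        0,      1]])

def fkine(q, links):
    """Forward kinematics of a serial-link arm: chain the link transforms.
    links is a list of (d, a, alpha) tuples for revolute joints."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(q, links):
        T = T @ dh(theta, d, a, alpha)
    return T

# Illustrative two-link planar arm (link lengths are made up).
planar = [(0.0, 1.0, 0.0), (0.0, 0.8, 0.0)]
print(fkine([np.pi / 4, -np.pi / 6], planar)[:3, 3])  # end-effector position
```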

Relevance: 30.00%

Abstract:

Our everyday environment is full of text, but this rich source of information remains largely inaccessible to mobile robots. In this paper we describe an active text-spotting system that uses a small number of wide-angle views to locate putative text in the environment and then foveates and zooms onto that text to improve the reliability of text recognition. We present extensive experimental results obtained with a pan/tilt/zoom camera and a ROS-based mobile robot operating in an indoor environment.
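
The control details are not in the abstract; a simplistic version of the foveate-and-zoom step might compute pan/tilt offsets and a zoom factor from a putative text bounding box, as in this Python sketch (the linear pixel-to-angle mapping, field of view and fill fraction are assumptions):

```python
def foveate_on_text(bbox, image_size, hfov_deg, target_fill=0.6):
    """Given a putative text bounding box (x, y, w, h) in a wide-angle view,
    compute pan/tilt offsets (degrees) that centre it and a zoom factor so it
    fills roughly `target_fill` of the image width.  Uses a crude linear
    pixel-to-angle mapping; all numbers are illustrative."""
    x, y, w, h = bbox
    W, H = image_size
    deg_per_px = hfov_deg / W
    pan = (x + w / 2 - W / 2) * deg_per_px        # positive: pan right
    tilt = -(y + h / 2 - H / 2) * deg_per_px      # positive: tilt up
    zoom = max(1.0, target_fill * W / max(w, 1))  # magnification to apply
    return pan, tilt, zoom

print(foveate_on_text((500, 200, 40, 20), (640, 480), hfov_deg=60))
```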

Relevance: 30.00%

Abstract:

We introduce a new image-based visual navigation algorithm that allows the Cartesian velocity of a robot to be defined with respect to a set of visually observed features corresponding to previously unseen and unmapped world points. The technique is well suited to mobile robot tasks such as moving along a road or flying over the ground. We describe the algorithm in general form and present detailed simulation results for an aerial robot scenario using a spherical camera and a wide-angle perspective camera, as well as experimental results for a mobile ground robot.
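
The new algorithm is not detailed in the abstract; for context only, the classical point-feature image-based visual servoing law that this line of work builds on computes a camera velocity from stacked image Jacobians, roughly as in this sketch (normalised image coordinates and known depth estimates are assumed):

```python
import numpy as np

def ibvs_velocity(points, desired, depths, gain=0.5):
    """Classical point-feature IBVS: stack the image Jacobians of the observed
    points (normalised coordinates x, y with estimated depth Z) and solve for
    the 6-DOF camera velocity that drives the image-plane error to zero."""
    L = []
    for (x, y), Z in zip(points, depths):
        L.append([-1 / Z, 0, x / Z, x * y, -(1 + x * x), y])
        L.append([0, -1 / Z, y / Z, 1 + y * y, -x * y, -x])
    error = (np.asarray(points, float) - np.asarray(desired, float)).reshape(-1)
    # returned vector is (vx, vy, vz, wx, wy, wz) in the camera frame
    return -gain * np.linalg.pinv(np.array(L)) @ error

v = ibvs_velocity([(0.1, 0.0), (-0.2, 0.1), (0.0, -0.1)],
                  [(0.0, 0.0), (-0.1, 0.1), (0.1, -0.1)],
                  depths=[2.0, 2.5, 3.0])
print(v)
```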

Relevance: 30.00%

Abstract:

Autonomous navigation and locomotion of a mobile robot in natural environments remain largely open problems. Several functionalities are required to complete the usual perception/decision/action cycle. They can be divided into two main categories: navigation (perception and decision about the movement) and locomotion (movement execution). In order to cope with the large range of possible situations in natural environments, it is essential to make use of various kinds of complementary functionalities, defining various navigation and locomotion modes. Indeed, a number of navigation and locomotion approaches have been proposed in the literature in recent years, but none can claim to achieve autonomous navigation and locomotion in every situation. It therefore seems relevant to endow an outdoor mobile robot with several complementary navigation and locomotion modes. Accordingly, the robot must also have a means to select the most appropriate mode to apply. This thesis proposes the development of such a navigation/locomotion mode selection system, based on two types of data: an observation of the context, to determine in what kind of situation the robot has to execute its movement, and an evaluation of the behavior of the current mode, made by monitors that trigger transitions towards other modes when the behavior of the current one is considered unsatisfactory. Hence, this document introduces a probabilistic framework for estimating the mode to be applied, the navigation and locomotion modes used, a qualitative terrain representation method (based on the evaluation of a difficulty measure computed from the placement of the robot's structure on a digital elevation map), and monitors that check the behavior of the modes in use (evaluation of rolling locomotion efficiency, monitoring of the robot's attitude and configuration, etc.). Experimental results obtained with these elements integrated on board two different outdoor robots are presented and discussed.
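
The thesis's probabilistic framework is not specified in the abstract; a minimal discrete Bayesian filter over modes, with monitor outputs entering as likelihoods, might look like the following sketch (the modes, transition model and numbers are invented for illustration):

```python
import numpy as np

def select_mode(prior, transition, monitor_likelihood):
    """Discrete Bayesian mode selection: fuse the previous belief over
    navigation/locomotion modes with monitor observations.
    prior: (M,) belief, transition: (M, M) mode transition model,
    monitor_likelihood: (M,) how well each mode explains the monitors."""
    predicted = transition.T @ prior               # predict step
    posterior = monitor_likelihood * predicted     # update step
    posterior /= posterior.sum()
    return posterior, int(np.argmax(posterior))

# Two modes (e.g. "flat-terrain" vs "rough-terrain"); illustrative numbers.
prior = np.array([0.7, 0.3])
transition = np.array([[0.9, 0.1], [0.2, 0.8]])
likelihood = np.array([0.2, 0.9])   # monitors report poor behaviour of mode 0
belief, best = select_mode(prior, transition, likelihood)
print(belief, best)
```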

Relevance: 30.00%

Abstract:

This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each laser scan-camera image pair acquired, the information at the corners of the laser scan is compared with the content of the image, yielding a likelihood of correspondence. The result of this process is then used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
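
As a rough sketch of the uncertainty-propagation step only (the paper's transformation chain, calibration and likelihood model are more involved), projecting one laser point into the image with first-order covariance propagation can be written as:

```python
import numpy as np

def project_with_uncertainty(p_cam, cov_cam, fx, fy, cx, cy):
    """Project a laser point (already expressed in the camera frame) into the
    image and propagate its 3x3 covariance to a 2x2 pixel covariance through
    the first-order Jacobian of the pinhole model."""
    X, Y, Z = p_cam
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    J = np.array([[fx / Z, 0, -fx * X / Z**2],
                  [0, fy / Z, -fy * Y / Z**2]])
    cov_px = J @ cov_cam @ J.T
    return np.array([u, v]), cov_px

# Illustrative point 4 m ahead with 1 cm^2 variance per axis.
pt, cov = project_with_uncertainty([0.5, 0.1, 4.0], np.diag([0.01] * 3),
                                   fx=700, fy=700, cx=320, cy=240)
print(pt, cov)
```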

Relevance: 30.00%

Abstract:

This paper evaluates the performance of different text recognition techniques for a mobile robot in an indoor (university campus) environment. We compared four different methods: our own approach combining existing text detection methods (the Maximally Stable Extremal Regions detector and the Stroke Width Transform) with a convolutional neural network, two modes of the open-source program Tesseract, and the experimental mobile app Google Goggles. The results show that a convolutional neural network combined with the Stroke Width Transform gives the best performance in correctly matched text on images with single characters, whereas Google Goggles gives the best performance on images with multiple words. The dataset used for this work is released as well.
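
None of the compared pipelines is reproduced here; as a loosely related baseline only, candidate regions from an MSER detector can be cropped and passed to Tesseract via OpenCV and pytesseract, assuming the Tesseract binary is installed (the size threshold is arbitrary):

```python
import cv2
import pytesseract  # requires the Tesseract OCR binary to be installed

def spot_and_read(image_path):
    """Detect candidate text regions with MSER, then run Tesseract on each
    cropped region.  This is not one of the evaluated pipelines, just an
    illustration of the detect-then-recognize structure."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, boxes = cv2.MSER_create().detectRegions(gray)
    results = []
    for x, y, w, h in boxes:
        if w < 8 or h < 8:          # skip tiny, likely spurious regions
            continue
        crop = gray[y:y + h, x:x + w]
        text = pytesseract.image_to_string(crop).strip()
        if text:
            results.append(((x, y, w, h), text))
    return results
```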

Relevance: 30.00%

Abstract:

In this work we present an autonomous mobile manipulator that is used to collect sample containers in an unknown environment. The manipulator is part of a team of heterogeneous mobile robots tasked with searching for and identifying sample containers in an unknown environment. A map of the environment, along with possible positions of sample containers, is shared between the robots in the team using a cloud-based communication interface. To grasp a container with its manipulator arm, the robot has to place itself in a position suitable for the manipulation task. This optimal base placement pose is selected by querying a precomputed inverse reachability database.
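
The inverse reachability database is precomputed offline and its format is not described in the abstract; a toy version of the query-and-select step could look like the following sketch, where the discretisation, the candidate poses and the cost function are assumptions:

```python
import math

def select_base_pose(grasp_xy, candidate_poses, reachability_db):
    """Pick a base pose for the mobile manipulator: keep only candidates from
    which the grasp point falls in a reachable cell of a precomputed database,
    then take the one closest to the grasp point.
    reachability_db maps a discretised (dx, dy) offset of the grasp point,
    expressed in the base frame, to True/False."""
    feasible = []
    for x, y, yaw in candidate_poses:
        dx, dy = grasp_xy[0] - x, grasp_xy[1] - y
        # express the grasp point in the candidate base frame
        rx = math.cos(-yaw) * dx - math.sin(-yaw) * dy
        ry = math.sin(-yaw) * dx + math.cos(-yaw) * dy
        if reachability_db.get((round(rx, 1), round(ry, 1)), False):
            feasible.append((x, y, yaw))
    return min(feasible, key=lambda p: math.dist(p[:2], grasp_xy), default=None)

# Tiny example: one reachable cell 0.5 m straight ahead of the base.
db = {(0.5, 0.0): True}
print(select_base_pose((1.0, 1.0), [(0.5, 1.0, 0.0), (1.0, 0.5, 0.0)], db))
```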

Relevance: 30.00%

Abstract:

The Field and Service Robotics (FSR) conference is a single-track conference with a specific focus on field and service applications of robotics technology. The goal of FSR is to report on and encourage the development of field and service robotics. These are non-factory robots, typically mobile, that must operate in complex and dynamic environments. Typical field robotics applications include mining, agriculture, building and construction, forestry, cargo handling and so on. Field robots may operate on the ground (of Earth or other planets), under the ground, underwater, in the air or in space. Service robots are those that work closely with humans, particularly the elderly and sick, to help them with their lives. The first FSR conference was held in Canberra, Australia, in 1997. Since then the meeting has been held every two years in Asia, America, Europe and Australia: Canberra, Australia (1997), Pittsburgh, USA (1999), Helsinki, Finland (2001), Mount Fuji, Japan (2003), Port Douglas, Australia (2005), Chamonix, France (2007), Cambridge, USA (2009), Sendai, Japan (2012) and most recently Brisbane, Australia (2013). This year we had 54 submissions, of which 36 were selected for oral presentation. The organisers would like to thank the international committee for their invaluable contribution to the review process, ensuring the overall quality of the contributions. The organising committee would also like to thank Ben Upcroft, Felipe Gonzalez and Aaron McFadyen for helping with the organisation and proceedings. The conference was sponsored by the Australian Robotics and Automation Association (ARAA), CSIRO, Queensland University of Technology (QUT), the Defence Science and Technology Organisation Australia (DSTO) and the Rio Tinto Centre for Mine Automation, University of Sydney.

Relevance: 30.00%

Abstract:

There is growing interest in autonomously collecting or manipulating objects in remote or unknown environments, such as mountains, gullies, bushland, or rough terrain. Conventional methods using manned or remotely controlled aircraft have several limitations. Small Unmanned Aerial Vehicles (UAVs) used in parallel with robotic manipulators could overcome some of these limitations. By enabling the autonomous exploration of both naturally hazardous environments and areas which are biologically, chemically, or radioactively contaminated, it is possible to collect samples and data from such environments without directly exposing personnel to these risks. This paper covers the design, integration, and initial testing of a framework for an outdoor mobile-manipulation UAV. The framework is designed to allow further integration and testing of complex control theories, with the capability to operate outdoors in unknown environments. The results obtained act as a reference for the effectiveness of the integrated sensors and the low-level control methods used for the preliminary testing, as well as identifying the key technologies needed for the development of an outdoor-capable system.

Relevance: 30.00%

Abstract:

In this paper we focus on the challenging problem of place categorization and semantic mapping on a robot without environment-specific training. Motivated by their ongoing success in various visual recognition tasks, we build our system upon a state-of-the-art convolutional network. We overcome its closed-set limitations by complementing the network with a series of one-vs-all classifiers that can learn to recognize new semantic classes online. Prior domain knowledge is incorporated by embedding the classification system into a Bayesian filter framework that also ensures temporal coherence. We evaluate the classification accuracy of the system on a robot that maps a variety of places on our campus in real time. We show how semantic information can boost robotic object detection performance and how the semantic map can be used to modulate the robot's behaviour during navigation tasks. The system is made available to the community as a ROS module.
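
The paper's Bayesian filter is not specified in the abstract; a minimal temporal-smoothing filter over place categories, with per-frame classifier scores entering as likelihoods, might look like this (the categories, stay probability and scores are illustrative):

```python
import numpy as np

def update_place_belief(belief, scores, stay_prob=0.9):
    """Temporal smoothing of per-frame place-category scores with a simple
    Bayesian filter: the robot most likely stays in the same place category
    between consecutive frames.  scores are per-class likelihoods from the
    classifiers (e.g. CNN + one-vs-all outputs)."""
    n = len(belief)
    transition = np.full((n, n), (1 - stay_prob) / (n - 1))
    np.fill_diagonal(transition, stay_prob)
    predicted = transition.T @ belief        # predict: mostly stay put
    posterior = np.asarray(scores) * predicted
    return posterior / posterior.sum()

belief = np.ones(3) / 3                      # e.g. office / corridor / kitchen
for frame_scores in ([0.7, 0.2, 0.1], [0.4, 0.5, 0.1], [0.8, 0.1, 0.1]):
    belief = update_place_belief(belief, frame_scores)
print(belief)   # belief concentrates on the consistently supported class
```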