998 results for Robotic vision


Relevance: 60.00%

Publisher:

Abstract:

Measuring gases for environmental monitoring is a demanding task that requires long periods of observation and large numbers of sensors. Wireless Sensor Networks (WSNs) and Unmanned Aerial Vehicles (UAVs) currently represent the best alternative for monitoring large, remote, and difficult-to-access areas, as these technologies can carry specialized gas sensing systems. This paper presents the development and integration of a WSN and a UAV powered by solar energy in order to enhance their functionality and broaden their applications. A gas sensing system implementing nanostructured metal oxide (MOX) and non-dispersive infrared sensors was developed to measure concentrations of CH4 and CO2. Laboratory, bench and field testing results demonstrate the capability of the UAV to capture, analyze and geo-locate a gas sample during flight operations. The field testing integrated ground sensor nodes and the UAV to measure CO2 concentration at ground level and low aerial altitudes simultaneously. Data collected during the mission was transmitted in real time to a central node for analysis and 3D mapping of the target gas. The results highlight the accomplishment of the first flight mission of a solar-powered UAV equipped with a CO2 sensing system integrated with a WSN. The system provides effective 3D monitoring and can be used in a wide range of environmental applications, such as agriculture, bushfires, mining studies, zoology and botanical studies, using ubiquitous low-cost technology.
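As a rough illustration of the geo-location and 3D mapping step, each gas reading can be paired with a position fix and bucketed into a coarse 3-D grid cell. The record fields and cell sizes below are hypothetical illustrations, not the paper's actual schema.

```python
from dataclasses import dataclass

@dataclass
class GasSample:
    """One geo-located reading, roughly as the UAV might log it in
    flight (field names are illustrative, not the paper's schema)."""
    lat: float
    lon: float
    alt_m: float
    co2_ppm: float

def grid_cell(sample, cell_deg=0.001, cell_m=10.0):
    """Bucket a sample into a coarse 3-D grid cell for mapping;
    the cell sizes are arbitrary illustrative defaults."""
    return (round(sample.lat / cell_deg),
            round(sample.lon / cell_deg),
            round(sample.alt_m / cell_m))

sample = GasSample(lat=-27.4771, lon=153.0281, alt_m=42.0, co2_ppm=415.2)
cell = grid_cell(sample)
```

Readings falling into the same cell would then be averaged to build the 3D concentration map.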

Relevance: 60.00%

Publisher:

Abstract:

This paper presents a new metric, which we call the lighting variance ratio, for quantifying descriptors in terms of their variance under illumination changes. In many applications it is desirable to have descriptors that are robust to changes in illumination, especially in outdoor environments. The lighting variance ratio is useful for comparing descriptors and determining whether a descriptor is lighting invariant enough for a given environment. The metric is analysed across a number of datasets, cameras and descriptors. The results show that the upright SIFT descriptor is typically the most lighting invariant descriptor.
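The abstract does not spell out the metric's formula, but a ratio of this kind can be sketched as the average descriptor distance within a place (same viewpoint, different lighting) divided by the average distance between places, so that lower values mean greater lighting invariance. The function below is a hypothetical reading, not the authors' definition.

```python
import math

def lighting_variance_ratio(descs_by_place):
    """Hypothetical sketch of a lighting-variance-style metric.
    descs_by_place: list of lists of descriptor vectors, one inner
    list per physical location, each captured under different lighting.
    Returns within-place distance / between-place distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    within, between = [], []
    for i, place in enumerate(descs_by_place):
        # same place, different lighting conditions
        for a in range(len(place)):
            for b in range(a + 1, len(place)):
                within.append(dist(place[a], place[b]))
        # different places
        for j in range(i + 1, len(descs_by_place)):
            for u in place:
                for v in descs_by_place[j]:
                    between.append(dist(u, v))
    return (sum(within) / len(within)) / (sum(between) / len(between))
```

A descriptor whose ratio stays well below 1 changes less under lighting than it does between distinct places, which is what matters for matching.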

Relevance: 60.00%

Publisher:

Abstract:

Changing environments pose a serious problem to current robotic systems aiming at long-term operation under varying seasons or local weather conditions. This paper builds on our previous work, in which we proposed learning to predict the changes in an environment. Our key insight is that the occurring scene changes are in part systematic, repeatable and therefore predictable. The goal of our work is to support existing approaches to place recognition by learning how the visual appearance of an environment changes over time and by using this learned knowledge to predict its appearance under different environmental conditions. We describe the general idea of appearance change prediction (ACP) and investigate properties of our novel implementation based on vocabularies of superpixels (SP-ACP). Our previous work showed that the proposed approach significantly improves the performance of SeqSLAM and BRIEF-Gist for place recognition on a subset of the Nordland dataset under extremely different environmental conditions in summer and winter. This paper deepens the understanding of the proposed SP-ACP system and evaluates the influence of its parameters. We present the results of a large-scale experiment on the complete 10 h Nordland dataset and appearance change predictions between different combinations of seasons.
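At the level of discrete visual words, the prediction idea can be caricatured as a learned dictionary that translates a word's summer appearance into its typical winter appearance before matching. The words and the Jaccard matcher below are toy stand-ins for the superpixel vocabularies the paper actually learns.

```python
# Toy appearance-change dictionary: each visual word's summer
# appearance maps to its typical winter appearance (hypothetical
# words; the paper learns this over superpixel vocabularies).
summer_to_winter = {"grass": "snow", "leaves": "bare_branches", "road": "road"}

def predict_winter(words):
    """Translate a summer bag-of-words into its predicted winter form."""
    return [summer_to_winter.get(w, w) for w in words]

def similarity(a, b):
    """Jaccard overlap of word sets (a stand-in for real matching)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

summer_query = ["grass", "road", "leaves"]
winter_ref = ["snow", "road", "bare_branches"]
# the translated query matches the winter reference far better than the raw one
```

Translating the query before matching is what lets a summer image be recognised against a winter map, which is the effect the paper measures on SeqSLAM and BRIEF-Gist.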

Relevance: 60.00%

Publisher:

Abstract:

In 2013, ten teams from German universities and research institutes participated in a national robot competition called the SpaceBot Cup, organized by the DLR Space Administration. The robots had one hour to autonomously explore and map a challenging Mars-like environment; find, transport, and manipulate two objects; and navigate back to the landing site. Localization without GPS in an unstructured environment was a major issue, as were mobile manipulation and very restricted communication. This paper describes our system of two rovers operating on the ground plus a quadrotor UAV simulating an observing orbiting satellite. We relied on ROS (Robot Operating System) as the software infrastructure and describe the main ROS components utilized in performing the tasks. Despite (or because of) faults, communication loss and breakdowns, it was a valuable experience with many lessons learned.

Relevance: 60.00%

Publisher:

Abstract:

We show that the parallax motion resulting from non-nodal rotation in panorama capture can be exploited for light field construction from commodity hardware. Automated panoramic image capture typically seeks to rotate a camera exactly about its nodal point, for which no parallax motion is observed. This can be difficult or impossible to achieve due to limitations of the mounting or optical systems, and consequently a wide range of captured panoramas suffer from parallax between images. We show that by capturing such imagery over a regular grid of camera poses, then appropriately transforming the captured imagery to a common parameterisation, a light field can be constructed. The resulting four-dimensional image encodes scene geometry as well as texture, allowing an increasingly rich range of light field processing techniques to be applied. Employing an Ocular Robotics REV25 camera pointing system, we demonstrate light field capture, refocusing and low-light image enhancement.
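The core of light field refocusing can be illustrated with a 1-D shift-and-add toy example: shifting each view in proportion to its capture position before averaging brings one depth plane into focus. This is a drastic simplification of the paper's reparameterisation pipeline, restricted to integer shifts for clarity.

```python
def refocus(views, positions, slope):
    """Shift-and-add refocusing sketch for a 1-D light field.
    views: list of 1-D images captured from camera positions
    `positions`. Shifting each view by slope * position before
    averaging focuses on the depth plane that slope corresponds to
    (integer shifts only, purely for illustration)."""
    width = len(views[0])
    out = []
    for x in range(width):
        vals = []
        for view, u in zip(views, positions):
            xs = x + int(round(slope * u))
            if 0 <= xs < width:
                vals.append(view[xs])
        out.append(sum(vals) / len(vals))
    return out
```

With the right slope, a point that appears displaced by parallax in each view lines up across all views and is reinforced; with the wrong slope it is blurred out, which is exactly the refocusing effect.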

Relevance: 60.00%

Publisher:

Abstract:

We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles - other objects detected in the visual stream - while reaching for the intended target object. Our integration also allows for non-static environments, i.e. the reaching is adapted on the fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. Furthermore, we show that this system can be used in both autonomous and tele-operation scenarios.

Relevance: 60.00%

Publisher:

Abstract:

We present our work on tele-operating a complex humanoid robot with the help of bio-signals collected from the operator. The frameworks for robot vision, collision avoidance and machine learning developed in our lab allow, when combined, for safe interaction with the environment. This works even with noisy control signals, such as the operator's hand acceleration and electromyography (EMG) signals. These bio-signals are used to execute equivalent actions (such as reaching and grasping of objects) on the 7-DOF arm.

Relevance: 60.00%

Publisher:

Abstract:

Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory. Experimental results are generally limited to toy scenarios, such as navigation in a simulated maze, or control of a simple mechanical system with one or two degrees of freedom. To study AC in a more realistic setting, we embody a curious agent in the complex iCub humanoid robot. Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting constraints, and a high-level curious agent, which explores the iCub's state-action space through information gain maximization, learning a world model from experience while controlling the actual iCub hardware in real time. To the best of our knowledge, this is the first ever embodied, curious agent for real-time motion planning on a humanoid. We demonstrate that it can learn compact Markov models to represent large regions of the iCub's configuration space, and that the iCub explores intelligently, showing interest in its physical constraints as well as in objects it finds in its environment.
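The information gain such a curious agent maximizes can be sketched, in its simplest form, as the entropy reduction between the world model's belief before and after an observation. The two-outcome example below is illustrative, not the paper's formulation, which works over learned Markov models of the state-action space.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(prior, posterior):
    """Reduction in model uncertainty from an observation -- the
    quantity a curious agent seeks to maximize (simplified: a single
    posterior rather than an expectation over possible outcomes)."""
    return entropy(prior) - entropy(posterior)
```

An action whose outcome the model cannot predict (uniform prior) and which fully resolves the uncertainty yields the highest gain, so the agent is drawn toward exactly the regions it understands least.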

Relevance: 60.00%

Publisher:

Abstract:

In this paper we present for the first time a complete symbolic navigation system that performs goal-directed exploration of unfamiliar environments on a physical robot. We introduce a novel construct called the abstract map to link provided symbolic spatial information with observed symbolic information and actual places in the real world. Symbolic information is observed using a text recognition system that has been developed specifically for the application of reading door labels. In the study described in this paper, the robot was provided with a floor plan and a destination. The destination was specified by a room number, used both in the floor plan and on the door to the room. The robot autonomously navigated to the destination using its text recognition, abstract map, mapping, and path planning systems. The robot used the symbolic navigation system to determine an efficient path to the destination, and reached the goal in two different real-world environments. Simulation results show that the system reduces the time required to navigate to a goal when compared to random exploration.
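The way symbolic information selects a navigation goal can be caricatured as graph search over a floor plan whose nodes carry room labels. The graph and labels below are hypothetical, and the real abstract map handles uncertain, partially observed spatial information rather than a fully known graph.

```python
from collections import deque

# Hypothetical floor plan: a corridor graph whose node names double
# as the symbolic labels the robot could read from door signs.
floor_plan = {
    "entrance": ["corridor_1"],
    "corridor_1": ["entrance", "room_101", "corridor_2"],
    "corridor_2": ["corridor_1", "room_102", "room_103"],
    "room_101": ["corridor_1"],
    "room_102": ["corridor_2"],
    "room_103": ["corridor_2"],
}

def plan_to_label(graph, start, goal_label):
    """Breadth-first search to the node matching the goal label,
    mimicking how a symbolic destination (a room number) selects
    the navigation target. Returns the shortest path or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal_label:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Reading "room_103" off the floor plan immediately yields a purposeful route instead of random exploration, which is the efficiency gain the paper reports.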

Relevance: 60.00%

Publisher:

Abstract:

Using cameras onboard a robot to detect a coloured stationary target outdoors is a difficult task. Apart from the complexity of separating the target from the background scenery over different ranges, there are also inconsistencies in direct and reflected illumination from the sun, clouds, and moving and stationary objects. These can vary both the illumination on the target and its colour as perceived by the camera. In this paper, we analyse the effect of environmental conditions, range to target, camera settings and image processing on the reported colours of various targets. The analysis indicates the colour space and camera configuration that provide the most consistent colour values over varying environmental conditions and ranges. This information is used to develop a detection system that provides range and bearing to detected targets. The system is evaluated under various lighting conditions, from bright sunlight to shadows and overcast days, and demonstrates robust performance. The accuracy of the system is compared against a laser beacon detector, with preliminary results indicating it to be a valuable asset for long-range coloured target detection.
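One reason hue-based colour spaces tend to give consistent values outdoors is that hue is largely insensitive to overall brightness. The small check below, using Python's stdlib colorsys, illustrates that property only; it is not the paper's actual analysis, which compares several colour spaces and camera settings.

```python
import colorsys

def rgb_to_hue(r, g, b):
    """Convert 0-255 RGB to hue in degrees. Hue depends on the ratio
    of the channels, not their overall magnitude, so it is stable
    under uniform brightness changes (sun vs. shadow, roughly)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

bright_red = rgb_to_hue(200, 40, 40)  # hypothetical target in direct sun
dim_red = rgb_to_hue(100, 20, 20)     # same target, half the illumination
```

Both samples report the same hue even though their RGB values differ greatly, which is the kind of consistency the paper's colour-space analysis looks for.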

Relevance: 60.00%

Publisher:

Abstract:

The research reported here addresses the problem of detecting and tracking independently moving objects from a moving observer in real time, using corners as object tokens. Local image-plane constraints are employed to solve the correspondence problem, removing the need for a 3D motion model. The approach relaxes the restrictive static-world assumption conventionally made, and is therefore capable of tracking independently moving and deformable objects. The technique is novel in that feature detection and tracking is restricted to areas likely to contain meaningful image structure. Feature instantiation regions are defined from a combination of odometry information and a limited knowledge of the operating scenario. The algorithms developed have been tested on real image sequences taken from typical driving scenarios. Preliminary experiments on a parallel (transputer) architecture indicate that real-time operation is achievable.

Relevance: 60.00%

Publisher:

Abstract:

This paper is not about the details of yet another robot control system, but rather the issues surrounding real-world robotic implementation. It is a fact that in order to realise a future where robots co-exist with people in everyday places, we have to pass through a developmental phase that involves some risk. Putting a “Keep Out, Experiment in Progress” sign on the door is no longer possible, since we are now at a level of capability that requires testing over long periods of time in complex, realistic environments that contain people. We all know that controlling the risk is important – a serious accident could set the field back globally – but just as important is convincing others that the risks are known and controlled. In this article, we describe our experience going down this path and show that health and safety assessment for mobile robotics research is still unexplored territory in universities and is often ignored. We hope that this article will make robotics research labs in universities around the world take note of these issues rather than operating under the radar, and so prevent any catastrophic accidents.

Relevance: 60.00%

Publisher:

Abstract:

The Field and Service Robotics (FSR) conference is a single-track conference with a specific focus on field and service applications of robotics technology. The goal of FSR is to report and encourage the development of field and service robotics. These are non-factory robots, typically mobile, that must operate in complex and dynamic environments. Typical field robotics applications include mining, agriculture, building and construction, forestry, cargo handling and so on. Field robots may operate on the ground (of Earth or planets), under the ground, underwater, in the air or in space. Service robots are those that work closely with humans, importantly the elderly and sick, to help them with their lives. The first FSR conference was held in Canberra, Australia, in 1997. Since then the meeting has been held every two years in Asia, America, Europe and Australia. It has been held in Canberra, Australia (1997), Pittsburgh, USA (1999), Helsinki, Finland (2001), Mount Fuji, Japan (2003), Port Douglas, Australia (2005), Chamonix, France (2007), Cambridge, USA (2009), Sendai, Japan (2012) and most recently in Brisbane, Australia (2013). This year we had 54 submissions, of which 36 were selected for oral presentation. The organisers would like to thank the international committee for their invaluable contribution in the review process, ensuring the overall quality of contributions. The organising committee would also like to thank Ben Upcroft, Felipe Gonzalez and Aaron McFadyen for helping with the organisation and proceedings. The conference was sponsored by the Australian Robotics and Automation Association (ARAA), CSIRO, Queensland University of Technology (QUT), Defence Science and Technology Organisation Australia (DSTO) and the Rio Tinto Centre for Mine Automation, University of Sydney.

Relevance: 60.00%

Publisher:

Abstract:

This paper presents visual detection and classification of light vehicles and personnel on a mine site. We capitalise on the rapid advances of ConvNet-based object recognition but highlight that a naive black-box approach results in a significant number of false positives. In particular, the lack of domain-specific training data and the unique landscape of a mine site cause a high rate of errors. We exploit the abundance of background-only images to train a k-means classifier to complement the ConvNet. Furthermore, localisation of objects of interest and a reduction in computation are enabled through region proposals. Our system is tested on over 10 km of real mine-site data, and we were able to detect both light vehicles and personnel. We show that the introduction of our background model can reduce the false positive rate by an order of magnitude.
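The background model can be sketched as k-means over features of background-only images, with detections rejected when their feature lies near a background centroid. The tiny k-means and the distance-threshold scheme below are illustrative assumptions, not the paper's exact pipeline.

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20, seed=0):
    """Tiny self-contained k-means over background feature vectors
    (any real implementation would do; kept minimal for the sketch)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: dist(p, centroids[c]))
            clusters[nearest].append(p)
        centroids = [
            [sum(xs) / len(cl) for xs in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

def is_background(feature, centroids, threshold):
    """Reject a ConvNet detection whose feature lies near a
    background-only cluster (hypothetical thresholding scheme)."""
    return min(dist(feature, c) for c in centroids) < threshold
```

A detection the ConvNet fires on is kept only if `is_background` returns False, which is how the abundant background-only imagery suppresses false positives without retraining the network.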

Relevance: 60.00%

Publisher:

Abstract:

This proposal describes the innovative and competitive lunar payload solution developed at the Queensland University of Technology (QUT): the LunaRoo, a hopping robot designed to exploit the Moon's lower gravity to leap up to 20 m above the surface. It is compact enough to fit within a 10 cm cube, whilst providing unique observation and mission capabilities by capturing imagery during the hop. This first section is deliberately kept short and concise for web submission; additional information can be found in the second chapter.
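The 20 m hop figure can be sanity-checked with simple vacuum ballistics, h = v²/2g: in lunar gravity it needs roughly the launch speed that would produce only about a 3.3 m hop on Earth. The gravity constants are standard values, and this back-of-envelope check is ours, not part of the proposal.

```python
import math

G_MOON = 1.62   # m/s^2, lunar surface gravity (standard value)
G_EARTH = 9.81  # m/s^2, Earth surface gravity (standard value)

def launch_speed(height_m, g):
    """Vertical launch speed needed to reach a given apex height,
    from the vacuum-ballistics relation h = v^2 / (2 g)."""
    return math.sqrt(2.0 * g * height_m)

v_moon = launch_speed(20.0, G_MOON)      # speed for a 20 m lunar hop
h_earth = v_moon ** 2 / (2.0 * G_EARTH)  # apex the same speed gives on Earth
```

The roughly six-fold height advantage mirrors the ratio of the two gravity values, which is exactly the leverage the hopping design exploits.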