269 results for Anthropomorphic robots
Abstract:
Poem.
Abstract:
RatSLAM is a navigation system based on the neural processes underlying navigation in the rodent brain, capable of operating with low-resolution monocular image data. Seminal experiments using RatSLAM include mapping an entire suburb with a web camera and a long-term robot delivery trial. This paper describes OpenRatSLAM, an open-source version of RatSLAM with bindings to the Robot Operating System framework to leverage advantages such as robot and sensor abstraction, networking, data playback, and visualization. OpenRatSLAM comprises connected ROS nodes representing RatSLAM’s pose cells, experience map, and local view cells, as well as a fourth node that provides visual odometry estimates. The nodes are described with reference to the RatSLAM model, along with salient details of the ROS implementation such as topics, messages, parameters, class diagrams, sequence diagrams, and parameter tuning strategies. The performance of the system is demonstrated on three publicly available open-source datasets.
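The four-node topology described in the abstract can be sketched with a toy publish/subscribe bus standing in for ROS topics. This is an illustrative sketch only — the node classes, topic names, and message fields below are our assumptions, not OpenRatSLAM's actual interface:

```python
# Toy pub/sub bus in place of ROS; all names here are hypothetical.
class Bus:
    def __init__(self):
        self.subs = {}
    def subscribe(self, topic, cb):
        self.subs.setdefault(topic, []).append(cb)
    def publish(self, topic, msg):
        for cb in self.subs.get(topic, []):
            cb(msg)

class VisualOdometry:
    def __init__(self, bus):
        self.bus = bus
    def on_image(self, img):
        # placeholder odometry estimate per frame
        self.bus.publish("odom", {"dx": 1.0, "dtheta": 0.0})

class LocalViewCells:
    def __init__(self, bus):
        self.bus, self.templates = bus, []
    def on_image(self, img):
        # match against stored templates; a new scene creates a new template
        if img not in self.templates:
            self.templates.append(img)
        self.bus.publish("template", self.templates.index(img))

class PoseCells:
    def __init__(self, bus):
        self.bus, self.pose, self.view = bus, 0.0, None
        bus.subscribe("odom", self.on_odom)
        bus.subscribe("template", self.on_template)
    def on_template(self, tid):
        self.view = tid
    def on_odom(self, msg):
        self.pose += msg["dx"]
        self.bus.publish("action", {"pose": self.pose, "view": self.view})

class ExperienceMap:
    def __init__(self, bus):
        self.experiences = []
        bus.subscribe("action", self.experiences.append)

bus = Bus()
vo, lv = VisualOdometry(bus), LocalViewCells(bus)
pc, em = PoseCells(bus), ExperienceMap(bus)
for frame in ["imgA", "imgB", "imgA"]:
    lv.on_image(frame)   # local view cells see the frame first
    vo.on_image(frame)   # odometry then drives the pose cells
print(len(em.experiences))  # 3 actions reach the experience map
```

Revisiting `"imgA"` reuses template 0 rather than creating a new one — the mechanism that lets loop closures correct the experience map.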
Abstract:
This paper describes an architecture for robotic telepresence and teleoperation based on the well-known tools ROS and Skype. We discuss how Skype can be used as a framework for robotic communication and integrated into a ROS/Linux framework, allowing a remote user not only to interact with people near the robot, but also to view maps, sensory data, and robot pose, and to issue commands to the robot’s navigation stack. This allows the remote user to exploit the robot’s autonomy, providing a much more convenient navigation interface than simple remote joysticking.
Abstract:
Service robots that operate in human environments will accomplish tasks most efficiently and least disruptively if they have the capability to mimic and understand the motion patterns of the people in their workspace. This work demonstrates how a robot can create a human-centric navigational map online, and that this map reflects changes in the environment that trigger altered motion patterns of people. An RGB-D sensor mounted on the robot is used to detect and track people moving through the environment. The trajectories are clustered online and organised into a tree-like probabilistic data structure which can be used to detect anomalous trajectories. A costmap is reverse-engineered from the clustered trajectories and can then inform the robot's onboard planning process. Results show that the resultant paths taken by the robot mimic expected human behaviour and allow the robot to respond to altered human motion behaviours in the environment.
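The costmap idea — cells people use often become cheap for the planner — can be illustrated with a minimal sketch. This is our own simplification (grid indexing, cost scaling, and function names are assumptions, not the paper's algorithm):

```python
from collections import Counter

def build_costmap(trajectories, width, height):
    """Derive a grid costmap from observed person trajectories."""
    visits = Counter()
    for traj in trajectories:
        for (x, y) in traj:
            visits[(x, y)] += 1
    max_v = max(visits.values()) if visits else 1
    # cost in [0, 1]: 0 where people walk most, 1 where nobody walks
    return {(x, y): 1.0 - visits[(x, y)] / max_v if (x, y) in visits else 1.0
            for x in range(width) for y in range(height)}

# Three people traverse the same corridor row of a 5x3 grid.
corridor = [[(x, 1) for x in range(5)] for _ in range(3)]
costmap = build_costmap(corridor, 5, 3)
print(costmap[(2, 1)], costmap[(2, 0)])  # 0.0 1.0
```

A planner minimising accumulated cost over this grid would then prefer the corridor cells, mimicking the observed human paths.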
Abstract:
Changing environments present a number of challenges to mobile robots, one of the most significant being mapping and localisation. This problem is particularly significant in vision-based systems, where illumination and weather changes can cause feature-based techniques to fail. In many applications only sections of an environment undergo extreme perceptual change. Some range-based sensor mapping approaches exploit this property by combining occasional place recognition with the assumption that odometry is accurate over short periods of time. In this paper, we develop this idea in the visual domain, by using occasional vision-driven loop closures to infer loop closures in nearby locations where visual recognition is difficult due to extreme change. We demonstrate successful map creation in an environment in which change is significant but constrained to one area, where both the vanilla CAT-Graph and a Sum of Absolute Differences matcher fail; we use the described techniques to link dissimilar images from matching locations, and test the robustness of the system against false inferences.
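The inference step can be sketched in a few lines: given an occasional confident visual loop closure between two pose indices, and trusting odometry over a short window, closures are propagated to neighbouring poses whose images look too different to match directly. The function and parameter names are our illustration, not the paper's implementation:

```python
def infer_closures(visual_closures, n_poses, window=2):
    """Propagate vision-driven loop closures to odometry-linked neighbours."""
    inferred = set()
    for (i, j) in visual_closures:
        for k in range(-window, window + 1):
            a, b = i + k, j + k
            if 0 <= a < n_poses and 0 <= b < n_poses \
                    and (a, b) not in visual_closures:
                inferred.add((a, b))
    return sorted(inferred)

# One confident visual match links pose 10 to pose 50; nearby poses are
# linked by odometry even though their appearance no longer matches.
print(infer_closures({(10, 50)}, n_poses=60))
# [(8, 48), (9, 49), (11, 51), (12, 52)]
```

The window size trades coverage against the risk of false inferences, which is why the abstract emphasises testing robustness against them.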
Abstract:
The main objective of this paper is to describe the development of a remote sensing airborne air sampling system for Unmanned Aerial Systems (UAS), providing the capability to detect particle and gas concentrations in real time over remote locations. The design of the air sampling methodology started by defining the system architecture, and then by selecting and integrating each subsystem. A multifunctional air sampling instrument, with the capability for simultaneous measurement of particle and gas concentrations, was modified and integrated with ARCAA’s Flamingo UAS platform and communications protocols. As a result of the integration process, a system capable of both real-time geo-location monitoring and indexed-link sampling was obtained. Wind tunnel tests were conducted to evaluate the performance of the air sampling instrument in controlled non-stationary conditions at the typical operational velocities of the UAS platform. Once the fully operative remote air sampling system was obtained, the problem of mission design was analyzed through the simulation of different scenarios. Finally, flight tests of the complete air sampling system were conducted to check the dynamic characteristics of the UAS with the air sampling system and to prove its capability to perform an air sampling mission following a specific flight path.
Abstract:
Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under- and over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, recently there has been a growth in interest in lifelong autonomy for robots, which brings with it the challenge in outdoor environments of dealing with a moving sun and lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to only learning visual scenes at one time of day, and show that the system is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
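A whole-image, low-resolution matcher of the kind this line of work relies on can be sketched as follows. This is our illustration of the general idea — mean-normalised sum of absolute differences so a scene still matches itself under a global illumination change — not the paper's exact matcher:

```python
def normalise(img):
    """Subtract the mean intensity to discount global illumination."""
    mean = sum(img) / len(img)
    return [p - mean for p in img]

def sad(a, b):
    """Sum of absolute differences between two normalised profiles."""
    return sum(abs(x - y) for x, y in zip(normalise(a), normalise(b)))

def best_match(query, templates):
    scores = [sad(query, t) for t in templates]
    return scores.index(min(scores))

day = [10, 80, 40, 60]         # low-res scene at midday
dusk = [p - 30 for p in day]   # same scene, globally darker
other = [70, 10, 90, 5]
print(best_match(dusk, [other, day]))  # 1 -> matches the daytime template
```

Because the comparison works on whole low-resolution intensity profiles rather than point features, it degrades gracefully with blur and exposure problems.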
Abstract:
Biorobotics has the potential to provide an integrated understanding from neural systems to behavior that is neither ethical nor technically feasible with living systems. Robots that can interact with animals in their natural environment open new possibilities for empirical studies in neuroscience. However, designing a robot that can interact with a rodent requires considerations that span a range of disciplines. For the rat's safety, the robot's body form and movements must take into account the animal's safety, an appropriate size for rodent arenas, and suitable behaviors for interaction. For the robot's safety, its form must be robust in the face of typically inquisitive and potentially aggressive behaviors by the rodent, which can include chewing on exposed parts, including electronics, and deliberate or accidental fouling. We designed a rat-sized robot, the iRat (intelligent rat animat technology), for studies in neuroscience. The iRat is about the same size as a rat and has the ability to navigate autonomously around small environments. In this study we report the first interactions between the iRat and real rodents in a free exploration task. Studies with five rats show that the rats and iRat interact safely for both parties.
Abstract:
For humans and robots to communicate using natural language it is necessary for the robots to develop concepts and associated terms that correspond to the human use of words. Time and space are foundational concepts in human language, and to develop a set of words that correspond to human notions of time and space, it is necessary to take into account the way that they are used in natural human conversations, where terms and phrases such as 'soon', 'in a while', or 'near' are often used. We present language learning robots called Lingodroids that can learn and use simple terms for time and space. In previous work, the Lingodroids were able to learn terms for space. In this work we extend their abilities by adding temporal variables which allow them to learn terms for time. The robots build their own maps of the world and interact socially to form a shared lexicon for location and duration terms. The robots successfully use the shared lexicons to communicate places and times to meet again.
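The social interaction that forms a shared lexicon can be sketched as a simple naming game. The protocol below is a heavily simplified illustration in the spirit of the Lingodroids work (the word-invention scheme and game structure are our assumptions): a speaker names the current location, inventing a word if it has none, and the hearer adopts the word, so repeated games converge on a shared vocabulary.

```python
import random

random.seed(0)  # deterministic invented words for the demo

def invent_word():
    return "".join(random.choice("aeioukzh") for _ in range(4))

def play_game(speaker, hearer, location):
    """One naming game: speaker names the location, hearer adopts it."""
    if location not in speaker:
        speaker[location] = invent_word()
    hearer[location] = speaker[location]

robot_a, robot_b = {}, {}
for loc in ["kitchen-corner", "long-corridor"]:
    play_game(robot_a, robot_b, loc)           # A teaches B
play_game(robot_b, robot_a, "kitchen-corner")  # B can now teach back
print(robot_a == robot_b)  # True: shared lexicon for both places
```

The same scheme extends to duration terms by letting locations be replaced with experienced time intervals, which is the extension the abstract describes.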
Abstract:
Time and space are fundamental to human language and embodied cognition. In our early work we investigated how Lingodroids, robots with the ability to build their own maps, could evolve their own geopersonal spatial language. In subsequent studies we extended the framework developed for learning spatial concepts and words to learning temporal intervals. This paper considers a new aspect of time, the naming of concepts like morning, afternoon, dawn, and dusk, which are events that are part of day-night cycles, but are not defined by specific time points on a clock. Grounding of such terms refers to events and features of the diurnal cycle, such as light levels. We studied event-based time in which robots experienced day-night cycles that varied with the seasons throughout a year. Then we used meet-at tasks to demonstrate that the words learned were grounded, where the times to meet were morning and afternoon, rather than specific clock times. The studies show how words and concepts for a novel aspect of cyclic time can be grounded through experience with events rather than by times as measured by clocks or calendars.
Abstract:
Purpose – The article aims to review a university course, offered to students in both Australia and Germany, to encourage them to learn about designing, implementing, marketing and evaluating information programs and services in order to build active and engaged communities. The concepts and processes of Web 2.0 technologies come together in the learning activities, with students establishing their own personal learning networks (PLNs). Design/methodology/approach – The case study examines the principles of learning and teaching that underpin the course and presents the students' own experiences of the challenges they faced as they explored the interactive, participative and collaborative dimensions of the web. Findings – The online format of the course and the philosophy of learning through play provided students with a safe and supportive environment for them to move outside of their comfort zones, to be creative, to experiment and to develop their professional personas. Reflection on learning was a key component that stressed the value of reflective practice in assisting library and information science (LIS) professionals to adapt confidently to the rapidly changing work environment. Originality/value – This study provides insights into the opportunities for LIS courses to work across geographical boundaries, to allow students to critically appraise library practice in different contexts and to become active participants in wider professional networks.
Abstract:
The building sector is the dominant consumer of energy and therefore a major contributor to anthropogenic climate change. The rapid generation of photorealistic, 3D environment models with incorporated surface temperature data has the potential to improve thermographic monitoring of building energy efficiency. In pursuit of this goal, we propose a system which combines a range sensor with a thermal-infrared camera. Our proposed system can generate dense 3D models of environments with both appearance and temperature information, and is the first such system to be developed using a low-cost RGB-D camera. The proposed pipeline processes depth maps successively, forming an ongoing pose estimate of the depth camera and optimizing a voxel occupancy map. Voxels are assigned four channels representing estimates of their true RGB and thermal-infrared intensity values. Poses corresponding to each RGB and thermal-infrared image are estimated through a combination of timestamp-based interpolation and predetermined knowledge of the extrinsic calibration of the system. Raycasting is then used to color the voxels to represent both visual appearance using RGB, and an estimate of the surface temperature. The output of the system is a dense 3D model which can simultaneously represent both RGB and thermal-infrared data using one of two alternative representation schemes. Experimental results demonstrate that the system is capable of accurately mapping difficult environments, even in complete darkness.
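The four-channel voxel representation can be illustrated with a minimal data-structure sketch. The layout below (a running per-channel mean keyed by voxel index) is our assumption for illustration, not the authors' implementation:

```python
class VoxelMap:
    """Sparse voxel map: each voxel keeps a running mean of R, G, B
    and thermal-infrared intensity, plus a sample count."""
    def __init__(self):
        self.voxels = {}   # (i, j, k) -> [r, g, b, thermal, n_samples]

    def integrate(self, idx, rgb, thermal):
        v = self.voxels.setdefault(idx, [0.0, 0.0, 0.0, 0.0, 0])
        n = v[4]
        for c, new in enumerate(list(rgb) + [thermal]):
            v[c] = (v[c] * n + new) / (n + 1)   # incremental mean update
        v[4] = n + 1

vmap = VoxelMap()
vmap.integrate((1, 2, 3), (100, 120, 90), thermal=21.0)
vmap.integrate((1, 2, 3), (110, 130, 100), thermal=23.0)
print(vmap.voxels[(1, 2, 3)][:4])  # [105.0, 125.0, 95.0, 22.0]
```

Averaging over repeated observations is one simple way to reconcile the RGB and thermal images that land on the same voxel from different poses.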
Abstract:
This paper demonstrates, following Vygotsky, that language and tool use has a critical role in the collaborative problem-solving behaviour of school-age children. It reports original ethnographic classroom research examining the convergence of speech and practical activity in children’s collaborative problem solving with robotics programming tasks. The researchers analysed children’s interactions during a series of problem solving experiments in which Lego Mindstorms toolsets were used by teachers to create robotics design challenges among 24 students in a Year 4 Australian classroom (students aged 8.5–9.5 years). The design challenges were incrementally difficult, beginning with basic programming of straight line movement, and progressing to more complex challenges involving programming of the robots to raise Lego figures from conduit pipes using robots as pulleys with string and recycled materials. Data collection involved micro-genetic analysis of students’ speech interactions with tools, peers, and other experts, teacher interviews, and student focus group data. Coding the repeated patterns in the transcripts, the authors outline the structure of the children’s social speech in joint problem solving, demonstrating the patterns of speech and interaction that play an important role in the socialisation of the school-age child’s practical intellect.
Abstract:
In this paper we propose a method to generate a large-scale and accurate dense 3D semantic map of street scenes. A dense 3D semantic model of the environment can significantly improve a number of robotic applications such as autonomous driving, navigation or localisation. Instead of using offline-trained classifiers for semantic segmentation, our approach employs a data-driven, nonparametric method to parse scenes, which scales easily to large environments and generalises to different scenes. We use stereo image pairs collected from cameras mounted on a moving car to produce dense depth maps, which are combined into a global 3D reconstruction using camera poses from stereo visual odometry. Simultaneously, 2D automatic semantic segmentation using a nonparametric scene parsing method is fused into the 3D model. Furthermore, the resultant 3D semantic model is improved by taking moving objects in the scene into consideration. We demonstrate our method on the publicly available KITTI dataset and evaluate the performance against manually generated ground truth.
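The fusion of per-frame 2D labels into the 3D model can be sketched as per-point label voting: each 3D point accumulates a histogram of the labels assigned to it by the frames that observe it, and its final label is the histogram's mode. This is a simplified illustration of the general technique (function names and data flow are ours, not the paper's):

```python
from collections import Counter, defaultdict

label_votes = defaultdict(Counter)

def fuse(point_id, label):
    """Record one frame's 2D semantic label for a 3D point."""
    label_votes[point_id][label] += 1

def final_label(point_id):
    """The fused label is the most frequent vote."""
    return label_votes[point_id].most_common(1)[0][0]

# Three frames observe point 7; one 2D segmentation is noisy.
for lbl in ["road", "road", "car"]:
    fuse(7, lbl)
print(final_label(7))  # road
```

Voting across frames is what lets the 3D model suppress per-frame segmentation noise, and down-weighting votes from detected moving objects is one natural place to plug in the abstract's moving-object refinement.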
Abstract:
This paper presents practical vision-based collision avoidance for objects approximating a single point feature. Using a spherical camera model, a visual predictive control scheme guides the aircraft around the object along a conical spiral trajectory. Visibility, state and control constraints are considered explicitly in the controller design by combining image and vehicle dynamics in the process model, and solving the nonlinear optimization problem over the resulting state space. Importantly, range is not required. Instead, the principles of conical spiral motion are used to design an objective function that simultaneously guides the aircraft along the avoidance trajectory, whilst providing an indication of the appropriate point to stop the spiral behaviour. Our approach is aimed at providing a potential solution to the See and Avoid problem for unmanned aircraft and is demonstrated through a series.
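The conical spiral geometry underlying the avoidance trajectory can be sketched numerically. The construction below is our own geometric illustration (not the visual predictive controller from the paper): the aircraft circles the obstacle while radius and altitude vary linearly with the swept angle.

```python
import math

def conical_spiral(r0, r1, z0, z1, turns, steps):
    """Waypoints on a conical spiral around the origin (the obstacle)."""
    pts = []
    for s in range(steps + 1):
        f = s / steps
        theta = 2 * math.pi * turns * f
        r = r0 + (r1 - r0) * f    # radius opens out along the spiral
        z = z0 + (z1 - z0) * f    # altitude ramps with the swept angle
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts

path = conical_spiral(r0=5.0, r1=20.0, z0=0.0, z1=10.0, turns=2, steps=100)
print(len(path), path[0], path[-1][2])
```

Note that only bearing and the spiral parameters enter this construction; consistent with the abstract, no range-to-obstacle measurement is required to shape the path itself.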