907 results for Robot navigation
Abstract:
In recent years, researchers in the Department of Cybernetics have been developing simple mobile robots capable of exploring their environment on the basis of the information obtained from a few simple sensors. These robots are used as the test bed for exploring various behaviours of single and multiple organisms: the work is inspired by considerations of natural systems. In this paper we concentrate on that part of the work which involves neural networks and related techniques. These neural networks are used both to process the sensor information and to develop the strategy used to control the robot. Here the robots, their sensors, and the neural networks used are all described.
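The sensor-to-motor mapping described above can be caricatured in a few lines. This is purely an illustrative sketch: the network size, random weights, and tanh activation are assumptions, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(n_sensors=4, n_motors=2):
    """Random weights for an illustrative sensor -> motor mapping."""
    return rng.normal(scale=0.5, size=(n_motors, n_sensors))

def control(weights, sensors):
    """Map sensor readings to bounded wheel speeds via a tanh layer."""
    return np.tanh(weights @ sensors)

W = init_weights()
speeds = control(W, np.array([0.1, 0.9, 0.2, 0.0]))  # two motor commands
```

In practice such weights would be adapted (e.g. by an evolutionary or learning rule) rather than fixed at random; the sketch only shows the data flow from sensors to actuators.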
Abstract:
This paper presents recent developments to a vision-based traffic surveillance system which relies extensively on the use of geometrical and scene context. Firstly, a highly parametrised 3-D model is reported, able to adopt the shape of a wide variety of different classes of vehicle (e.g. cars, vans, buses etc.), and its subsequent specialisation to a generic car class which accounts for commonly encountered types of car (including saloon, hatchback and estate cars). Sample data collected from video images, by means of an interactive tool, have been subjected to principal component analysis (PCA) to define a deformable model having 6 degrees of freedom. Secondly, a new pose refinement technique using “active” models is described, able to recover both the pose of a rigid object, and the structure of a deformable model; an assessment of its performance is examined in comparison with previously reported “passive” model-based techniques in the context of traffic surveillance. The new method is more stable, and requires fewer iterations, especially when the number of free parameters increases, but shows somewhat poorer convergence. Typical applications for this work include robot surveillance and navigation tasks.
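The PCA step can be sketched as follows. The sample count, shape-vector length, and data here are invented for illustration; only the idea of retaining 6 principal modes as the deformable model's degrees of freedom follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
# 50 hypothetical vehicle shape vectors, each with 20 shape parameters
samples = rng.normal(size=(50, 20))

mean = samples.mean(axis=0)
centred = samples - mean
# Eigen-decomposition of the sample covariance matrix
cov = centred.T @ centred / (len(samples) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
modes = eigvecs[:, order[:6]]  # keep the 6 largest-variance modes

def deform(b):
    """Reconstruct a shape vector from 6 mode weights b."""
    return mean + modes @ b
```

Setting all six weights to zero recovers the mean shape; varying one weight sweeps the model along a single deformation mode (e.g. from saloon towards estate).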
Abstract:
Researchers in the rehabilitation engineering community have been designing and developing a variety of passive/active devices to help persons with limited upper extremity function to perform essential daily manipulations. Devices range from low-end tools such as head/mouth sticks to sophisticated robots using vision and speech input. While almost all of the high-end equipment developed to date relies on visual feedback alone to guide the user providing no tactile or proprioceptive cues, the “low-tech” head/mouth sticks deliver better “feel” because of the inherent force feedback through physical contact with the user's body. However, the disadvantage of a conventional head/mouth stick is that it can only function in a limited workspace and the performance is limited by the user's strength. It therefore seems reasonable to attempt to develop a system that exploits the advantages of the two approaches: the power and flexibility of robotic systems with the sensory feedback of a headstick. The system presented in this paper reflects the design philosophy stated above. This system contains a pair of master-slave robots with the master being operated by the user's head and the slave acting as a telestick. Described in this paper are the design, control strategies, implementation and performance evaluation of the head-controlled force-reflecting telestick system.
Abstract:
The Aurora project is investigating the possibility of using a robotic platform as a therapy aid for children with autism. Because of the nature of this disability, the robot could be beneficial in its ability to present the children with a safe and comfortable environment and allow them to explore and learn about the interaction space involved in social situations. The robotic platform is able to present information along a limited number of channels and in a manner which the children are familiar with from television and cartoons. Also, the robot is potentially able to adapt its behaviour and to allow the children to develop at their own rates. Initial trial results are presented and discussed, along with the rationale behind the project and its goals and motivations. The trial procedure and methodology are explained and future work is highlighted.
Abstract:
Since 1998, the Aurora project has been investigating the use of a robotic platform as a tool for therapy use with children with autism. A key issue in this project is the evaluation of the interactions, which are unconstrained and involve the child moving freely. Additionally, the response of the children is an important factor which must emerge from the robot trial sessions and the evaluation methodology, in order to guide further development work.
Abstract:
For individuals with upper-extremity motor disabilities, the head-stick is a simple and intuitive means of performing manipulations because it provides direct proprioceptive information to the user. Through practice and use of inherent proprioceptive cues, users may become quite adept at using the head-stick for a number of different tasks. The traditional head-stick is limited, however, to the user's achievable range of head motion and force generation, which may be insufficient for many tasks. The authors describe an interface to a robot system which emulates the proprioceptive qualities of a traditional head-stick while also allowing for augmented end-effector ranges of force and motion. The design and implementation of the system in terms of coordinate transforms, bilateral telemanipulator architecture, safety systems, and system identification of the master is described, in addition to preliminary evaluation results.
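The bilateral telemanipulator architecture can be caricatured as a position-forward, force-back loop: the slave is servoed to the master's (head) position, and the environment force at the slave is scaled back to the master. The single-axis model and gains below are assumptions for illustration, not the authors' controller.

```python
def bilateral_step(x_master, x_slave, f_env, kp=5.0, kf=0.8, dt=0.01):
    """One control step of an illustrative bilateral loop.

    The slave moves towards the master position (position channel);
    the master feels a scaled copy of the environment force (force channel).
    """
    v_slave = kp * (x_master - x_slave)   # slave servoed to master
    x_slave_next = x_slave + v_slave * dt
    f_master = kf * f_env                  # force reflected to the head
    return x_slave_next, f_master

# Slave converges to a held master position in free space (no contact force)
x_s, f_m = 0.0, 0.0
for _ in range(200):
    x_s, f_m = bilateral_step(x_master=1.0, x_slave=x_s, f_env=0.0)
```

A real implementation would add the coordinate transforms between head and end-effector frames, motion/force scaling limits, and the safety systems the abstract mentions.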
Abstract:
This paper presents a review of the design and development of the Yorick series of active stereo camera platforms and their integration into real-time closed loop active vision systems, whose applications span surveillance, navigation of autonomously guided vehicles (AGVs), and inspection tasks for teleoperation, including immersive visual telepresence. The mechatronic approach adopted for the design of the first system, including head/eye platform, local controller, vision engine, gaze controller and system integration, proved to be very successful. The design team comprised researchers with experience in parallel computing, robot control, mechanical design and machine vision. The success of the project has generated sufficient interest to sanction a number of revisions of the original head design, including the design of a lightweight compact head for use on a robot arm, and the further development of a robot head to look specifically at increasing visual resolution for visual telepresence. The controller and vision processing engines have also been upgraded, to include the control of robot heads on mobile platforms and control of vergence through tracking of an operator's eye movement. This paper details the hardware development of the different active vision/telepresence systems.
Abstract:
We introduce the perspex machine which unifies projective geometry and Turing computation and results in a supra-Turing machine. We show two ways in which the perspex machine unifies symbolic and non-symbolic AI. Firstly, we describe concrete geometrical models that map perspexes onto neural networks, some of which perform only symbolic operations. Secondly, we describe an abstract continuum of perspex logics that includes both symbolic logics and a new class of continuous logics. We argue that an axiom in symbolic logic can be the conclusion of a perspex theorem. That is, the atoms of symbolic logic can be the conclusions of sub-atomic theorems. We argue that perspex space can be mapped onto the spacetime of the universe we inhabit. This allows us to discuss how a robot might be conscious, feel, and have free will in a deterministic, or semi-deterministic, universe. We ground the reality of our universe in existence. On a theistic point, we argue that preordination and free will are compatible. On a theological point, we argue that it is not heretical for us to give robots free will. Finally, we give a pragmatic warning as to the double-edged risks of creating robots that do, or alternatively do not, have free will.
Abstract:
A robot-mounted camera is useful in many machine vision tasks as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on using pre-calibrated robots, pre-calibrated cameras, special calibration objects or combinations of these. Our method avoids using large-scale non-linear optimizations by recovering the parameters in small dependent groups. This is done by performing a series of planned, but initially uncalibrated robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors which could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.
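The "feature at image centre" measurement the calibration relies on can be illustrated with an ideal pinhole pan-tilt model: the angles that point the optical axis directly at a feature. The geometry and function below are assumptions for illustration, not the paper's calibration procedure.

```python
import math

def pan_tilt_to_centre(x, y, z):
    """Pan/tilt angles (radians) that centre the 3-D point (x, y, z)
    for an ideal pinhole camera at the origin looking along +z.

    On the optical axis, lens distortion is zero, which is why
    measurements taken with the feature centred avoid distortion error.
    """
    pan = math.atan2(x, z)
    tilt = math.atan2(y, math.hypot(x, z))
    return pan, tilt

# A point 45 degrees to the side, at camera height
pan, tilt = pan_tilt_to_centre(1.0, 0.0, 1.0)
```

The actual method additionally recovers the unknown kinematic offsets between the pan-tilt axes and the Cartesian robot from planned movements; this sketch only shows the centring geometry.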