112 results for Robotic mapping
Abstract:
Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible, partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process. The project thereby addresses the principal technical stumbling block to widespread adoption of site-specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields. There are three main aspects to the project.
1) Machine vision hardware. The hardware components of the system are one or more cameras connected to a single-board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single-camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from the other work packages and updated as required.
2) Image capture and weed identification software. The machine vision system will be attached to the toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions, at different crop growth stages and against different crop backgrounds. Having captured geo-referenced images in the field, image analysis software will be developed by Murray State and Reading Universities, with advice from The Arable Group, to identify weed species. A wide range of pattern recognition techniques, and in particular Bayesian Networks, will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project because we intend to correlate images collected at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms.
Using such a list of features observable with our machine vision system, we will determine those that can be used to distinguish the weed species of interest.
3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields, but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed.
The principal hypothesis and concept to be tested is that, by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced. If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would therefore arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for a future in which policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines.
Objectives: i. To develop a prototype machine vision system for automated image capture during agricultural field operations; ii. To prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; iii. To generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
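To make the Bayesian classification idea in the abstract above concrete, the following is a minimal sketch of scoring weed species from a leaf-feature vector with a naive Bayes formulation, where a prior encodes a priori knowledge of which weeds occur in the field. The species list matches the abstract, but the feature set, Gaussian class models and all numbers are invented for illustration; this is not the project's actual software.

```python
import math

# Hypothetical Gaussian class models: (mean, std) of two leaf features
# (length/width ratio, greenness index) per species. Illustrative only.
CLASS_MODELS = {
    "black-grass": {"ratio": (8.0, 1.5), "green": (0.55, 0.08)},
    "wild oats":   {"ratio": (6.0, 1.2), "green": (0.60, 0.07)},
    "cleavers":    {"ratio": (1.5, 0.4), "green": (0.45, 0.06)},
}

def gaussian_log_pdf(x, mean, std):
    """Log-density of a univariate Gaussian."""
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def classify(features, priors):
    """Naive Bayes: combine per-feature likelihoods with field priors.

    `priors` stands in for a priori knowledge of the weeds known to be
    present in the field, as described in the abstract.
    """
    scores = {}
    for species, model in CLASS_MODELS.items():
        log_post = math.log(priors.get(species, 1e-6))
        for name, value in features.items():
            mean, std = model[name]
            log_post += gaussian_log_pdf(value, mean, std)
        scores[species] = log_post
    return max(scores, key=scores.get)

# Example: a long, narrow, fairly green leaf in a field where
# black-grass is known to be common.
print(classify({"ratio": 7.6, "green": 0.57},
               priors={"black-grass": 0.7, "wild oats": 0.2, "cleavers": 0.1}))
```

In a full system, each classified plant would be stamped with the GPS fix of the image it came from, yielding the geo-referenced weed map the project targets.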
Abstract:
The main objective is to develop methods that automatically generate kinematic models for the movements of biological and robotic systems. Two methods for the identification of the kinematics are presented. The first method requires the elimination of the displacement variables that cannot be measured, while the second method attempts to estimate the changes in these variables. The methods were tested using a planar two-revolute-joint linkage. Results show that the model parameters obtained agree with the actual parameters to within 5%. Moreover, the methods were applied to model head and neck movements in the sagittal plane. The results indicate that these movements are well modeled by a two-revolute-joint system. A spatial three-revolute-joint model was also discussed and tested.
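For reference, the planar two-revolute-joint model named in this abstract has the standard forward kinematics sketched below. The identification methods themselves are not reproduced here, and the link lengths and joint angles are illustrative placeholders.

```python
import math

def planar_2r_forward_kinematics(theta1, theta2, l1, l2):
    """End-point position of a planar two-revolute-joint linkage.

    theta1, theta2: joint angles in radians (theta2 measured relative
    to link 1); l1, l2: link lengths.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Illustrative values only: parameters identified to within 5% of the
# true link lengths predict end-point positions close to the truth.
true_xy = planar_2r_forward_kinematics(0.4, 0.8, l1=0.30, l2=0.25)
est_xy  = planar_2r_forward_kinematics(0.4, 0.8, l1=0.315, l2=0.2375)  # +5%, -5%
print(true_xy, est_xy)
```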
Abstract:
Socially intelligent agents (software or robotic) are increasingly used in education, rehabilitation and therapy. This paper discusses the role of interactive, mobile robots as social mediators in the particular domain of autism therapy. This research is part of the AURORA project, which studies how mobile robots can be used to teach children with autism basic interaction skills that are important in social interactions among humans. Results from a particular series of trials involving pairs of children and a mobile robot are described. The results show that the scenario with pairs of children and a robot creates a very interesting social context which gives rise to a variety of different social and non-social interaction patterns, demonstrating both the specific problems and the abilities of children with autism in social interactions. Future work will include a closer analysis of interactional structure in human-human and robot-human interaction. We outline a particular framework that we are investigating.
Abstract:
The ‘action observation network’ (AON), which is thought to translate observed actions into the motor codes required for their execution, is biologically tuned: it responds more to observation of human than of non-human movement. This biological specificity has been taken to support the hypothesis that the AON underlies various social functions, such as theory of mind and action understanding, and that, when it is active during observation of non-human agents like humanoid robots, it is a sign of ascription of human mental states to these agents. This review will outline evidence for biological tuning in the AON, examining the features which generate it, and concluding that there is evidence for tuning to both the form and the kinematic profile of observed movements, and little evidence for tuning to beliefs about stimulus identity. It will propose that a likely reason for biological tuning is that human actions, relative to non-biological movements, are more likely to have been observed while the observer was executing corresponding actions. If the associative hypothesis of the AON is correct, and the network indeed supports social functioning, sensorimotor experience with non-human agents may help us to predict, and therefore interpret, their movements.
Abstract:
A neural network was used to map three PID operating regions for a two-input, two-output steam generator system. After being trained on the PID controllers corresponding to each control region, the network was used in stand-alone feedforward operation to control the whole operating range of the process. The network inputs are the plant error signals, their integral, their derivative and a 4-error delay train.
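As an illustration of the input structure this abstract describes, the sketch below assembles the network's input vector for a two-output plant. The shapes, the four-sample depth of the delay train and the array layout are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def build_network_input(errors, integral, derivative, history):
    """Assemble the controller network's input vector.

    errors:     current error for each of the two plant outputs
    integral:   running integral of each error
    derivative: finite-difference derivative of each error
    history:    last 4 error samples per output (the "delay train")
    """
    return np.concatenate([errors, integral, derivative, history.ravel()])

# Two outputs -> 2 errors + 2 integrals + 2 derivatives + 2*4 delayed
# errors = 14 network inputs in this sketch (values invented).
errors = np.array([0.10, -0.05])
integral = np.array([0.40, -0.12])
derivative = np.array([-0.02, 0.01])
history = np.array([[0.12, 0.11, 0.09, 0.08],
                    [-0.06, -0.05, -0.05, -0.04]])
x = build_network_input(errors, integral, derivative, history)
print(x.shape)  # (14,)
```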
Abstract:
The paper describes a self-tuning adaptive PID controller suitable for use in the control of robotic manipulators. The scheme employs a simple recursive estimator which reduces the computational effort to an acceptable level for many applications in robotics.
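The abstract does not specify which recursive estimator is used, but recursive least squares (RLS) with a forgetting factor is a common low-cost choice for self-tuning control, so the following is a hedged sketch of one RLS update step. The regressor contents, dimensions and forgetting factor are assumptions for illustration.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive least-squares update with forgetting factor `lam`.

    theta: current parameter estimate; P: covariance matrix;
    phi: regressor vector (e.g. past joint outputs and inputs);
    y: new measurement.
    """
    K = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + K * (y - phi @ theta)  # correct the prediction error
    P = (P - np.outer(K, phi @ P)) / lam   # covariance update
    return theta, P

# Illustrative usage: 4 unknown parameters, one new sample.
n = 4
theta = np.zeros(n)
P = 1e3 * np.eye(n)
phi = np.array([1.0, 0.5, -0.2, 0.1])  # invented regressor
theta, P = rls_step(theta, P, phi, y=0.8)
print(theta)
```

In a self-tuning PID scheme of this kind, the estimated plant parameters would be used at each step to retune the controller gains; the update above costs only a few vector-matrix products, which is what keeps the computational effort low.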
Abstract:
This study investigated a new treatment in which sentence production abilities were trained in a small group of individuals with nonfluent aphasia. It was based upon a mapping therapy approach, which holds that sentence production and comprehension impairments are due to difficulties in mapping between the meaning form (thematic roles) and the syntactic form of sentences. We trained production of both canonical and noncanonical reversible sentences. Three patients received treatment and two served as control participants. Patients who received treatment demonstrated acquisition of all trained sentence structures. They also demonstrated across-task generalisation of treated and some untreated sentence structures on two tasks of constrained sentence production, and showed some improvements on a narrative task. One control participant improved on some of these measures and the other did not. There was no noted improvement in sentence comprehension abilities following treatment. Results are discussed with reference to the heterogeneity of the impairments underlying sentence production deficits in nonfluent patients, and the possible mechanisms by which improvement in sentence production might have been achieved in treatment.
Abstract:
Recent research in cognitive neuroscience has found that observation of human actions activates the ‘mirror system’ and provokes automatic imitation to a greater extent than observation of non-biological movements. The present study investigated whether this human bias depends primarily on phylogenetic or ontogenetic factors by examining the effects of sensorimotor experience on automatic imitation of non-biological, robotic stimuli. Automatic imitation of human and robotic action stimuli was assessed before and after training. During these test sessions, participants were required to execute a pre-specified response (e.g. to open their hand) while observing a human or robotic hand making a compatible (opening) or incompatible (closing) movement. During training, participants executed opening and closing hand actions while observing compatible (group CT) or incompatible (group IT) movements of a robotic hand. Compatible, but not incompatible, training increased automatic imitation of robotic stimuli (the speed of responding on compatible trials, compared with incompatible trials) and abolished the human bias observed at pre-test. These findings suggest that the development of the mirror system depends on sensorimotor experience and that, in our species, it is biased in favour of human action stimuli because these are more abundant than non-biological action stimuli in typical developmental environments.
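The automatic imitation index used in this abstract (and in the two that follow) is simply the reaction-time difference between incompatible and compatible trials. The snippet below computes that index; the reaction times are invented for illustration.

```python
import numpy as np

def automatic_imitation_effect(rt_compatible, rt_incompatible):
    """Automatic imitation index: mean RT on incompatible trials minus
    mean RT on compatible trials (larger = stronger imitation)."""
    return np.mean(rt_incompatible) - np.mean(rt_compatible)

# Invented reaction times (ms) for one participant, before and after
# compatible training with robotic stimuli.
pre  = automatic_imitation_effect([402, 410, 398], [430, 441, 425])
post = automatic_imitation_effect([390, 395, 401], [452, 460, 448])
print(pre, post)  # compatible training should increase the effect
```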
Abstract:
Visual observation of human actions provokes more motor activation than observation of robotic actions. We investigated the extent to which this visuomotor priming effect is mediated by bottom-up or top-down processing. The bottom-up hypothesis suggests that robotic movements are less effective in activating the ‘mirror system’ via pathways running from visual areas through the superior temporal sulcus to the parietal and premotor cortices. The top-down hypothesis postulates that beliefs about the animacy of a movement stimulus modulate mirror system activity via descending pathways from areas such as the temporal pole and prefrontal cortex. In an automatic imitation task, subjects performed a prespecified movement (e.g. hand opening) on presentation of a human or robotic hand making a compatible (opening) or incompatible (closing) movement. The speed of responding on compatible trials, compared with incompatible trials, indexed visuomotor priming. In the first experiment, robotic stimuli were constructed by adding a metal-and-wire ‘wrist’ to a human hand. Questionnaire data indicated that subjects believed these movements to be less animate than those of the human stimuli, but the visuomotor priming effects of the human and robotic stimuli did not differ. In the second experiment, when the robotic stimuli were more angular and symmetrical than the human stimuli, human movements elicited more visuomotor priming than the robotic movements. However, the subjects’ beliefs about the animacy of the stimuli did not affect their performance. These results suggest that bottom-up processing is primarily responsible for the visuomotor priming advantage of human stimuli.
Abstract:
Recent behavioural and neuroimaging studies have found that observation of human movement, but not of robotic movement, gives rise to visuomotor priming. This implies that the 'mirror neuron' or 'action observation–execution matching' system in the premotor and parietal cortices is entirely unresponsive to robotic movement. The present study investigated this hypothesis using an 'automatic imitation' stimulus–response compatibility procedure. Participants were required to perform a prespecified movement (e.g. opening their hand) on presentation of a human or robotic hand in the terminal posture of a compatible movement (opened) or an incompatible movement (closed). Both the human and the robotic stimuli elicited automatic imitation; the prespecified action was initiated faster when it was cued by the compatible movement stimulus than when it was cued by the incompatible movement stimulus. However, even when the human and robotic stimuli were of comparable size, colour and brightness, the human hand had a stronger effect on performance. These results suggest that effector shape is sufficient to allow the action observation–matching system to distinguish human from robotic movement. They also indicate, as one would expect if this system develops through learning, that to varying degrees both human and robotic action can be 'simulated' by the premotor and parietal cortices.