931 results for Visual robot control
Abstract:
Recent theories propose that semantic representation and sensorimotor processing have a common substrate via simulation. We tested the prediction that comprehension interacts with perception, using a standard psychophysics methodology. While passively listening to verbs that referred to upward or downward motion, and to control verbs that did not refer to motion, 20 subjects performed a motion-detection task, indicating whether or not they saw motion in visual stimuli containing threshold levels of coherent vertical motion. A signal detection analysis revealed that when verbs were directionally incongruent with the motion signal, perceptual sensitivity was impaired. Word comprehension also affected decision criteria and reaction times, but in different ways. The results are discussed with reference to existing explanations of embodied processing and the potential of psychophysical methods for assessing interactions between language and perception.
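The signal detection analysis mentioned in this abstract can be sketched with the standard formulas for sensitivity (d′) and decision criterion (c). This is a minimal illustration, not the authors' analysis code, and the trial counts below are hypothetical:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and criterion (c) from trial counts.

    A log-linear correction (add 0.5 to each cell) keeps the z-transform
    finite when hit or false-alarm rates reach 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)          # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Hypothetical counts for direction-congruent vs. incongruent verb trials:
d_cong, c_cong = sdt_measures(hits=40, misses=10,
                              false_alarms=8, correct_rejections=42)
d_incong, c_incong = sdt_measures(hits=32, misses=18,
                                  false_alarms=12, correct_rejections=38)
```

An impairment of perceptual sensitivity on incongruent trials would show up as a lower d′ for the incongruent counts, with criterion effects visible separately in c.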
Abstract:
Recently a substantial amount of research has been done in the field of dextrous manipulation and hand manoeuvres. The main concern has been how to control robot hands so that they can execute manipulation tasks with the same dexterity and intuition as human hands. This paper surveys multi-fingered robot hand research and development topics, which include robot hand design, object force distribution and control, grip transform, grasp stability and its synthesis, grasp stiffness and compliant motion, and robot arm-hand coordination. Three main topics are presented in this article. The first is an introduction to the subject. The second concentrates on examples of mechanical manipulators used in research and the methods employed to control them. The third presents work which has been done in the field of object manipulation.
Abstract:
This paper presents recent developments to a vision-based traffic surveillance system which relies extensively on the use of geometrical and scene context. Firstly, a highly parametrised 3-D model is reported, able to adopt the shape of a wide variety of different classes of vehicle (e.g. cars, vans, buses etc.), along with its subsequent specialisation to a generic car class which accounts for commonly encountered types of car (including saloon, hatchback and estate cars). Sample data collected from video images, by means of an interactive tool, have been subjected to principal component analysis (PCA) to define a deformable model having 6 degrees of freedom. Secondly, a new pose refinement technique using “active” models is described, able to recover both the pose of a rigid object, and the structure of a deformable model; an assessment of its performance is examined in comparison with previously reported “passive” model-based techniques in the context of traffic surveillance. The new method is more stable, and requires fewer iterations, especially when the number of free parameters increases, but shows somewhat poorer convergence. Typical applications for this work include robot surveillance and navigation tasks.
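The PCA step described above, deriving a small number of deformation modes from sample shape vectors, can be sketched as follows. This is a generic illustration under assumed data shapes, not the paper's model or data; the sample dimensions and the six-mode truncation are taken from the abstract's "6 degrees of freedom":

```python
import numpy as np

def fit_deformable_model(samples, n_modes=6):
    """Fit a linear deformable shape model by PCA via SVD.

    samples: (N, D) array, each row a flattened shape vector.
    Returns the mean shape, the first n_modes principal deformation
    modes (rows), and the corresponding singular values.
    """
    mean = samples.mean(axis=0)
    _, s, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:n_modes], s[:n_modes]

# Hypothetical data: 50 noisy shape vectors of dimension 20,
# generated from 6 underlying deformation directions.
rng = np.random.default_rng(0)
basis = rng.normal(size=(6, 20))
coeffs = rng.normal(size=(50, 6))
data = coeffs @ basis + 0.01 * rng.normal(size=(50, 20))

mean, modes, sv = fit_deformable_model(data, n_modes=6)
```

A new shape is then represented as the mean plus a weighted sum of the six modes, which is what gives the model its low-dimensional "degrees of freedom".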
Abstract:
Smooth trajectories are essential for safe interaction between a human and a haptic interface. Different methods and strategies have been introduced to create such smooth trajectories. This paper studies the creation of human-like movements in haptic interfaces, based on the study of human arm motion. These motions are intended to retrain the upper limb movements of patients who have lost manipulation function following a stroke. We present a model that uses higher-degree polynomials to define a trajectory and control the robot arm to achieve minimum-jerk movements. It also studies different methods that can be derived from polynomials to create more realistic human-like movements for therapeutic purposes.
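Minimum-jerk point-to-point motion of the kind this abstract refers to is commonly realised with a fifth-order polynomial. A minimal sketch, assuming straight-line motion with zero velocity and acceleration at both endpoints (this is the textbook form, not necessarily the paper's exact model):

```python
def minimum_jerk(x0, xf, duration, t):
    """Fifth-order minimum-jerk trajectory from x0 to xf over `duration`.

    Position follows x0 + (xf - x0) * (10*tau^3 - 15*tau^4 + 6*tau^5),
    where tau = t / duration. This polynomial minimises integrated
    squared jerk and has zero velocity and acceleration at both ends.
    """
    tau = min(max(t / duration, 0.0), 1.0)  # clamp to [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# Midpoint of a 2-second reach from 0 to 1 is exactly halfway:
mid = minimum_jerk(0.0, 1.0, duration=2.0, t=1.0)  # → 0.5
```

Sampling this polynomial at the controller rate yields the smooth reference that the robot arm then tracks.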
Abstract:
A robot-mounted camera is useful in many machine vision tasks as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on using pre-calibrated robots, pre-calibrated cameras, special calibration objects, or combinations of these. Our method avoids large-scale non-linear optimizations by recovering the parameters in small dependent groups. This is done by performing a series of planned, but initially uncalibrated, robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors which could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.
Abstract:
A three-degrees-of-freedom industrial robot is controlled by applying PID self-tuning (PID/ST) controllers. This control is considered as a corrective term added to a nominal value, centrally computed from an inaccurate and/or simplified dynamic model. An identification scheme on an assumed linear plant describing the deviation from the desired trajectory is employed in order to tune the controller coefficients and thus accomplish a behaviour prescribed through a desired pole placement. A salient feature of our approach is the decentralized nature of the controllers producing the corrective term for each joint. This opens the way to practical implementation, as recent computing requirement calculations for similar set-ups have shown in the literature. Numerical results are presented.
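The decentralized corrective term described above can be sketched as one independent discrete-time PID loop per joint, added to the centrally computed nominal command. The gains and discrete form below are illustrative assumptions, not the paper's self-tuning scheme (which adapts the coefficients online via identification):

```python
class JointPID:
    """Discrete-time PID producing a corrective command for one joint.

    The output is intended to be added to a nominal command computed
    from a (possibly simplified) dynamic model.
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def correction(self, desired, measured):
        error = desired - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# One controller per joint, tuned independently (decentralized):
joints = [JointPID(kp=2.0, ki=0.5, kd=0.05, dt=0.01) for _ in range(3)]
```

In a self-tuning variant, kp, ki and kd would be recomputed each cycle from the identified deviation model so that the closed-loop poles match the desired placement.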
Abstract:
The existence of a specialized imitation module in humans is hotly debated. Studies suggesting a specific imitation impairment in individuals with autism spectrum disorders (ASD) support a modular view. However, the voluntary imitation tasks used in these studies (which require socio-cognitive abilities in addition to imitation for successful performance) cannot support claims of a specific impairment. Accordingly, an automatic imitation paradigm (a ‘cleaner’ measure of imitative ability) was used to assess the imitative ability of 16 adults with ASD and 16 non-autistic matched control participants. Participants performed a prespecified hand action in response to observed hand actions performed either by a human or a robotic hand. On compatible trials the stimulus and response actions matched, while on incompatible trials the two actions did not match. Replicating previous findings, the Control group showed an automatic imitation effect: responses on compatible trials were faster than those on incompatible trials. This effect was greater when responses were made to human than to robotic actions (‘animacy bias’). The ASD group also showed an automatic imitation effect and a larger animacy bias than the Control group. We discuss these findings with reference to the literature on imitation in ASD and theories of imitation.
Abstract:
We investigated whether attention shifts and eye movement preparation are mediated by shared control mechanisms, as claimed by the premotor theory of attention. ERPs were recorded in three tasks where directional cues presented at the beginning of each trial instructed participants to direct their attention to the cued side without eye movements (Covert task), to prepare an eye movement in the cued direction without attention shifts (Saccade task) or both (Combined task). A peripheral visual Go/Nogo stimulus that was presented 800 ms after cue onset signalled whether responses had to be executed or withheld. Lateralised ERP components triggered during the cue–target interval, which are assumed to reflect preparatory control mechanisms that mediate attentional orienting, were very similar across tasks. They were also present in the Saccade task, which was designed to discourage any concomitant covert attention shifts. These results support the hypothesis that saccade preparation and attentional orienting are implemented by common control structures. There were however systematic differences in the impact of eye movement programming and covert attention on ERPs triggered in response to visual stimuli at cued versus uncued locations. It is concluded that, although the preparatory processes underlying saccade programming and covert attentional orienting may be based on common mechanisms, they nevertheless differ in their spatially specific effects on visual information processing.
Abstract:
The premotor theory of attention claims that attentional shifts are triggered during response programming, regardless of which response modality is involved. To investigate this claim, event-related brain potentials (ERPs) were recorded while participants covertly prepared a left or right response, as indicated by a precue presented at the beginning of each trial. Cues signalled a left or right eye movement in the saccade task, and a left or right manual response in the manual task. The cued response had to be executed or withheld following the presentation of a Go/Nogo stimulus. Although there were systematic differences between ERPs triggered during covert manual and saccade preparation, lateralised ERP components sensitive to the direction of a cued response were very similar for both tasks, and also similar to the components previously found during cued shifts of endogenous spatial attention. This is consistent with the claim that the control of attention and of covert response preparation are closely linked. N1 components triggered by task-irrelevant visual probes presented during the covert response preparation interval were enhanced when these probes were presented close to the cued response hand in the manual task, and at the saccade target location in the saccade task. This demonstrates that both manual and saccade preparation result in spatially specific modulations of visual processing, in line with the predictions of the premotor theory.
Abstract:
Strokes affect thousands of people worldwide, leaving sufferers with severe disabilities that affect their daily activities. In recent years, new rehabilitation techniques have emerged, such as constraint-induced therapy, biofeedback therapy and robot-aided therapy. In particular, robotic techniques allow precise recording of movements and application of forces to the affected limb, making robotics a valuable tool for motor rehabilitation. In addition, robot-aided therapy can utilise visual cues conveyed on a computer screen to convert repetitive movement practice into an engaging task such as a game. Visual cues can also be used to control the information sent to the patient about exercise performance and to potentially address psychosomatic variables influencing therapy. This paper overviews the current state of the art in upper limb robot-mediated therapy, with a focus on the technical requirements of robotic therapy devices leading to the development of upper limb rehabilitation techniques that facilitate reach-to-touch, fine motor control and whole-arm movements, and that promote rehabilitation beyond the hospital stay. The reviewed literature suggests that while there is evidence supporting the use of this technology to reduce functional impairment, beyond the technological push, the challenge ahead lies in the provision of effective outcome assessment and of modalities that have a stronger impact in transferring functional gains into functional independence.
Abstract:
Visual telepresence systems which utilize virtual-reality-style helmet-mounted displays have a number of limitations. The geometry of the camera positions and of the display is fixed and is most suitable only for viewing elements of a scene at a particular distance. In such a system, the operator's ability to gaze around without the use of head movement is severely limited. A trade-off must be made between poor viewing resolution and a narrow field of view. To address these limitations, a prototype system has been developed in which the geometry of the displays and cameras is dynamically controlled by the eye movements of the operator. This paper explores the reasons why it is necessary to actively adjust both the display system and the cameras, and furthermore justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image-processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. An assessment is made of the performance of the system against a fixed camera/display system when operators are assigned basic tasks involving depth and distance/size perception. The sensitivity to variations in transient performance of the display and camera vergence is also assessed.
Abstract:
Visual telepresence seeks to extend existing teleoperative capability by supplying the operator with a 3D interactive view of the remote environment. This is achieved through the use of a stereo camera platform which, through appropriate 3D display devices, provides a distinct image to each eye of the operator, and which is slaved directly from the operator's head and eye movements. However, the resolution within current head-mounted displays remains poor, thereby reducing the operator's visual acuity. This paper reports on the feasibility of incorporating eye tracking to increase resolution and investigates the stability and control issues for such a system. Continuous-domain and discrete simulations are presented which indicate that eye tracking provides a stable feedback loop for tracking applications, though some empirical testing (currently being initiated) of such a system will be required to overcome indicated stability problems associated with microsaccades of the human operator.
Abstract:
Active robot force control requires some form of dynamic inner loop control for stability. The author considers the implementation of position-based inner loop control on an industrial robot fitted with encoders only. It is shown that high gain velocity feedback for such a robot, which is effectively stationary when in contact with a stiff environment, involves problems beyond the usual caveats on the effects of unknown environment stiffness. It is shown that it is possible for the controlled joint to become chaotic at very low velocities if encoder edge timing data are used for velocity measurement. The results obtained indicate that there is a lower limit on controlled velocity when encoders are the only means of joint measurement. This lower limit to speed is determined by the desired amount of loop gain, which is itself determined by the severity of the nonlinearities present in the drive system.
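The edge-timing velocity measurement discussed above can be sketched as the 1/Δt method: velocity is estimated from the interval between successive encoder edges, so at very low speed the intervals grow and the estimate becomes coarse and delayed. The function below is an illustrative assumption of that scheme, not the author's implementation:

```python
import math

def edge_timing_velocity(edge_times, counts_per_rev):
    """Estimate joint velocity from encoder edge timestamps (1/Delta-t method).

    edge_times: timestamps (seconds) of successive encoder edges.
    counts_per_rev: encoder edges per revolution.
    Returns angular velocity in rad/s based on the latest edge interval.
    At very low speed the interval between edges grows without bound,
    which is what limits the usable velocity-loop gain.
    """
    if len(edge_times) < 2:
        return 0.0  # no interval available yet
    dt = edge_times[-1] - edge_times[-2]
    angle_per_edge = 2 * math.pi / counts_per_rev
    return angle_per_edge / dt

# With 1000 edges/rev and edges 1 ms apart, the shaft turns at 1 rev/s:
v = edge_timing_velocity([0.000, 0.001], counts_per_rev=1000)  # → 2*pi rad/s
```

High-gain feedback acting on such a quantised, interval-dependent estimate is what can drive the joint into the erratic low-speed behaviour the abstract describes.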
Abstract:
An experimental and theoretical comparison is made of force control performance with different types of inner-loop joint servoing techniques. The problem of disturbance rejection and sensitivity to plant dynamics variations (robustness) is addressed. Position, velocity, strain-gauge-derived joint torque, and current servos are designed and implemented on a specially instrumented industrial robot, and the end-effector force feedback performances achieved are compared. Joint-strain-derived torque servoing is found to provide the best overall robust force control performance. Experimental results of the robust hard-on-hard contact achieved with the novel force controller implementation based on joint torque sensing are provided. Conclusions are drawn on the force control performance achievable on a geared robot given the joint servoing technique.