841 results for Robot motion
Abstract:
This paper provides additional evidence in support of the hypothesis that robot therapies are clinically beneficial in neurorehabilitation. Although only four subjects were included in the study, the intervention and the outcome measures were designed to minimise bias. The results are presented as single case studies and, given the study size, can only be interpreted as such. The intensity of the intervention was 16 hours, and the therapy philosophy (based on Carr and Shepherd) was that coordinated movements are preferable to joint-based therapies, and that coordinating distal movements (in this case, grasps) not only helps to recover function in these areas but has greater value because the results transfer immediately to daily skills such as reach-and-grasp movements.
Abstract:
Visual motion cues play an important role in animal and human locomotion without the need to extract actual ego-motion information. This paper demonstrates a method for estimating the visual motion parameters, namely the Time-To-Contact (TTC), Focus of Expansion (FOE), and image angular velocities, from a sparse optical flow estimate registered from a downward-looking camera. The presented method can estimate the visual motion parameters under complex six-degree-of-freedom motion, in real time, and with accuracy suitable for mobile-robot visual navigation.
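The TTC/FOE recovery described above can be illustrated with a minimal sketch (not the authors' algorithm): assuming a purely divergent flow field f_i = (p_i − foe)/TTC — a deliberate simplification of the paper's six-degree-of-freedom model — both parameters fall out of a single linear least-squares fit to the sparse flow vectors:

```python
import numpy as np

def estimate_ttc_foe(points, flows):
    """Estimate Time-To-Contact (in frames) and the Focus of Expansion
    from sparse optical flow, assuming a purely divergent field
    f_i = k * (p_i - foe) with k = 1/TTC."""
    points = np.asarray(points, float)
    flows = np.asarray(flows, float)
    n = len(points)
    # Linearise: f_x = k*p_x - a and f_y = k*p_y - b,
    # with unknowns k, a = k*foe_x, b = k*foe_y.
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = points[:, 0]; A[0::2, 1] = -1.0
    A[1::2, 0] = points[:, 1]; A[1::2, 2] = -1.0
    rhs = flows.reshape(-1)                     # interleaved (fx, fy)
    k, a, b = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return 1.0 / k, np.array([a / k, b / k])    # TTC, FOE
```

Under pure translation along the optical axis this recovers both quantities exactly; the paper's method additionally estimates image angular velocities, which this sketch omits.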
Abstract:
This work presents a method of information fusion involving data captured by both a standard CCD camera and a time-of-flight (ToF) camera, to be used in detecting the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localisation of objects with respect to a world coordinate system and, at the same time, their colour. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a common coordinate system for both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, making it possible to characterise an object by its colour in addition to its 3D localisation. Further development of these methods will make it possible to identify objects and their position in the real world, and to use this information to prevent possible collisions between the robot and such objects.
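The reprojection step described above — mapping 3D ToF points into the 2D colour image using the calibration parameters — can be sketched with a standard pinhole model. The intrinsics K and extrinsics R, t below are hypothetical placeholders, and lens distortion is ignored for brevity:

```python
import numpy as np

def reproject_to_colour(points_3d, K, R, t):
    """Project 3D points (in the common world/ToF frame) into 2D pixel
    coordinates of the colour camera, given its intrinsic matrix K and
    extrinsic rotation R and translation t (all assumed calibrated)."""
    pts = np.asarray(points_3d, float)
    cam = (R @ pts.T).T + t          # world frame -> colour-camera frame
    uvw = (K @ cam.T).T              # perspective projection
    return uvw[:, :2] / uvw[:, 2:3]  # normalise by depth -> (u, v) pixels
```

Each projected (u, v) can then index the colour image, yielding the colour-plus-3D matrix the abstract describes.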
Abstract:
This paper presents a neuroscience-inspired, information-theoretic approach to motion segmentation. Robust motion segmentation is a fundamental first stage in many surveillance tasks. As an alternative to widely adopted individual segmentation approaches, which are challenged in different ways by imagery exhibiting a wide range of environmental variation and irrelevant motion, this paper presents a new biologically inspired approach which computes the multivariate mutual information between multiple complementary motion segmentation outputs. Evaluation across a range of datasets and against competing segmentation methods demonstrates robust performance.
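As a rough illustration of the information-theoretic idea (the paper uses a multivariate measure over several segmentation outputs; this sketch shows only the pairwise case), mutual information between two binary segmentation masks can be computed directly from their joint histogram:

```python
import numpy as np

def mutual_information(mask_a, mask_b):
    """Mutual information (in bits) between two binary segmentation
    masks -- a pairwise stand-in for the paper's multivariate measure."""
    a = np.asarray(mask_a).ravel().astype(int)
    b = np.asarray(mask_b).ravel().astype(int)
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))  # joint probability
            p_a = np.mean(a == va)                 # marginals
            p_b = np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi
```

Agreeing masks give high mutual information; independent masks give zero, which is the intuition behind fusing complementary segmenters.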
Abstract:
Unorganized traffic is a generalized form of travel wherein vehicles do not adhere to any predefined lanes and can travel in between lanes. Such travel is seen in a number of countries, e.g. India, where it enables higher traffic bandwidth, more overtaking, and more efficient travel. These advantages appear when vehicles vary considerably in size and speed; in their absence, predefined lanes are near-optimal. Motion planning for multiple autonomous vehicles in unorganized traffic deals with deciding how every vehicle travels, ensuring that no vehicle collides with another or with static obstacles. In this paper, the notion of predefined lanes is generalized to model unorganized travel for the purpose of planning vehicle trajectories. A uniform cost search is used to find the optimal motion strategy of a vehicle amidst the known travel plans of the other vehicles, with the aim of maximizing the separation from other vehicles and static obstacles. The search thereby defines an optimal lane distribution among the vehicles in the planning scenario. Clothoid curves are used for maintaining or changing lanes. Experiments are performed in simulation over a set of challenging scenarios with a complex grid of obstacles. Additionally, behaviours of overtaking, waiting for a vehicle to cross, and following another vehicle are exhibited.
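Uniform cost search itself is standard; a generic sketch is below, where the `neighbours` callback is a hypothetical stand-in for the paper's lane-state expansion with separation-based step costs:

```python
import heapq

def uniform_cost_search(start, goal, neighbours):
    """Uniform-cost search: always expands the cheapest frontier node.
    `neighbours(node)` yields (next_node, step_cost) pairs; returns
    (total_cost, path) for the cheapest route, or None if unreachable."""
    frontier = [(0.0, start, [start])]   # (cost so far, node, path)
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in neighbours(node):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None
```

In the paper's setting a "node" would encode a vehicle's lane choice along its route, and `step_cost` would penalise proximity to other vehicles and obstacles, so the cheapest path maximises separation.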
Abstract:
The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, together with a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy, as it increases with the number of elements. The vector average over local vectors that vary in direction always underestimates the true global speed. The HVA, however, provides the correct global speed and direction for a sample of local velocities that is unbiased with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for arrays spanning a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case, perceived velocity generally defaults to the HVA.
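The HVA rule is compact enough to state in three steps: invert each local velocity vector (v → v/|v|²), take the arithmetic mean, and invert the mean back. A sketch, assuming 2D velocity vectors as input:

```python
import numpy as np

def harmonic_vector_average(velocities):
    """Harmonic vector average of 2D local velocities: invert each
    vector (v / |v|^2), average arithmetically, invert the mean.
    For component velocities v_i = |V| cos(theta_i) of a single global
    motion V, sampled without directional bias, this recovers V."""
    v = np.asarray(velocities, float)
    inv = v / np.sum(v * v, axis=1, keepdims=True)  # vector inverses
    m = inv.mean(axis=0)
    return m / np.dot(m, m)                          # invert the mean
```

With local normals at angles theta_i to the global direction, each inverse vector is (1/|V|)(1, tan theta_i), so a symmetric sample of orientations averages to (1/|V|, 0) and inverts back to the true global velocity — the property the abstract claims for unbiased samples.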
Abstract:
In this paper, we investigate the possibility of controlling a mobile robot via a sensory-motor coupling that utilises a diffusion system. For this purpose, we implemented a simulation of the diffusion process of chemicals and of the kinematics of the mobile robot. In contrast to the original Braitenberg vehicle, in which the sensory-motor coupling is realised tightly by hardwiring, our system employs a soft coupling. The mobile robot has two independent sensory-motor units: two sensors at the front and two motors, one on each side of the robot. The sensory-motor coupling works as follows: 1) place two electrodes in the medium; 2) drop amounts of chemicals U and V related to the distance to the walls and the intensity of the light; 3) place another two electrodes in the medium; and 4) measure the concentrations of chemicals U and V to actuate the motors on both sides of the robot. The environment consisted of four surrounding walls and a light source located at the centre. Depending on the design parameters and initial conditions, the robot was able to successfully avoid the walls and the light. More interestingly, the diffusion process in the sensory-motor coupling provided the robot with a simple form of memory, which would not have been possible with a control framework based on a hard-wired electric circuit.
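The role of diffusion as a slow, leaky channel between sensors and motors can be illustrated with a minimal 1-D diffusion medium — a deliberate simplification of the paper's two-chemical U/V system. A deposit at the sensor electrode reaches the motor electrode delayed and smoothed, which is the "simple form of memory" referred to above:

```python
import numpy as np

def step_diffusion(u, d=0.2):
    """One explicit Euler step of 1-D diffusion, u_t = d * u_xx,
    on a line of cells with zero-flux (reflecting) boundaries."""
    lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u
    lap[0] = u[1] - u[0]          # reflecting boundary on the left
    lap[-1] = u[-2] - u[-1]       # reflecting boundary on the right
    return u + d * lap

# Sensor deposits chemical at cell 0; the motor reads cell -1.
u = np.zeros(20)
u[0] = 1.0                        # sensor deposit
readings = []
for _ in range(200):
    u = step_diffusion(u)
    readings.append(u[-1])        # motor-side concentration over time
```

The motor reading starts at zero and rises gradually as the deposit spreads, so the medium integrates past sensory events rather than reacting instantaneously, unlike a hard-wired circuit.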
Abstract:
Using data from the EISCAT (European Incoherent Scatter) VHF and CUTLASS (Co-operative UK Twin-Located Auroral Sounding System) HF radars, we study the formation of ionospheric polar cap patches and their relationship to the magnetopause reconnection pulses identified in the companion paper by Lockwood et al. (2005). It is shown that the poleward-moving, high-concentration plasma patches observed in the ionosphere by EISCAT on 23 November 1999, as reported by Davies et al. (2002), were often associated with corresponding reconnection rate pulses. However, not all such pulses generated a patch and only within a limited MLT range (11:00–12:00 MLT) did a patch result from a reconnection pulse. Three proposed mechanisms for the production of patches, and of the concentration minima that separate them, are analysed and evaluated: (1) concentration enhancement within the patches by cusp/cleft precipitation; (2) plasma depletion in the minima between the patches by fast plasma flows; and (3) intermittent injection of photoionisation-enhanced plasma into the polar cap. We devise a test to distinguish between the effects of these mechanisms. Some of the events repeat too frequently to apply the test. Others have sufficiently long repeat periods and mechanism (3) is shown to be the only explanation of three of the longer-lived patches seen on this day. However, effect (2) also appears to contribute to some events. We conclude that plasma concentration gradients on the edges of the larger patches arise mainly from local time variations in the subauroral plasma, via the mechanism proposed by Lockwood et al. (2000).
Abstract:
Using data from the EISCAT (European Incoherent Scatter) VHF radar and DMSP (Defense Meteorological Satellite Program) spacecraft passes, we study the motion of the dayside open-closed field line boundary during two substorm cycles. The satellite data show that the motions of ion and electron temperature boundaries in EISCAT data, as reported by Moen et al. (2004), are not localised around the radar; rather, they reflect motions of the open-closed field line boundary at all MLT throughout the dayside auroral ionosphere. The boundary is shown to erode equatorward when the IMF points southward, consistent with the effect of magnetopause reconnection. During the substorm expansion and recovery phases, the dayside boundary returns poleward, whether the IMF points northward or southward. However, the poleward retreat was much faster during the substorm for which the IMF had returned to northward than for the substorm for which the IMF remained southward – even though the former substorm is much the weaker of the two. These poleward retreats are consistent with the destruction of open flux at the tail current sheet. Application of a new analysis of the peak ion energies at the equatorward edge of the cleft/cusp/mantle dispersion seen by the DMSP satellites identifies the dayside reconnection merging gap to extend in MLT from about 9.5 to 15.5 h for most of the interval. Analysis of the boundary motion, and of the convection velocities seen near the boundary by EISCAT, allows calculation of the reconnection rate (mapped down to the ionosphere) from the flow component normal to the boundary in its own rest frame. This reconnection rate is not, in general, significantly different from zero before 06:45 UT (MLT<9.5 h) – indicating that the X line footprint expands over the EISCAT field-of-view to earlier MLT only occasionally and briefly. Between 06:45 UT and 12:45 UT (9.5
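The reconnection-rate calculation described above — the ionospheric magnetic field times the plasma flow across the open-closed boundary in the boundary's own rest frame — reduces to a one-line formula. The sketch below is illustrative only; the field strength default is a typical high-latitude ionospheric value, not a number taken from the paper:

```python
def reconnection_rate(v_flow_n, v_boundary_n, b_iono=5e-5):
    """Ionospheric reconnection electric field (V/m): magnetic field
    strength (T) times the normal flow speed relative to the moving
    boundary (m/s). Positive when plasma crosses from closed to open."""
    return b_iono * (v_flow_n - v_boundary_n)
```

When the boundary and the plasma move together the rate is zero, which is why the abstract stresses measuring the flow in the boundary's rest frame rather than in the radar frame.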
Abstract:
Data from the Dynamics Explorer 1 satellite and the EISCAT and Sondrestrom incoherent scatter radars have allowed a study of low-energy ion outflows from the ionosphere into the magnetosphere during a rapid expansion of the polar cap. From the combined radar data, a 200 kV increase in cross-cap potential is estimated. The upflowing ions show “X” signatures in the pitch angle–time spectrograms in the expanding midnight sector of the auroral oval. These signatures reveal low-energy (below about 60 eV), light-ion beams sandwiched between two regions of ion conics and are associated with inverted-V electron precipitation. The lack of mass dispersion at the poleward edge of the event, despite great differences in times of flight, reflects the equatorward expansion of the acceleration regions at velocities similar to those of the antisunward convection. In addition, a transient burst of O+ upflow is observed within the cap, possibly due to enhanced Joule heating during the event.
Abstract:
Learning to talk about motion in a second language is very difficult because it involves restructuring deeply entrenched patterns from the first language (Slobin 1996). In this paper we argue that statistical learning (Saffran et al. 1997) can explain why L2 learners are only partially successful in restructuring their second language grammars. We explore to what extent L2 learners make use of two mechanisms of statistical learning, entrenchment and pre-emption (Boyd and Goldberg 2011) to acquire target-like expressions of motion and retreat from overgeneralisation in this domain. Paying attention to the frequency of existing patterns in the input can help learners to adjust the frequency with which they use path and manner verbs in French but is insufficient to acquire the boundary crossing constraint (Slobin and Hoiting 1994) and learn what not to say. We also look at the role of language proficiency and exposure to French in explaining the findings.
Abstract:
Awareness of emerging situations in its dynamic operational environment is an essential capability of a robotic assistive device, resting on effective and efficient assessment of the prevailing situation. It allows such a cognitive system to interact with the environment in a sensible, (semi-)autonomous and proactive manner without the need for frequent interventions from a supervisor. In this paper, we report a novel, generic Situation Assessment Architecture for robotic systems that directly assist humans, as developed in the CORBYS project, and present its application in the proof-of-concept Demonstrators developed and validated within the project: a robotic human follower and a mobile gait rehabilitation robotic system. We give an overview of the structure and functionality of the Situation Assessment Architecture, together with results and observations collected from initial validation on the two CORBYS Demonstrators.
Video stimuli reduce object-directed imitation accuracy: a novel two-person motion-tracking approach
Abstract:
Imitation is an important form of social behavior, and research has aimed to discover and explain the neural and kinematic aspects of imitation. However, much of this research has featured single participants imitating in response to pre-recorded video stimuli, despite findings showing reduced neural activation to video versus real-life movement stimuli, particularly in the motor cortex. We investigated the degree to which video stimuli may affect the imitation process using a novel motion-tracking paradigm with high spatial and temporal resolution. We recorded 14 positions on the hands, arms, and heads of two individuals in an imitation experiment. One individual moved freely within given parameters (moving balls across a series of pegs) and a second participant imitated. This task was performed at either simple (one ball) or complex (three balls) movement difficulty, and either face-to-face or via a live video projection. After an exploratory analysis, three dependent variables were chosen for examination: 3D grip position, joint angles in the arm, and grip aperture. A cross-correlation and multivariate analysis revealed that object-directed imitation task accuracy (as represented by grip position) was reduced with video compared to face-to-face feedback, and at complex compared to simple difficulty. This was most prevalent in the left-right and forward-back motions, corresponding to the imitator sitting face-to-face with the actor or with a live projected video of the same actor. The results suggest that, for tasks that require object-directed imitation, video stimuli may not be an ecologically valid way to present task materials. However, no similar effects were found in the joint-angle and grip-aperture variables, suggesting that there are limits to the influence of video stimuli on imitation. The implications of these results are discussed with regard to previous findings, and with suggestions for future experimentation.
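The cross-correlation scoring mentioned above can be sketched for a single motion axis — a simplified stand-in for the paper's multivariate analysis. Normalise the two position traces, then find the peak of their cross-correlation and the lag (in samples) at which it occurs:

```python
import numpy as np

def max_crosscorr(actor, imitator):
    """Peak normalised cross-correlation between an actor's and an
    imitator's 1-D position traces, plus the lag (in samples) at which
    the peak occurs -- one way to score imitation accuracy and delay."""
    a = (actor - np.mean(actor)) / np.std(actor)      # z-score traces
    b = (imitator - np.mean(imitator)) / np.std(imitator)
    c = np.correlate(a, b, mode="full") / len(a)      # all lags
    lags = np.arange(-(len(b) - 1), len(a))
    i = int(np.argmax(c))
    return float(c[i]), int(lags[i])
```

A perfect imitation gives a peak of 1.0 at zero lag; degraded accuracy (e.g. under video feedback) lowers the peak, and response delay shifts it away from zero.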