978 results for Robot motion
Abstract:
This paper addresses trajectory tracking control of multi-link flexible manipulators, discussing dynamic modeling, control system architecture, and a robust adaptive control algorithm. An approximate dynamic model of the flexible manipulator is obtained using the assumed-modes method. By analyzing the manipulator's dynamic characteristics, an equivalent dynamic model is established, on the basis of which a robust adaptive control algorithm is proposed. Simulation results are presented.
Abstract:
Against the background of project 863-512, this paper studies motion planning technology for tracked mobile robots in terms of motion characteristics, motion description, motion control, and motion planning. It first analyzes, theoretically, the intrinsic motion transmission mechanism of tracked mobile robots and identifies the motion characteristics that distinguish them from wheeled platforms, particularly their steering behavior. Several important conclusions are drawn: the angular velocity of a tracked mobile robot is almost uncontrollable; turning in place cannot be executed accurately; and certain rules must be followed when planning the motion of tracked vehicles. For the longitudinal motion control problem, a velocity control model is discussed, and a simple, accurate, and reliable method of velocity measurement and control is proposed. For lateral motion, a composite control technique combining FM-LIKE and AM-LIKE methods is proposed, solving the difficult problem of direction control. Finally, experimental results are provided that confirm the above methods and conclusions, which, as part of a certain 863-512 task, have already passed acceptance testing.
Abstract:
An optimal learning control method is proposed for industrial robots. The method corrects the actuator motion using the acceleration error. A theoretical method is also proposed for estimating the convergence conditions of the learning control process, based on the limit condition of a geometric series. The effectiveness of the proposed learning control method is confirmed by computer simulations of a PUMA562 robot.
Abstract:
This paper proposes a composite control algorithm for the dynamic control of industrial robot manipulators. The algorithm uses a cerebellar model arithmetic computer (CMAC) module to model the manipulator's dynamic equations and compute the torques required to realize the desired motion, which serve as the feedforward torque control term; an adaptive controller provides feedback control to eliminate manipulator motion errors caused by input disturbances and parameter variations. The method is computationally efficient and well suited to fixed-point implementation. Its effectiveness is verified by computer simulation of the first two joints of a four-degree-of-freedom direct-drive robot.
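The feedforward-plus-feedback structure described in this abstract can be illustrated with a minimal single-joint sketch. This is a hypothetical simplification: the plant is a 1-DOF rigid link, the feedforward term is computed from an exact dynamics model (standing in for the learned CMAC output), and a plain PD term stands in for the adaptive feedback controller; the gains and inertia are made-up values.

```python
import numpy as np

# Hypothetical 1-DOF sketch: feedforward torque from a dynamics model
# (the paper learns this with a CMAC module) plus a feedback term that
# removes the residual tracking error.
m = 2.0                          # link inertia (assumed known to the model)
dt, T = 0.001, 2.0
t = np.arange(0.0, T, dt)
q_des = np.sin(t)                # desired joint trajectory
qd_des = np.cos(t)
qdd_des = -np.sin(t)

q, qd = 0.0, 0.0                 # initial velocity mismatch causes a transient
err = []
for i in range(len(t)):
    tau_ff = m * qdd_des[i]                                   # feedforward term
    tau_fb = 50.0 * (q_des[i] - q) + 10.0 * (qd_des[i] - qd)  # feedback term
    qdd = (tau_ff + tau_fb) / m                               # plant: m * qdd = tau
    qd += dt * qdd
    q += dt * qd
    err.append(abs(q_des[i] - q))

print(max(err[len(t) // 2 :]))   # tracking error is small after the transient
```

With an accurate feedforward model, the feedback term only has to cancel the initial-condition transient, which is the division of labor the abstract describes.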
Abstract:
An accelerometer is mounted on the hand of the manipulator, and the acceleration of each joint is obtained by an acceleration resolution algorithm. A learning control method is then proposed that corrects the actuator motion using the acceleration error. A theoretical method is also proposed for estimating the convergence conditions of the learning control process, based on the limit condition of a geometric series. The effectiveness of the proposed learning control theory is confirmed by computer simulations of a PUMA-562 robot.
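The trial-to-trial error-correction idea in this abstract is the classic iterative learning control (ILC) update. A minimal sketch, under made-up assumptions: the plant is a first-order scalar system rather than a PUMA arm, and the correction uses the position tracking error rather than the acceleration error the paper measures; the gain `gamma` is hypothetical.

```python
import numpy as np

def ilc_trial(u, dt=0.01, a=2.0, b=1.0):
    """Simulate one trial of a first-order plant x' = -a*x + b*u
    and return the state trajectory (a stand-in for measured motion)."""
    x = 0.0
    traj = np.zeros(len(u))
    for i in range(len(u)):
        x += dt * (-a * x + b * u[i])
        traj[i] = x
    return traj

# Learning loop: u_{k+1} = u_k + gamma * e_k, where e_k is the tracking
# error of trial k. (The paper corrects with the acceleration error.)
n = 200
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, n))  # desired trajectory
u = np.zeros(n)
gamma = 3.0
errors = []
for k in range(20):
    e = ref - ilc_trial(u)
    errors.append(np.max(np.abs(e)))
    u = u + gamma * e                            # learning update

print(errors[0], errors[-1])                     # error shrinks over trials
```

The geometric-series convergence condition mentioned in the abstract corresponds to requiring the trial-to-trial error contraction factor to have magnitude below one.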
Abstract:
This paper presents a motion control algorithm for walking robots. Based on the principle of relative kinematics, the method transforms the body motion planning problem into a foot-tip trajectory planning problem for the legs, which greatly simplifies the motion control of walking robots. The method is applied to the analysis and solution of an omnidirectional tripod gait algorithm and its stability.
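The relative-kinematics transformation the abstract describes can be sketched in one line: commanding the body to move with velocity v is equivalent, in the body frame, to moving each supporting foot tip at -v. The function name and the numeric values below are hypothetical illustrations, not the paper's notation.

```python
import numpy as np

def support_foot_in_body_frame(p0, v_body, t):
    """Foot-tip position in the body frame during the support phase:
    as the body moves at v_body, the planted foot drifts at -v_body."""
    return p0 - v_body * t

p0 = np.array([0.3, 0.2, -0.4])      # foot position at touchdown (body frame), metres
v_body = np.array([0.1, 0.0, 0.0])   # body walks forward at 0.1 m/s
print(support_foot_in_body_frame(p0, v_body, 1.0))  # foot drifts backward
```

Planning the body path then reduces to generating these per-leg foot-tip trajectories, which is the simplification the abstract claims.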
Abstract:
This paper proposes a new and effective adaptive control scheme for robots that overcomes the problems other methods suffer from inaccurate models or heavy computation. The Lagrange equations of motion are first transformed into an ARMA model, with fictitious noise compensating for model errors (those due to linearization, decoupling, measurement inaccuracy, disturbances, and so on). An improved adaptive Kalman filtering algorithm is then used for online parameter identification and state estimation, and the identified parameters are used to design the adaptive controller of the robot control system. Finally, simulation results for the algorithm are given and discussed.
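The online identification step in this abstract can be illustrated with recursive least squares, which is the constant-parameter special case of the Kalman filter the paper uses. A minimal sketch under made-up assumptions: a second-order ARMA-like plant with fabricated coefficients stands in for the linearized robot dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant (unknown to the identifier):
# y_t = a1*y_{t-1} + a2*y_{t-2} + b*u_{t-1} + noise
a1, a2, b = 1.5, -0.7, 0.5
n = 500
u = rng.standard_normal(n)
y = np.zeros(n)
for i in range(2, n):
    y[i] = a1 * y[i - 1] + a2 * y[i - 2] + b * u[i - 1] \
        + 0.01 * rng.standard_normal()

# Recursive least squares: online estimate of theta = [a1, a2, b].
theta = np.zeros(3)
P = 1000.0 * np.eye(3)                       # large initial covariance
for i in range(2, n):
    phi = np.array([y[i - 1], y[i - 2], u[i - 1]])  # regressor
    k = P @ phi / (1.0 + phi @ P @ phi)             # gain
    theta = theta + k * (y[i] - phi @ theta)        # parameter update
    P = P - np.outer(k, phi) @ P                    # covariance update

print(np.round(theta, 2))                    # approaches [1.5, -0.7, 0.5]
```

In the scheme the abstract describes, estimates like these would be fed, at each step, into the design of the adaptive controller.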
Abstract:
Most animals have significant behavioral expertise built in without having to explicitly learn it all from scratch. This expertise is a product of evolution of the organism; it can be viewed as a very long term form of learning which provides a structured system within which individuals might learn more specialized skills or abilities. This paper suggests one possible mechanism for analogous robot evolution by describing a carefully designed series of networks, each one being a strict augmentation of the previous one, which control a six legged walking machine capable of walking over rough terrain and following a person passively sensed in the infrared spectrum. As the completely decentralized networks are augmented, the robot's performance and behavior repertoire demonstrably improve. The rationale for such demonstrations is that they may provide a hint as to the requirements for automatically building massive networks to carry out complex sensory-motor tasks. The experiments with an actual robot ensure that an essence of reality is maintained and that no critical problems have been ignored.
Abstract:
We present psychophysical experiments that measure the accuracy of perceived 3D structure derived from relative image motion. The experiments are motivated by Ullman's incremental rigidity scheme, which builds up 3D structure incrementally over an extended time. Our main conclusions are: first, the human system derives an accurate model of the relative depths of moving points, even in the presence of noise; second, the accuracy of 3D structure improves with time, eventually reaching a plateau; and third, the 3D structure currently perceived depends on previous 3D models. Through computer simulations, we relate the psychophysical observations to the behavior of Ullman's model.
Abstract:
The 1989 AI Lab Winter Olympics will take a slightly different twist from previous Olympiads. Although there will still be a dozen or so athletic competitions, the annual talent show finale will now be a display not of human talent, but of robot talent. Spurred on by the question, "Why aren't there more robots running around the AI Lab?", Olympic Robot Building is an attempt to teach everyone how to build a robot and get them started. Robot kits will be given out the last week of classes before the Christmas break and teams have until the Robot Talent Show, January 27th, to build a machine that intelligently connects perception to action. There is no constraint on what can be built; participants are free to pick their own problems and solution implementations. As Olympic Robot Building is purposefully a talent show, there is no particular obstacle course to be traversed or specific feat to be demonstrated. The hope is that this format will promote creativity, freedom and imagination. This manual provides a guide to overcoming all the practical problems in building things. What follows are tutorials on the components supplied in the kits: a microprocessor circuit "brain", a variety of sensors and motors, a mechanical building block system, a complete software development environment, some example robots and a few tips on debugging and prototyping. Parts given out in the kits can be used, ignored or supplemented, as the kits are designed primarily to overcome the inertia of getting started. If all goes well, then come February, there should be all kinds of new members running around the AI Lab!
Abstract:
We address the computational role that the construction of a complete surface representation may play in the recovery of 3-D structure from motion. We present a model that combines a feature-based structure-from-motion algorithm with smooth surface interpolation. This model can represent multiple surfaces in a given viewing direction, incorporates surface constraints from object boundaries, and groups image features using their 2-D image motion. Computer simulations relate the model's behavior to perceptual observations. In a companion paper, we discuss further perceptual experiments regarding the role of surface reconstruction in the human recovery of 3-D structure from motion.
Abstract:
Earlier, we introduced a direct method called fixation for the recovery of shape and motion in the general case. The method uses neither feature correspondence nor optical flow. Instead, it directly employs the spatiotemporal gradients of image brightness. This work reports the experimental results of applying some of our fixation algorithms to a sequence of real images where the motion is a combination of translation and rotation. These results show that parameters such as the fixation patch size have crucial effects on the estimation of some motion parameters. Some of the critical issues involved in the implementation of our autonomous motion vision system are also discussed here. Among those are the criteria for automatic choice of an optimum size for the fixation patch, and an appropriate location for the fixation point, which result in good estimates for important motion parameters. Finally, a calibration method is described for identifying the real location of the rotation axis in imaging systems.
Abstract:
A typical robot vision scenario might involve a vehicle moving with an unknown 3D motion (translation and rotation) while taking intensity images of an arbitrary environment. This paper describes the theory and implementation issues of tracking any desired point in the environment. This method is performed completely in software without any need to mechanically move the camera relative to the vehicle. This tracking technique is simple and inexpensive. Furthermore, it does not use either optical flow or feature correspondence. Instead, the spatio-temporal gradients of the input intensity images are used directly. The experimental results presented support the idea of tracking in software. The final result is a sequence of tracked images where the desired point is kept stationary in the images independent of the nature of the relative motion. Finally, the quality of these tracked images is examined using spatio-temporal gradient maps.
Abstract:
A key question regarding primate visual motion perception is whether the motion of 2D patterns is recovered by tracking distinctive localizable features [Lorenceau and Gorea, 1989; Rubin and Hochstein, 1992] or by integrating ambiguous local motion estimates [Adelson and Movshon, 1982; Wilson and Kim, 1992]. For a two-grating plaid pattern, this translates to either tracking the grating intersections or to appropriately combining the motion estimates for each grating. Since both component and feature information are simultaneously available in any plaid pattern made of contrast defined gratings, it is unclear how to determine which of the two schemes is actually used to recover the plaid's motion. To address this problem, we have designed a plaid pattern made with subjective, rather than contrast defined, gratings. The distinguishing characteristic of such a plaid pattern is that it contains no contrast defined intersections that may be tracked. We find that notwithstanding the absence of such features, observers can accurately recover the pattern velocity. Additionally we show that the hypothesis of tracking "illusory features" to estimate pattern motion does not stand up to experimental test. These results present direct evidence in support of the idea that calls for the integration of component motions over the one that mandates tracking localized features to recover 2D pattern motion. The localized features, we suggest, are used primarily as providers of grouping information - which component motion signals to integrate and which not to.