921 results for MACHINES
Abstract:
Robotization is a key issue in raising the automation of construction-machinery operation control. For construction-machinery robots, especially those with spatial redundancy, automating the trajectory planning of the end-effector is important for raising their degree of automation. Taking the concrete pump truck as an example, this paper proposes an automatic trajectory-planning algorithm for the pouring process of the truck's placing boom. The algorithm discretizes the pouring region into a set of pouring points, plans the trajectory between each pair of pouring points with the minimum joint-norm method from redundant-robot kinematics, and smooths or continuity-corrects the joint velocities and accelerations at their discontinuities, thereby achieving automatic trajectory planning for the placing boom's pouring process.
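The "minimum joint-norm method" named here resolves the boom's redundancy by picking, at each instant, the smallest joint-velocity vector that realizes the desired end-effector velocity, i.e. the Jacobian pseudoinverse solution. A minimal numeric sketch (the 2x4 Jacobian below is a stand-in for a planar 4R arm, not the pump truck's actual boom model):

```python
import numpy as np

def min_norm_joint_velocity(J, x_dot):
    """Minimum joint-norm resolution of redundancy: among all q_dot
    satisfying J @ q_dot = x_dot, return the one of smallest Euclidean
    norm, i.e. the Moore-Penrose pseudoinverse solution q_dot = J+ x_dot."""
    return np.linalg.pinv(J) @ x_dot

# Stand-in example: a 2x4 Jacobian of a planar 4R arm (redundant:
# 4 joints, 2 task coordinates), not the real placing-boom model.
J = np.array([[1.0, 0.8, 0.5, 0.2],
              [0.0, 0.6, 0.9, 1.0]])
x_dot = np.array([0.1, -0.05])        # desired end-effector velocity
q_dot = min_norm_joint_velocity(J, x_dot)
print(q_dot, J @ q_dot)               # J @ q_dot reproduces x_dot
```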
Abstract:
Building on a 5-axis parallel machine tool and targeting parallel machine tools of different configurations, an open CNC system for parallel machine tools was developed on the Windows platform, using a dual-CPU "PMAC + IPC" hardware platform and Visual C++ 6.0 as the software platform. This paper presents the hardware structure and software composition of the CNC system and discusses the key technologies in developing the CNC software.
Abstract:
The working mechanisms of many construction machines fall into the category of redundant robots, and the trajectory-planning problems involved in automating the control of such machinery have long attracted attention. Using the minimum joint-torque method from redundant-robot kinematics, the planning of joint trajectories for a spatial 4R redundant robot is analyzed and simulated. Experiments show the method to be practical, with good kinematic and dynamic behavior.
Abstract:
Robotization is a key issue in raising the automation of construction-machinery operation control. By discretizing the pouring region of the pump truck's placing boom into a set of pouring points, planning the trajectory between each pair of pouring points with the minimum joint-norm method from redundant-robot kinematics, and automatically connecting and coordinating adjacent pouring segments, automatic trajectory planning for the pouring process of a concrete pump truck's placing boom is achieved.
Abstract:
Partitioning manufacturing cells is a key step in implementing cellular production. This paper proposes a cell-formation algorithm based on part process routings that also takes machine load into account. First, a similarity measure between entities of the same kind is defined from the parts' machining features and the machine loads. Then, seed elements for the part families and manufacturing cells are chosen by heuristic rules, and the parts and machines in the system are clustered, with the dispersion within each cluster serving as the evaluation criterion.
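The abstract does not state the exact similarity formula, so the following is only an illustrative stand-in: a Jaccard-style coefficient between two machines, weighted by their per-part processing load.

```python
# Illustrative similarity between two machines based on the parts they
# process, weighted by processing load; the paper's exact definition is
# not given in the abstract, so this Jaccard-style form is a stand-in.
def machine_similarity(load_i, load_j):
    """load_i, load_j: dicts mapping part id -> processing time on
    machines i and j. Returns a value in [0, 1]."""
    parts = set(load_i) | set(load_j)
    common = sum(min(load_i.get(p, 0.0), load_j.get(p, 0.0)) for p in parts)
    total = sum(max(load_i.get(p, 0.0), load_j.get(p, 0.0)) for p in parts)
    return common / total if total else 0.0

m1 = {"P1": 2.0, "P2": 1.5}           # machine 1 workload by part
m2 = {"P1": 1.0, "P3": 0.5}           # machine 2 workload by part
print(machine_similarity(m1, m2))     # 1.0 / 4.0 = 0.25
```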
Abstract:
Normally, on a flexible manufacturing facility with a pallet transport system (such as a flexible manufacturing cell), only one part is fixtured on a pallet at a time for machining. The "virtual workpiece" method mounts several identical or different parts on one pallet, treats them as a single part when loading the facility, and machines them with a new NC program obtained by suitably processing the NC programs of the individual parts. The method can raise productivity, increase flexibility, and help balance production. Some constraints on applying the "virtual workpiece" method and several key technical issues in implementing it are analyzed.
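Mechanically, the method amounts to shifting each part's NC program into that part's fixture position on the shared pallet and concatenating the results. A minimal sketch using standard work-coordinate offsets (G54, G55, ...), assuming each part program is written relative to its own part zero and at most six parts per pallet:

```python
# Minimal sketch of the "virtual workpiece" idea: combine the NC
# programs of several parts fixtured on one pallet into one program by
# switching the work-coordinate offset (G54, G55, ...) before each
# part's code. Assumes the offsets are already set at the machine and
# that each part program omits its own end-of-program code.
WORK_OFFSETS = ["G54", "G55", "G56", "G57", "G58", "G59"]

def merge_nc_programs(part_programs):
    """part_programs: list of NC program bodies (strings), one per part
    on the pallet, at most six. Returns one combined program."""
    blocks = []
    for offset, body in zip(WORK_OFFSETS, part_programs):
        blocks.append(f"{offset} (select this part's fixture offset)")
        blocks.append(body.strip())
    blocks.append("M30 (end of combined program)")
    return "\n".join(blocks)

p1 = "G0 X0 Y0\nG1 Z-2 F100\nG0 Z5"
p2 = "G0 X0 Y0\nG1 Z-4 F80\nG0 Z5"
print(merge_nc_programs([p1, p2]))
```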
Abstract:
Building on approximate models of machine failure and system blocking, this paper presents a queueing-network model of the system. With this model the problem can be solved analytically to evaluate system performance, without complex computation. Simulation results show that the accuracy of the solution is satisfactory.
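To illustrate the kind of closed-form analysis such a model permits (this is a textbook M/M/1 station with failures, not necessarily the paper's model):

```python
# Textbook illustration (not necessarily the paper's model): an M/M/1
# station whose service rate is degraded by failures. With failure
# rate f and repair rate r, the machine is up a fraction r/(r+f) of
# the time, giving an effective service rate mu_eff; standard M/M/1
# formulas then yield performance measures in closed form.
def mm1_with_failures(lam, mu, f, r):
    availability = r / (r + f)
    mu_eff = mu * availability
    rho = lam / mu_eff                 # utilization; must be < 1
    if rho >= 1:
        raise ValueError("unstable: arrivals exceed effective capacity")
    L = rho / (1 - rho)                # mean number in system
    W = L / lam                        # mean time in system (Little's law)
    return {"availability": availability, "rho": rho, "L": L, "W": W}

print(mm1_with_failures(lam=0.5, mu=1.0, f=0.1, r=0.9))
```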
Abstract:
This paper is a preliminary study of the basic concepts of supervisory control for teleoperated robots and arrives at the following view: supervisory control is an appropriate model for the evolution of robots toward intelligence, and a supervisory control system is one composed of high-level human intelligence and low-level robot intelligence. Supervision is the process by which these two kinds of intelligence interact, and human intelligence should be able to enter at the different levels of the machine's intelligence. We built an experimental supervisory system for a teleoperated robot manipulator and illustrate it with examples and experiments.
Abstract:
A design for a vehicle-type classifier based on support vector machine theory is proposed. The required sample data are obtained by acquiring, processing, and analyzing images of real vehicles. Three support vector machine recognizers are trained by supervised learning, and the trained recognizers are performance-tested on test samples. Finally, the three recognizers are combined with a voter to form the vehicle-type classifier.
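A minimal sketch of the structure described, assuming three one-vs-rest binary SVMs combined by a confidence-based voter; image acquisition and feature extraction are elided, and scikit-learn stands in for whatever implementation the paper used:

```python
# Minimal sketch of the described classifier: three binary SVM
# recognizers (one per vehicle class, one-vs-rest) combined by a voter.
import numpy as np
from sklearn.svm import SVC

class VotingVehicleClassifier:
    def __init__(self, n_classes=3):
        self.recognizers = [SVC(kernel="rbf") for _ in range(n_classes)]

    def fit(self, X, y):
        # Supervised ("with teacher") training: one binary SVM per class.
        for c, svm in enumerate(self.recognizers):
            svm.fit(X, (y == c).astype(int))
        return self

    def predict(self, X):
        # Voter: pick the class whose recognizer is most confident.
        scores = np.stack([svm.decision_function(X)
                           for svm in self.recognizers], axis=1)
        return scores.argmax(axis=1)

# Toy usage with random stand-in features (real input would be
# features extracted from vehicle images).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8)) + np.repeat(np.arange(3), 20)[:, None]
y = np.repeat(np.arange(3), 20)
clf = VotingVehicleClassifier().fit(X, y)
print((clf.predict(X) == y).mean())   # training accuracy on the toy data
```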
Abstract:
The amount of computation required to solve many early vision problems is prodigious, and so it has long been thought that systems that operate in a reasonable amount of time will only become feasible when parallel systems become available. Such systems now exist in digital form, but most are large and expensive. These machines constitute an invaluable test-bed for the development of new algorithms, but they can probably not be scaled down rapidly in both physical size and cost, despite continued advances in semiconductor technology and machine architecture. Simple analog networks can perform interesting computations, as has been known for a long time. We have reached the point where it is feasible to experiment with implementation of these ideas in VLSI form, particularly if we focus on networks composed of locally interconnected passive elements, linear amplifiers, and simple nonlinear components. While there have been excursions into the development of ideas in this area since the very beginnings of work on machine vision, much work remains to be done. Progress will depend on careful attention to matching of the capabilities of simple networks to the needs of early vision. Note that this is not at all intended to be anything like a review of the field, but merely a collection of some ideas that seem to be interesting.
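The computation such a locally interconnected passive network performs can be previewed digitally: a resistive grid that smooths noisy data settles to the minimum of a membrane-like energy, which simple local relaxation reproduces. A 1D sketch (the conductance value lam is an arbitrary choice):

```python
import numpy as np

# Digital preview of what a locally connected analog resistive network
# computes: each node settles to a weighted average of its data term
# and its neighbors, minimizing a membrane-like smoothing energy.
def relax_smoothing(d, lam=2.0, iters=500):
    """Minimize sum_i (u_i - d_i)^2 + lam * sum_i (u_{i+1} - u_i)^2
    by Jacobi-style local relaxation; lam plays the role of the
    lateral conductance in the analog network."""
    u = d.copy()
    for _ in range(iters):
        left, right = np.roll(u, 1), np.roll(u, -1)
        left[0], right[-1] = u[0], u[-1]   # Neumann (free) boundaries
        u = (d + lam * (left + right)) / (1 + 2 * lam)
    return u

d = (np.sin(np.linspace(0, np.pi, 50))
     + 0.2 * np.random.default_rng(1).normal(size=50))
print(np.round(relax_smoothing(d)[:5], 3))
```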
Abstract:
This thesis examines the problem of an autonomous agent learning a causal world model of its environment. Previous approaches to learning causal world models have concentrated on environments that are too "easy" (deterministic finite state machines) or too "hard" (containing much hidden state). We describe a new domain --- environments with manifest causal structure --- for learning. In such environments the agent has an abundance of perceptions of its environment. Specifically, it perceives almost all the relevant information it needs to understand the environment. Many environments of interest have manifest causal structure and we show that an agent can learn the manifest aspects of these environments quickly using straightforward learning techniques. We present a new algorithm to learn a rule-based causal world model from observations in the environment. The learning algorithm includes (1) a low level rule-learning algorithm that converges on a good set of specific rules, (2) a concept learning algorithm that learns concepts by finding completely correlated perceptions, and (3) an algorithm that learns general rules. In addition this thesis examines the problem of finding a good expert from a sequence of experts. Each expert has an "error rate"; we wish to find an expert with a low error rate. However, each expert's error rate and the distribution of error rates are unknown. A new expert-finding algorithm is presented and an upper bound on the expected error rate of the expert is derived.
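The abstract does not give the expert-finding algorithm or its bound, so the following is only a generic sketch of the problem setup: examine experts in sequence, estimate each one's error rate from m trials, and keep the first whose empirical error falls below a threshold (all parameter values here are arbitrary).

```python
import itertools
import random

def make_expert(error_rate):
    """An 'expert' here is just a callable returning 1 on error."""
    return lambda: int(random.random() < error_rate)

def find_good_expert(experts, threshold=0.1, m=200):
    """Keep the first expert whose empirical error over m trials is at
    most threshold. May search for a long time if good experts are rare."""
    for expert in experts:
        if sum(expert() for _ in range(m)) / m <= threshold:
            return expert
    return None

# Infinite stream of experts with unknown error rates in [0, 0.5].
experts = (make_expert(random.uniform(0.0, 0.5)) for _ in itertools.count())
good = find_good_expert(experts)
```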
Abstract:
Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or even billions of nodes. Associated with such large systems is a new set of design challenges. Many problems must be addressed by an architecture in order for it to be successful; of these, we focus on three in particular. First, a scalable memory system is required. Second, the network messaging protocol must be fault-tolerant. Third, the overheads of thread creation, thread management and synchronization must be extremely low. This thesis presents the complete system design for Hamal, a shared-memory architecture which addresses these concerns and is directly scalable to one million nodes. Virtual memory and distributed objects are implemented in a manner that requires neither inter-node synchronization nor the storage of globally coherent translations at each node. We develop a lightweight fault-tolerant messaging protocol that guarantees message delivery and idempotence across a discarding network. A number of hardware mechanisms provide efficient support for massive multithreading and fine-grained synchronization. Experiments are conducted in simulation, using a trace-driven network simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters for the messaging protocol which optimize performance. A discarding network is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network. Our simulations of Hamal demonstrate the effectiveness of its thread management and synchronization primitives. In particular, we find register-based synchronization to be an extremely efficient mechanism which can be used to implement a software barrier with a latency of only 523 cycles on a 512 node machine.
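The two guarantees named for the messaging protocol (delivery and idempotence across a discarding network) are classically obtained with retransmission plus per-sender sequence numbers for duplicate suppression. The sketch below shows that generic mechanism, not Hamal's actual protocol:

```python
import random

class Receiver:
    def __init__(self):
        self.seen = set()                       # sequence numbers already applied

    def on_message(self, seq, payload):
        if seq not in self.seen:                # idempotence: apply at most once
            self.seen.add(seq)
        return True                             # always acknowledge, even duplicates

class Sender:
    def __init__(self):
        self.seq, self.unacked = 0, {}

    def send(self, payload):
        self.seq += 1
        self.unacked[self.seq] = payload        # retained until acknowledged

    def pump(self, receiver, drop_prob=0.5):
        """One retransmission round across a discarding network."""
        for seq, payload in list(self.unacked.items()):
            if random.random() < drop_prob:
                continue                        # network discarded the message
            receiver.on_message(seq, payload)
            if random.random() >= drop_prob:    # the ack may also be discarded
                del self.unacked[seq]

sender, receiver = Sender(), Receiver()
for p in ["a", "b", "c"]:
    sender.send(p)
while sender.unacked:                           # resend until everything is acked
    sender.pump(receiver)
print(sorted(receiver.seen))                    # each message applied exactly once
```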
Abstract:
In this thesis we study the general problem of reconstructing a function, defined on a finite lattice, from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over the existing schemes, both regarding the quality of the results (particularly for low signal to noise ratios) and the computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each one of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, which has no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms that we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
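For flavor, a deterministic scheme of the general kind developed in (3): iterated conditional modes (ICM) restoring a binary piecewise-constant image under an Ising-style Gibbs prior. This is a standard textbook approximation, not necessarily the thesis's estimator or error criterion:

```python
import numpy as np

# Standard ICM restoration of a binary, piecewise-constant image under
# an Ising-style Gibbs prior: each pixel greedily takes the label that
# minimizes its local (data + smoothness) energy, sweep after sweep.
def icm_restore(noisy, beta=1.5, sigma=1.0, sweeps=10):
    u = (noisy > 0.5).astype(float)             # initial labeling in {0, 1}
    rows, cols = u.shape
    for _ in range(sweeps):
        for i in range(rows):
            for j in range(cols):
                best, best_e = u[i, j], np.inf
                for label in (0.0, 1.0):
                    data = (noisy[i, j] - label) ** 2 / (2 * sigma ** 2)
                    nbrs = [u[x, y] for x, y in
                            ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                            if 0 <= x < rows and 0 <= y < cols]
                    prior = beta * sum(label != n for n in nbrs)
                    if data + prior < best_e:
                        best, best_e = label, data + prior
                u[i, j] = best
    return u

rng = np.random.default_rng(0)
truth = np.zeros((16, 16)); truth[4:12, 4:12] = 1.0
noisy = truth + 0.5 * rng.normal(size=truth.shape)
print((icm_restore(noisy) == truth).mean())     # fraction correctly restored
```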
Abstract:
The research here described centers on how a machine can recognize concepts and learn concepts to be recognized. Explanations are found in computer programs that build and manipulate abstract descriptions of scenes such as those children construct from toy blocks. One program uses sample scenes to create models of simple configurations like the three-brick arch. Another uses the resulting models in making identifications. Throughout, emphasis is given to the importance of using good descriptions when exploring how machines can come to perceive and understand the visual environment.
Abstract:
A cellular automaton is an iterative array of very simple identical information processing machines called cells. Each cell can communicate with neighboring cells. At discrete moments of time the cells can change from one state to another as a function of the states of the cell and its neighbors. Thus on a global basis, the collection of cells is characterized by some type of behavior. The goal of this investigation was to determine just how simple the individual cells could be while the global behavior achieved some specified criterion of complexity, usually the ability to perform a computation or to reproduce some pattern. The chief result described in this thesis is that an array of identical square cells (in two dimensions), each cell of which communicates directly with only its four nearest edge neighbors and each of which can exist in only two states, can perform any computation. This computation proceeds in a straightforward way. A configuration is a specification of the states of all the cells in some area of the iterative array. Another result described in this thesis is the existence of a self-reproducing configuration in an array of four-state cells, a reduction of four states from the previously known eight-state case. The technique of information processing in cellular arrays involves the synthesis of some basic components. Then the desired behaviors are obtained by the interconnection of these components. A chapter on components describes some sets of basic components. Possible applications of the results of this investigation, descriptions of some interesting phenomena (for vanishingly small cells), and suggestions for further study are given later.
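For concreteness, one synchronous update of the kind of array described (two-state cells, von Neumann neighborhood of four edge neighbors, toroidal boundary for simplicity); the transition rule in the example is arbitrary, since the thesis's computation-universal rule is not given in the abstract:

```python
import numpy as np

def ca_step(grid, rule):
    """One synchronous update of a 2-state cellular array on a torus;
    rule(c, n, e, s, w) -> new state of a cell with value c and
    von Neumann neighbors n, e, s, w."""
    n = np.roll(grid,  1, axis=0)   # value of the cell above
    s = np.roll(grid, -1, axis=0)   # value of the cell below
    w = np.roll(grid,  1, axis=1)   # value of the cell to the left
    e = np.roll(grid, -1, axis=1)   # value of the cell to the right
    out = np.empty_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            out[i, j] = rule(grid[i, j], n[i, j], e[i, j], s[i, j], w[i, j])
    return out

# Arbitrary example rule: a cell turns (or stays) on iff it is already
# on or exactly one of its four neighbors is on.
rule = lambda c, n, e, s, w: int(c or (n + e + s + w) == 1)
grid = np.zeros((8, 8), dtype=int); grid[4, 4] = 1
print(ca_step(grid, rule))
```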