951 results for VENDING MACHINES


Relevance:

10.00%

Publisher:

Abstract:

Building on approximate models of machine failures and system blocking, this paper presents a queueing network model of the system. With this model the problem can be solved analytically to evaluate system performance, without resorting to complex computation. Simulation results show that the accuracy of the solution is satisfactory.
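
The abstract does not give the model's details, so the following is only a minimal sketch of the general idea it describes: fold machine failures into an availability-scaled effective service rate so that a station's performance follows from a closed-form queueing formula rather than heavy computation. The single-station M/M/1 form and all rates below are illustrative assumptions, not the paper's model.

```python
# Illustrative only: one failure-prone station treated as an M/M/1 queue whose
# breakdowns are folded into an effective (availability-scaled) service rate.
lam = 0.6                    # arrival rate (hypothetical)
mu = 1.0                     # service rate while the machine is up (hypothetical)
fail, repair = 0.05, 0.5     # failure and repair rates (hypothetical)

availability = repair / (repair + fail)   # long-run fraction of time the machine is up
rho = lam / (mu * availability)           # effective utilization
mean_in_system = rho / (1.0 - rho)        # M/M/1 mean number in system
mean_time = mean_in_system / lam          # Little's law: W = L / lambda

print(f"utilization={rho:.3f}  L={mean_in_system:.3f}  W={mean_time:.3f}")
```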

Relevance:

10.00%

Publisher:

Abstract:

This paper gives a preliminary discussion of the basic concepts of supervisory control for teleoperated robots and arrives at the following view: supervisory control is an appropriate mode for robots developing toward greater intelligence, and a supervisory control system is one composed of the human's high-level intelligence and the robot's low-level intelligence. Supervision is the process by which these two kinds of intelligence interact within the system, and human intelligence should be able to intervene at the different levels of machine intelligence. We built an experimental system for supervisory control of a teleoperated robot manipulator and illustrate it with examples and experiments.

Relevance:

10.00%

Publisher:

Abstract:

A design scheme for a vehicle-type classifier based on support vector machine theory is proposed. The required sample data are obtained by capturing, processing, and analyzing images of real vehicles. Three support vector machine recognizers are trained with a supervised training method, and the trained recognizers are evaluated on test samples. Finally, the three recognizers are combined with a voting unit to form the vehicle-type classifier.
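
The abstract fixes the architecture (three SVM recognizers combined by a voter) but not the image features or vehicle classes, so the sketch below stands in random feature vectors and three hypothetical classes; it shows the combination scheme, not the authors' actual system.

```python
# One binary SVM recognizer per hypothetical vehicle class, combined by a simple voter.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))        # stand-in for image-derived feature vectors
y = rng.integers(0, 3, size=300)      # three hypothetical vehicle types

# Supervised training of the three recognizers (one-vs-rest).
recognizers = [SVC(kernel="rbf", probability=True).fit(X, (y == c).astype(int))
               for c in range(3)]

def classify(x):
    # The "voter": choose the class whose recognizer is most confident.
    scores = [clf.predict_proba(x.reshape(1, -1))[0, 1] for clf in recognizers]
    return int(np.argmax(scores))

X_test = rng.normal(size=(5, 16))
print([classify(x) for x in X_test])
```

In practice the recognizers would be trained and tested on the vehicle images described in the abstract rather than on random vectors.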

Relevance:

10.00%

Publisher:

Abstract:

The amount of computation required to solve many early vision problems is prodigious, and so it has long been thought that systems that operate in a reasonable amount of time will only become feasible when parallel systems become available. Such systems now exist in digital form, but most are large and expensive. These machines constitute an invaluable test-bed for the development of new algorithms, but they probably cannot be scaled down rapidly in both physical size and cost, despite continued advances in semiconductor technology and machine architecture. Simple analog networks can perform interesting computations, as has been known for a long time. We have reached the point where it is feasible to experiment with implementation of these ideas in VLSI form, particularly if we focus on networks composed of locally interconnected passive elements, linear amplifiers, and simple nonlinear components. While there have been excursions into the development of ideas in this area since the very beginnings of work on machine vision, much work remains to be done. Progress will depend on careful attention to matching the capabilities of simple networks to the needs of early vision. Note that this is not at all intended to be anything like a review of the field, but merely a collection of some ideas that seem to be interesting.
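
As one concrete illustration (not taken from the paper) of what a network of locally interconnected passive elements computes, the sketch below relaxes a one-dimensional membrane-style smoothing energy by purely local updates; a resistive grid with the analogous conductances settles into the same fixed point, but in parallel and in analog. The signal, noise level, and smoothing weight are arbitrary.

```python
# Local (Jacobi) relaxation of E(u) = sum_i (u_i - d_i)^2 + lam * sum_i (u_{i+1} - u_i)^2,
# the kind of quadratic smoothing a locally connected resistive network performs.
import numpy as np

rng = np.random.default_rng(1)
d = np.sin(np.linspace(0, 3, 200)) + 0.2 * rng.normal(size=200)   # noisy input
u = d.copy()
lam = 5.0

for _ in range(500):
    # Each node moves to the minimizer of its local energy given its neighbours:
    # u_i = (d_i + lam * sum of neighbours) / (1 + lam * number of neighbours).
    neighbours = np.empty_like(u)
    neighbours[1:-1] = u[:-2] + u[2:]
    neighbours[0], neighbours[-1] = u[1], u[-2]
    counts = np.full_like(u, 2.0)
    counts[0] = counts[-1] = 1.0
    u = (d + lam * neighbours) / (1.0 + lam * counts)

print(f"mean local roughness after smoothing: {np.abs(np.diff(u)).mean():.4f}")
```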

Relevance:

10.00%

Publisher:

Abstract:

This thesis examines the problem of an autonomous agent learning a causal world model of its environment. Previous approaches to learning causal world models have concentrated on environments that are too "easy" (deterministic finite state machines) or too "hard" (containing much hidden state). We describe a new domain --- environments with manifest causal structure --- for learning. In such environments the agent has an abundance of perceptions of its environment. Specifically, it perceives almost all the relevant information it needs to understand the environment. Many environments of interest have manifest causal structure and we show that an agent can learn the manifest aspects of these environments quickly using straightforward learning techniques. We present a new algorithm to learn a rule-based causal world model from observations in the environment. The learning algorithm includes (1) a low level rule-learning algorithm that converges on a good set of specific rules, (2) a concept learning algorithm that learns concepts by finding completely correlated perceptions, and (3) an algorithm that learns general rules. In addition this thesis examines the problem of finding a good expert from a sequence of experts. Each expert has an "error rate"; we wish to find an expert with a low error rate. However, each expert's error rate and the distribution of error rates are unknown. A new expert-finding algorithm is presented and an upper bound on the expected error rate of the expert is derived.
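
The thesis's expert-finding algorithm and its error bound are not reproduced in the abstract, so the following is only a minimal sketch of the setting it describes: experts are examined in sequence, each with an unknown error rate drawn from an unknown distribution, and a candidate is accepted once its empirical error on a batch of trials is low enough. The trial count and acceptance threshold are arbitrary choices.

```python
# Sketch of sequential expert finding: sample each expert in turn and accept the
# first whose empirical error rate looks low. Not the thesis's actual algorithm.
import random

def find_expert(experts, n_trials=200, threshold=0.1):
    """experts: iterable of true error rates (hidden from the algorithm)."""
    for true_error in experts:
        mistakes = sum(random.random() < true_error for _ in range(n_trials))
        if mistakes / n_trials <= threshold:
            return true_error            # accept this expert
    return None                          # ran out of candidates

random.seed(0)
candidate_errors = [random.random() for _ in range(50)]   # unknown distribution
print("accepted expert with true error rate:", find_expert(candidate_errors))
```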

Relevance:

10.00%

Publisher:

Abstract:

Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or even billions of nodes. Associated with such large systems is a new set of design challenges. Many problems must be addressed by an architecture in order for it to be successful; of these, we focus on three in particular. First, a scalable memory system is required. Second, the network messaging protocol must be fault-tolerant. Third, the overheads of thread creation, thread management and synchronization must be extremely low. This thesis presents the complete system design for Hamal, a shared-memory architecture which addresses these concerns and is directly scalable to one million nodes. Virtual memory and distributed objects are implemented in a manner that requires neither inter-node synchronization nor the storage of globally coherent translations at each node. We develop a lightweight fault-tolerant messaging protocol that guarantees message delivery and idempotence across a discarding network. A number of hardware mechanisms provide efficient support for massive multithreading and fine-grained synchronization. Experiments are conducted in simulation, using a trace-driven network simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters for the messaging protocol which optimize performance. A discarding network is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network. Our simulations of Hamal demonstrate the effectiveness of its thread management and synchronization primitives. In particular, we find register-based synchronization to be an extremely efficient mechanism which can be used to implement a software barrier with a latency of only 523 cycles on a 512 node machine.
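
The following is a small software analogue, not the Hamal hardware design: it shows the two ingredients the abstract names for messaging over a discarding network, retransmission until acknowledgement (guaranteed delivery) and per-message sequence numbers that let the receiver drop duplicates (idempotence). Drop probabilities and message contents are made up.

```python
# Guaranteed, idempotent delivery over a lossy channel via retransmission + dedup.
import random

class Receiver:
    def __init__(self):
        self.seen = set()        # sequence numbers already processed
        self.log = []            # payloads actually applied, in order

    def deliver(self, seq, payload):
        if seq not in self.seen:         # duplicates from retransmission are ignored
            self.seen.add(seq)
            self.log.append(payload)
        return "ack"                     # acknowledge even duplicates

def send(receiver, seq, payload, drop_prob=0.4):
    while True:                                  # retransmit until acknowledged
        if random.random() > drop_prob:          # the message survived the network
            receiver.deliver(seq, payload)
            if random.random() > drop_prob:      # the ack survived the network
                return

random.seed(0)
rx = Receiver()
for i, msg in enumerate(["a", "b", "c"]):
    send(rx, i, msg)
print(rx.log)    # each payload applied exactly once despite drops and retries
```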

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we study the general problem of reconstructing a function defined on a finite lattice from a set of incomplete, noisy, and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over the existing schemes, both regarding the quality of the results (particularly for low signal-to-noise ratios) and the computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each one of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, which has no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms that we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
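
The thesis's estimators and algorithms are not spelled out in the abstract, so the sketch below only illustrates the generic ingredients it mentions: a Gibbsian smoothness prior on a lattice, noisy observations, and a Monte Carlo (Gibbs sampling) approximation of a per-site optimal Bayesian estimate. The one-dimensional binary signal, noise level, and weights are invented for the example.

```python
# Gibbs sampling of a binary 1-D Markov random field posterior, taking the
# per-site marginal mode as the (approximate) Bayesian estimate.
import numpy as np

rng = np.random.default_rng(2)
truth = (np.sin(np.linspace(0, 6, 100)) > 0).astype(int)
obs = np.where(rng.random(100) < 0.2, 1 - truth, truth)     # 20% label noise

beta, gamma = 1.5, 1.5          # prior smoothness and data-fidelity weights
x = obs.copy()
votes = np.zeros(100)

for sweep in range(300):
    for i in range(100):
        nb = (x[i - 1] if i > 0 else 0) + (x[i + 1] if i < 99 else 0)  # neighbours equal to 1
        n_nb = (i > 0) + (i < 99)
        e1 = beta * (n_nb - nb) + gamma * (obs[i] != 1)     # local energy if x_i = 1
        e0 = beta * nb + gamma * (obs[i] != 0)              # local energy if x_i = 0
        p1 = np.exp(-e1) / (np.exp(-e1) + np.exp(-e0))
        x[i] = int(rng.random() < p1)
    if sweep >= 100:              # after burn-in, accumulate marginal statistics
        votes += x

estimate = (votes / 200 > 0.5).astype(int)    # marginal posterior mode per site
print("fraction of sites recovered incorrectly:", float(np.mean(estimate != truth)))
```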

Relevance:

10.00%

Publisher:

Abstract:

The research here described centers on how a machine can recognize concepts and learn concepts to be recognized. Explanations are found in computer programs that build and manipulate abstract descriptions of scenes such as those children construct from toy blocks. One program uses sample scenes to create models of simple configurations like the three-brick arch. Another uses the resulting models in making identifications. Throughout, emphasis is given to the importance of using good descriptions when exploring how machines can come to perceive and understand the visual environment.

Relevance:

10.00%

Publisher:

Abstract:

A cellular automaton is an iterative array of very simple identical information processing machines called cells. Each cell can communicate with neighboring cells. At discrete moments of time the cells can change from one state to another as a function of the states of the cell and its neighbors. Thus, on a global basis, the collection of cells is characterized by some type of behavior. The goal of this investigation was to determine just how simple the individual cells could be while the global behavior achieved some specified criterion of complexity, usually the ability to perform a computation or to reproduce some pattern. The chief result described in this thesis is that an array of identical square cells (in two dimensions), each cell of which communicates directly with only its four nearest edge neighbors and each of which can exist in only two states, can perform any computation. This computation proceeds in a straightforward way. A configuration is a specification of the states of all the cells in some area of the iterative array. Another result described in this thesis is the existence of a self-reproducing configuration in an array of four-state cells, a reduction of four states from the previously known eight-state case. The technique of information processing in cellular arrays involves the synthesis of some basic components. Then the desired behaviors are obtained by the interconnection of these components. A chapter on components describes some sets of basic components. Possible applications of the results of this investigation, descriptions of some interesting phenomena (for vanishingly small cells), and suggestions for further study are given later.
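
To make the machinery concrete, here is a minimal sketch of a two-state array in which each square cell is updated from its own state and its four edge neighbours. The parity rule used below is only an illustration of the update scheme; it is not the universal two-state rule constructed in the thesis.

```python
# A two-state cellular automaton with the four-neighbour (von Neumann) update.
import numpy as np

def step(grid):
    # Toroidal (wrap-around) boundaries, for brevity.
    nbrs = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
            np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    return (grid + nbrs) % 2        # next state = parity of cell plus neighbours

grid = np.zeros((9, 9), dtype=int)
grid[4, 4] = 1                      # an initial one-cell configuration
for _ in range(4):
    grid = step(grid)
print(grid)
```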

Relevance:

10.00%

Publisher:

Abstract:

This report investigates some techniques appropriate to representing the knowledge necessary for understanding a class of electronic machines -- radio receivers. A computational performance model -- WATSON -- is presented. WATSON's task is to isolate failures in radio receivers whose principles of operation have been appropriately described in his knowledge base. The thesis of the report is that hierarchically organized representational structures are essential to the understanding of complex mechanisms. Such structures lead not only to descriptions of machine operation at many levels of detail, but also offer a powerful means of organizing "specialist" knowledge for the repair of machines when they are broken.
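
WATSON's knowledge base and reasoning are not given in the abstract, so the following is only a toy sketch of the hierarchical idea it argues for: the receiver is described as nested modules, and diagnosis descends only into the stage whose behaviour is wrong. The module names and the fault test are invented stand-ins for real measurements.

```python
# Hierarchical fault isolation over a nested description of a radio receiver.
def diagnose(module, is_faulty):
    """Return the path from this module down to a faulty leaf, or None if it is fine."""
    if not is_faulty(module["name"]):
        return None
    for sub in module.get("parts", []):
        path = diagnose(sub, is_faulty)
        if path:
            return [module["name"]] + path
    return [module["name"]]          # faulty, but no faulty sub-part identified

radio = {"name": "receiver", "parts": [
    {"name": "RF stage", "parts": [{"name": "antenna"}, {"name": "RF amp"}]},
    {"name": "IF stage", "parts": [{"name": "mixer"}, {"name": "IF amp"}]},
    {"name": "audio stage", "parts": [{"name": "detector"}, {"name": "speaker"}]},
]}

observed_faults = {"receiver", "IF stage", "mixer"}     # stand-in for measurements
print(diagnose(radio, lambda name: name in observed_faults))
# -> ['receiver', 'IF stage', 'mixer']
```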

Relevance:

10.00%

Publisher:

Abstract:

"The Structure and Interpretation of Computer Programs" is the entry-level subject in Computer Science at the Massachusetts Institute of Technology. It is required of all students at MIT who major in Electrical Engineering or in Computer Science, as one fourth of the "common core curriculum," which also includes two subjects on circuits and linear systems and a subject on the design of digital systems. We have been involved in the development of this subject since 1978, and we have taught this material in its present form since the fall of 1980 to approximately 600 students each year. Most of these students have had little or no prior formal training in computation, although most have played with computers a bit and a few have had extensive programming or hardware design experience. Our design of this introductory Computer Science subject reflects two major concerns. First we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute. Secondly, we believe that the essential material to be addressed by a subject at this level, is not the syntax of particular programming language constructs, nor clever algorithms for computing particular functions of efficiently, not even the mathematical analysis of algorithms and the foundations of computing, but rather the techniques used to control the intellectual complexity of large software systems.

Relevance:

10.00%

Publisher:

Abstract:

Lacey, N. J., Lee, M. H. (2003). The Epistemological Foundations of Artificial Agents. Minds and Machines, 13, 339-365.

Relevance:

10.00%

Publisher:

Abstract:

Lee, M. H., Lacey, N. J. (2003). The Influence of Epistemology on the Design of Artificial Agents. Minds and Machines, 13(3), 367-395.

Relevance:

10.00%

Publisher:

Abstract:

Faculty of Geographical and Geological Sciences: Department of the Centre for Tourism and Recreation

Relevance:

10.00%

Publisher:

Abstract:

Faculty of History