940 results for DRAGON (Computer system)
Abstract:
During the development of our ESESOC system (Expert System for the Elucidation of the Structures of Organic Compounds), computer perception of topological symmetry is essential for finding the canonical description of a molecular structure, removing redundant connections in the structure generation process, and specifying the number of peaks in 13C- and 1H-NMR spectra in the structure evaluation process. In the present paper, a new path identifier is introduced and an algorithm for the detection of topological symmetry from a connection table is developed by the all-paths method.
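A minimal sketch of the path-based idea, in Python: each atom's invariant is the multiset of all simple paths leaving it, encoded here simply as tuples of element symbols; atoms with identical invariants fall into the same topological equivalence class. The encoding is a simplified stand-in for the paper's path identifier, and invariants this crude can fail on highly regular graphs.

```python
from collections import defaultdict

def all_paths_from(start, adj, elements):
    """Enumerate all simple paths starting at `start`, encoded as element tuples."""
    paths = []
    def dfs(node, visited, code):
        code = code + (elements[node],)
        paths.append(code)
        for nbr in adj[node]:
            if nbr not in visited:
                dfs(nbr, visited | {nbr}, code)
    dfs(start, {start}, ())
    return paths

def equivalence_classes(adj, elements):
    """Group atoms whose sorted multisets of path codes coincide."""
    classes = defaultdict(list)
    for atom in adj:
        invariant = tuple(sorted(all_paths_from(atom, adj, elements)))
        classes[invariant].append(atom)
    return list(classes.values())

# Isobutane heavy-atom skeleton: the three methyl carbons should fall into
# one equivalence class, the central carbon into another.
adj = {0: [3], 1: [3], 2: [3], 3: [0, 1, 2]}
elements = {0: "C", 1: "C", 2: "C", 3: "C"}
print(equivalence_classes(adj, elements))   # [[0, 1, 2], [3]]
```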
Abstract:
For the exhaustive and irredundant generation of candidate structures in ESESOC (Expert System for the Elucidation of the Structures of Organic Compounds), a new algorithm for computer perception of topological equivalence classes of the nodes (non-hydrogen atoms) …
Abstract:
An expert system for solvent extraction of rare earths has been developed using LISP. The goal of this project was to mimic the chemists' inferential abilities to assist in the process of solvent extraction of rare earths. The system includes frequently used extractants, separation of specific rare earths, recommendation of procedures for the separation of mixtures of rare earths using (2-ethylhexyl)phosphonic acid 2-ethylhexyl monoester, selection of parameters for counter-current extraction and methods for evaluation of the technique, and the economics of the processing. The expert system runs on an IBM-PC/XT.
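As a rough illustration of the rule-style inference such a system encodes, the sketch below maps query conditions to recommendations in Python. The rules, task names, and the "HEH/EHP" abbreviation for the phosphonic acid ester extractant are illustrative placeholders, not real extraction chemistry.

```python
# Hypothetical rule base: condition -> recommendation. Conditions and
# recommendation texts are placeholders for exposition only.
RULES = [
    (lambda q: q["task"] == "separate_mixture" and q["extractant"] == "HEH/EHP",
     "Recommend counter-current extraction; choose stage count from cascade tables."),
    (lambda q: q["task"] == "single_element",
     "Look up the frequently used extractant for the target rare earth."),
]

def advise(query):
    """Fire the first rule whose condition matches, mimicking chained inference."""
    for condition, recommendation in RULES:
        if condition(query):
            return recommendation
    return "No applicable rule; consult the reference database."

print(advise({"task": "separate_mixture", "extractant": "HEH/EHP"}))
```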
Abstract:
The CIAC (Changchun Institute of Applied Chemistry) Comprehensive Information System of Rare Earths is composed of three subsystems, namely extraction data, physicochemical properties, and reference data. This paper describes the databases pertaining to the extraction of rare earths and their physicochemical properties, and discusses the relationships between data retrieval and optimization and between the structures of the extractants and their extraction efficiency. Expert systems for rare earth extraction and for the calculation of thermodynamic parameters are described, and an application of pattern recognition to the classification of rare earth compounds and the prediction of their properties is reported.
Abstract:
Heart disease is one of the main factors causing death in the developed countries. Over several decades, a variety of electronic and computer technologies has been developed to assist clinical practice in cardiac performance monitoring and heart disease diagnosis. Among these methods, ballistocardiography (BCG) has the interesting feature that no electrodes need to be attached to the body during measurement; it therefore offers a potential means of assessing a patient's heart condition in the home. In this paper, a comparison is made between two neural-network-based BCG signal classification models. One system uses principal component analysis (PCA), and the other a discrete wavelet transform, to reduce the input dimensionality. The results indicate that the combined wavelet transform and neural network system performs more reliably than the combined PCA and neural network system. Moreover, the wavelet transform requires no prior knowledge of the statistical distribution of the data samples, and both the computational complexity and the training time are reduced.
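A minimal sketch of the wavelet front end, assuming PyWavelets and scikit-learn are available; synthetic two-class signals stand in for real BCG recordings, and the network architecture is illustrative rather than the paper's.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def reduce_with_wavelet(signal, wavelet="db4", level=4):
    """Keep only the deepest approximation coefficients as a low-dimensional feature."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return coeffs[0]

# Synthetic stand-ins for two classes of 256-sample BCG beats.
t = np.linspace(0, 1, 256)
normal   = [np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(256) for _ in range(50)]
abnormal = [np.sin(2 * np.pi * 8 * t) + 0.3 * rng.standard_normal(256) for _ in range(50)]

X = np.array([reduce_with_wavelet(s) for s in normal + abnormal])
y = np.array([0] * 50 + [1] * 50)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```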
Abstract:
A data manipulation method has been developed for automatic peak recognition and result evaluation in the analysis of organic chlorinated hydrocarbons by dual-column gas chromatography. Based on the retention times of two internal standards, pentachlorotoluene and decachlorobiphenyl, the retention times of chlorinated hydrocarbons can be calibrated automatically and accurately. Peaks can then be identified conveniently by comparing the retention times of samples with the calibrated retention times calculated from the relative retention indices of the standards. Meanwhile, with the suggested two-step evaluation method, the evaluation coefficients and suitable quantitative results for each component can be obtained automatically for practical samples in an analytical system using two columns of different polarities and two internal standards.
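A minimal sketch of the two-standard calibration idea in Python: the observed retention times of pentachlorotoluene and decachlorobiphenyl anchor a linear retention-index scale, and sample peaks are identified by matching calibrated indices against reference values within a tolerance. The reference index values below are hypothetical.

```python
def retention_index(rt, rt_std1, rt_std2):
    """Linear retention index on a 0..100 scale anchored at the two internal standards."""
    return 100.0 * (rt - rt_std1) / (rt_std2 - rt_std1)

# Hypothetical reference indices from a standards run (pentachlorotoluene = 0,
# decachlorobiphenyl = 100 by construction); values are illustrative only.
REFERENCE = {"lindane": 18.5, "p,p'-DDE": 55.2, "p,p'-DDT": 71.9}

def identify(sample_rts, rt_pct, rt_dcb, tol=0.5):
    """Match sample peaks to reference compounds by calibrated retention index."""
    hits = {}
    for rt in sample_rts:
        ri = retention_index(rt, rt_pct, rt_dcb)
        for name, ref in REFERENCE.items():
            if abs(ri - ref) <= tol:
                hits[rt] = name
    return hits

# Standards eluting at 10.0 and 30.0 min in this run; peak times are illustrative.
print(identify([13.7, 21.05, 24.4], rt_pct=10.0, rt_dcb=30.0))
```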
Abstract:
Evolutionary algorithms are a common tool in engineering and in the study of natural evolution. Here we take their use in a new direction by showing how they can be made to implement a universal computer. We consider populations of individuals with genes whose values are the variables of interest. By allowing them to interact with one another in a specified environment with limited resources, we demonstrate the ability to construct any arbitrary logic circuit. We explore models based on the limits of small and large populations, and show examples of such a system in action, implementing a simple logic circuit.
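The paper's universal construction is not reproduced here; as a much simpler illustration of evolving computation, the sketch below uses a conventional genetic algorithm with a resource-limited population to assemble a NAND-gate circuit that computes XOR.

```python
import random
random.seed(1)

# Genome: list of (input_a, input_b) index pairs; each gene is a NAND gate that
# may read the two circuit inputs (indices 0-1) or any earlier gate's output.
N_GATES = 4

def evaluate(genome, a, b):
    wires = [a, b]
    for ia, ib in genome:
        wires.append(1 - (wires[ia] & wires[ib]))   # NAND
    return wires[-1]

def fitness(genome):
    cases = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # XOR truth table
    return sum(evaluate(genome, a, b) == out for a, b, out in cases)

def random_genome():
    return [(random.randrange(2 + i), random.randrange(2 + i)) for i in range(N_GATES)]

def mutate(genome):
    g = list(genome)
    i = random.randrange(N_GATES)
    g[i] = (random.randrange(2 + i), random.randrange(2 + i))
    return g

pop, best = [random_genome() for _ in range(50)], None
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    best = pop[0]
    if fitness(best) == 4:
        break
    # Limited "resources": only the top 10 survive and reproduce.
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]
print("generation", gen, "circuit", best, "fitness", fitness(best))
```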
Abstract:
While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. In this paper we present a context-based vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, Main Street), to categorize new environments (office, corridor, street), and to use that information to provide contextual priors for object recognition (e.g., table, chair, car, computer). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and we show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides real-time feedback to the user.
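A minimal sketch of the place-recognition stage, using block-averaged gradient energy as a crude stand-in for the paper's global image representation, with nearest-neighbor matching against stored places; the images here are synthetic.

```python
import numpy as np

def global_feature(img, grid=4):
    """Crude low-dimensional holistic descriptor: mean gradient energy per cell."""
    gy, gx = np.gradient(img.astype(float))
    energy = gx ** 2 + gy ** 2
    h, w = img.shape
    cells = [energy[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
             for i in range(grid) for j in range(grid)]
    v = np.array(cells)
    return v / (np.linalg.norm(v) + 1e-9)

def recognize_place(img, gallery):
    """Nearest neighbor over stored (feature, label) pairs."""
    f = global_feature(img)
    return min(gallery, key=lambda entry: np.linalg.norm(entry[0] - f))[1]

rng = np.random.default_rng(0)
office, corridor = rng.random((64, 64)), rng.random((64, 64)) * 0.2
gallery = [(global_feature(office), "office"), (global_feature(corridor), "corridor")]
print(recognize_place(office + 0.01 * rng.random((64, 64)), gallery))   # "office"
```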
Abstract:
Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or even billions of nodes. Associated with such large systems is a new set of design challenges. Many problems must be addressed by an architecture in order for it to be successful; of these, we focus on three in particular. First, a scalable memory system is required. Second, the network messaging protocol must be fault-tolerant. Third, the overheads of thread creation, thread management and synchronization must be extremely low. This thesis presents the complete system design for Hamal, a shared-memory architecture which addresses these concerns and is directly scalable to one million nodes. Virtual memory and distributed objects are implemented in a manner that requires neither inter-node synchronization nor the storage of globally coherent translations at each node. We develop a lightweight fault-tolerant messaging protocol that guarantees message delivery and idempotence across a discarding network. A number of hardware mechanisms provide efficient support for massive multithreading and fine-grained synchronization. Experiments are conducted in simulation, using a trace-driven network simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters for the messaging protocol which optimize performance. A discarding network is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network. Our simulations of Hamal demonstrate the effectiveness of its thread management and synchronization primitives. In particular, we find register-based synchronization to be an extremely efficient mechanism which can be used to implement a software barrier with a latency of only 523 cycles on a 512 node machine.
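A minimal sketch of the kind of protocol described: sequence numbers make delivery idempotent (duplicates from retransmission are applied once), and retransmission until acknowledgment guarantees delivery across a channel that may discard either the message or the ack. This shows the general mechanism, not Hamal's actual protocol.

```python
import random
random.seed(0)

class Receiver:
    def __init__(self):
        self.delivered = {}                     # seq -> payload, for duplicate suppression
    def accept(self, seq, payload):
        if seq not in self.delivered:           # idempotence: apply each message once
            self.delivered[seq] = payload
        return ("ack", seq)                     # re-ack duplicates so the sender stops

class Sender:
    def __init__(self, network, receiver):
        self.network, self.receiver, self.next_seq = network, receiver, 0
    def send(self, payload):
        seq = self.next_seq; self.next_seq += 1
        while True:                             # retransmit until acknowledged
            reply = self.network(self.receiver, seq, payload)
            if reply is not None:
                return reply

def lossy_network(receiver, seq, payload, drop=0.4):
    """Discarding network: may drop the message or the ack."""
    if random.random() < drop: return None      # message discarded
    reply = receiver.accept(seq, payload)
    if random.random() < drop: return None      # ack discarded
    return reply

tx = Sender(lossy_network, Receiver())
for word in ["load", "store", "barrier"]:
    tx.send(word)
print(tx.receiver.delivered)    # each message applied exactly once, in order
```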
Abstract:
Control of machines that exhibit flexibility becomes important when designers attempt to push the state of the art with faster, lighter machines. Three steps are necessary for the control of a flexible plant. First, a good model of the plant must exist. Second, a good controller must be designed. Third, inputs to the controller must be constructed using knowledge of the system's dynamic response. There is a great deal of literature pertaining to modeling and control, but little dealing with the shaping of system inputs. Chapter 2 examines two input shaping techniques based on frequency-domain analysis. The first uses the first derivative of a Gaussian exponential as a driving-function template. The second, acausal filtering, removes energy from the driving functions at the resonant frequencies of the system. Chapter 3 presents a linear programming technique for generating vibration-reducing driving functions for systems. Chapter 4 extends the results of the previous chapter by developing a direct solution to the new class of driving functions. A detailed analysis of the new technique is presented from five different perspectives, and several extensions are given. Chapter 5 verifies the theories of the previous two chapters with hardware experiments. Because the new technique resembles common signal filtering, Chapter 6 compares the new approach to eleven standard filters. The new technique is shown to produce less residual vibration, to be more robust to uncertainty in system parameters, and to require less computation than other currently used shaping techniques.
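The thesis's direct-solution technique is not reproduced here; as a baseline of the same family, the sketch below builds the classic two-impulse zero-vibration (ZV) shaper and convolves it with a step command, so the second impulse's response cancels the residual vibration excited by the first.

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse zero-vibration shaper for natural frequency wn (rad/s), damping zeta."""
    wd = wn * np.sqrt(1 - zeta ** 2)                    # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta ** 2))
    amps = np.array([1.0, K]) / (1.0 + K)               # impulse amplitudes sum to 1
    times = np.array([0.0, np.pi / wd])                 # second impulse half a period later
    shaper = np.zeros(int(round(times[1] / dt)) + 1)
    for a, t in zip(amps, times):
        shaper[int(round(t / dt))] += a
    return shaper

dt = 0.001
step = np.ones(2000)                                    # raw step command
shaped = np.convolve(step, zv_shaper(wn=2 * np.pi * 2.0, zeta=0.02, dt=dt))
# `shaped` rises in two stages; the second stage's vibration cancels the first's.
print(shaped[:3].round(3), shaped[-3:].round(3))
```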
Abstract:
A fundamental problem in artificial intelligence is obtaining coherent behavior in rule-based problem solving systems. A good quantitative measure of coherence is time behavior; a system that never, in retrospect, applied a rule needlessly is certainly coherent; a system suffering from combinatorial blowup is certainly behaving incoherently. This report describes a rule-based problem solving system for automatically writing and improving numerical computer programs from specifications. The specifications are in terms of "constraints" among inputs and outputs. The system has solved program synthesis problems involving systems of equations, determining that methods of successive approximation converge, transforming recursion to iteration, and manipulating power series (using differing organizations, control structures, and argument-passing techniques).
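A minimal sketch of the coherence idea: each rule fires only when all of its inputs are known and its output is not, so no rule is ever applied needlessly. The constraints below are illustrative, not the report's.

```python
# Constraints among inputs and outputs, written as (output, inputs, function) rules.
RULES = [
    ("area",      ("width", "height"), lambda w, h: w * h),
    ("perimeter", ("width", "height"), lambda w, h: 2 * (w + h)),
    ("density",   ("mass", "area"),    lambda m, a: m / a),
]

def solve(known, goal):
    """Forward-chain until the goal is derived; a rule fires at most once."""
    known = dict(known)
    progress = True
    while goal not in known and progress:
        progress = False
        for out, ins, fn in RULES:
            if out not in known and all(i in known for i in ins):
                known[out] = fn(*(known[i] for i in ins))
                progress = True
    return known.get(goal)

print(solve({"width": 3.0, "height": 4.0, "mass": 6.0}, "density"))   # 0.5
```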
Abstract:
This report describes a knowledge-base system in which the information is stored in a network of small parallel processing elements (node and link units) which are controlled by an external serial computer. This network is similar to the semantic network system of Quillian, but is much more tightly controlled. Such a network can perform certain critical deductions and searches very quickly; it avoids many of the problems of current systems, which must use complex heuristics to limit and guide their searches. It is argued (with examples) that the key operation in a knowledge-base system is the intersection of large explicit and semi-explicit sets. The parallel network system does this in a small, essentially constant number of cycles; a serial machine takes time proportional to the size of the sets, except in special cases.
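A minimal sketch of marker-passing intersection, simulated serially in Python: markers propagated up is-a links mark one set, a property link marks another, and the intersection is read off as the nodes holding both markers. The hardware would set all markers in parallel in a near-constant number of cycles; the toy network below is hypothetical.

```python
# Semantic network: node -> set of nodes reachable by an "is-a" link.
ISA = {
    "Clyde": {"elephant"}, "elephant": {"mammal"}, "mammal": {"animal"},
    "Tweety": {"canary"}, "canary": {"bird"}, "bird": {"animal"},
}
PROPERTY_OF = {"gray": {"elephant", "battleship"}}

def mark_ancestors(node, marker, marks):
    """Propagate a marker up the is-a hierarchy (done in parallel in hardware)."""
    if marker in marks.setdefault(node, set()):
        return
    marks[node].add(marker)
    for parent in ISA.get(node, ()):
        mark_ancestors(parent, marker, marks)

marks = {}
mark_ancestors("Clyde", 1, marks)               # marker 1: everything Clyde is
for n in PROPERTY_OF["gray"]:                   # marker 2: everything that is gray
    marks.setdefault(n, set()).add(2)

intersection = [n for n, m in marks.items() if {1, 2} <= m]
print(intersection)    # ['elephant']: Clyde is gray because elephants are
```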
Abstract:
PILOT is a programming system constructed in LISP. It is designed to facilitate the development of programs by easing the familiar sequence: write some code, run the program, make some changes, write some more code, run the program again, etc. As a program becomes more complex, making these changes becomes harder and harder because the implications of changes are harder to anticipate. In the PILOT system, the computer plays an active role in this evolutionary process by providing the means whereby changes can be effected immediately, and in ways that seem natural to the user. The user of PILOT feels that he is giving advice, or making suggestions, to the computer about the operation of his programs, and that the system then performs the work necessary. The PILOT system is thus an interface between the user and his program, monitoring both the requests of the user and the operation of his program. The user may easily modify the PILOT system itself by giving it advice about its own operation. This allows him to develop his own language and to shift gradually onto PILOT the burden of performing routine but increasingly complicated tasks. In this way, he can concentrate on the conceptual difficulties in the original problem, rather than on the niggling tasks of editing, rewriting, or adding to his programs. Two detailed examples are presented. PILOT is a first step toward computer systems that will help man to formulate problems in the same way they now help him to solve them. Experience with it supports the claim that such "symbiotic systems" allow the programmer to attack and solve more difficult problems.
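PILOT itself is built in LISP; as a rough Python analogue of its advice mechanism, the sketch below installs user "advice" around a function without editing its source.

```python
def advise(fn, before=None, after=None):
    """Wrap fn with user advice, PILOT-style: change behavior without editing the source."""
    def advised(*args, **kwargs):
        if before:
            args, kwargs = before(args, kwargs)
        result = fn(*args, **kwargs)
        return after(result) if after else result
    return advised

def tokenize(text):
    return text.split()

# Advice: lowercase all input before tokenizing, drop empty tokens afterwards.
tokenize = advise(
    tokenize,
    before=lambda args, kwargs: ((args[0].lower(),), kwargs),
    after=lambda toks: [t for t in toks if t],
)
print(tokenize("Run The  Program"))   # ['run', 'the', 'program']
```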
Abstract:
This paper describes BUILD, a computer program which generates plans for building specified structures out of simple objects such as toy blocks. A powerful heuristic control structure enables BUILD to use a number of sophisticated construction techniques in its plans. Among these are the incorporation of pre-existing structure into the final design, pre-assembly of movable sub-structures on the table, and use of the extra blocks as temporary supports and counterweights in the course of construction. BUILD does its planning in a modeled 3-space in which blocks of various shapes and sizes can be represented in any orientation and location. The modeling system can maintain several world models at once, and contains modules for displaying states, testing them for inter-object contact and collision, and for checking the stability of complex structures involving frictional forces. Various alternative approaches are discussed, and suggestions are included for the extension of BUILD-like systems to other domains. Also discussed are the merits of BUILD's implementation language, CONNIVER, for this type of problem solving.
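A minimal sketch of the stability-checking module in 2-D, ignoring friction and multiple supports (which BUILD handles): a stack is judged stable if, at every interface, the combined center of mass of everything above lies within the supporting block's extent.

```python
def stable(stack):
    """stack: list of blocks bottom-to-top, each (x_left, width, mass).
    At each interface, the combined center of mass of all blocks above
    must lie within the supporting block's horizontal extent."""
    for i in range(len(stack) - 1):
        above = stack[i + 1:]
        total_mass = sum(m for _, _, m in above)
        com = sum((x + w / 2) * m for x, w, m in above) / total_mass
        x_sup, w_sup, _ = stack[i]
        if not (x_sup <= com <= x_sup + w_sup):
            return False
    return True

tower   = [(0.0, 4.0, 1.0), (1.0, 4.0, 1.0), (2.0, 4.0, 1.0)]   # modest stagger
leaning = [(0.0, 4.0, 1.0), (3.0, 4.0, 1.0), (6.0, 4.0, 1.0)]   # overhangs too far
print(stable(tower), stable(leaning))   # True False
```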
Abstract:
This thesis describes a mechanical assembly system called LAMA (Language for Automatic Mechanical Assembly). The goal of the work was to create a mechanical assembly system that transforms a high-level description of an automatic assembly operation into a program for execution by a computer-controlled manipulator. This system allows the initial description of the assembly to be in terms of the desired effects on the parts being assembled. Languages such as WAVE [Bolles & Paul] and MINI [Silver] fail to meet this goal by requiring the assembly operation to be described in terms of manipulator motions. This research concentrates on the spatial complexity of mechanical assembly operations. The assembly problem is seen as the problem of achieving a certain set of geometrical constraints between basic objects while avoiding unwanted collisions. The thesis explores how these two facets, desired constraints and unwanted collisions, affect the primitive operations of the domain.
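A minimal sketch of the contrast LAMA draws: the user states a desired geometric effect, and the system, not the user, expands it into manipulator motions. The constraint and motion vocabularies below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Fits:                 # desired effect: peg seated in hole (hypothetical vocabulary)
    peg: str
    hole: str

def compile_to_motions(goal):
    """Expand a desired geometric constraint into manipulator-level steps.
    In a WAVE/MINI-style language the user would write these steps directly."""
    return [
        f"grasp {goal.peg}",
        f"move {goal.peg} above {goal.hole}",     # collision-free transfer
        f"insert {goal.peg} into {goal.hole}",    # guarded, compliant motion
        f"release {goal.peg}",
    ]

for step in compile_to_motions(Fits(peg="shaft", hole="bearing")):
    print(step)
```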