Abstract:
This report describes research about flow graphs -- labeled, directed, acyclic graphs which are abstract representations used in a variety of Artificial Intelligence applications. Flow graphs may be derived from flow grammars much as strings may be derived from string grammars; this derivation process forms a useful model for the stepwise refinement processes used in programming and other engineering domains. The central result of this report is a parsing algorithm for flow graphs. Given a flow grammar and a flow graph, the algorithm determines whether the grammar generates the graph and, if so, finds all possible derivations for it. The author has implemented the algorithm in LISP. The intent of this report is to make flow-graph parsing available as an analytic tool for researchers in Artificial Intelligence. The report explores the intuitions behind the parsing algorithm, contains numerous, extensive examples of its behavior, and provides some guidance for those who wish to customize the algorithm to their own uses.
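To make the derivation process concrete, the sketch below shows one flow-grammar production rewriting a nonterminal node into a subgraph, the graph analogue of a string-grammar rewrite. This is illustrative Python under a simplified single-entry/single-exit assumption, not the report's LISP implementation; all names are hypothetical.

```python
# A minimal sketch of flow-graph derivation: a production rewrites one
# nonterminal node into a small acyclic subgraph. Hypothetical names.
from dataclasses import dataclass

@dataclass
class Production:
    lhs: str            # nonterminal label to rewrite
    sub_labels: dict    # local node id -> label in the replacement subgraph
    sub_edges: set      # directed edges within the subgraph
    entry: str          # local id that receives the old node's in-edges
    exit: str           # local id that emits the old node's out-edges

def expand(labels, edges, node, prod, fresh):
    """Apply `prod` to `node`, returning a new (labels, edges) pair."""
    assert labels[node] == prod.lhs
    rename = {k: fresh(k) for k in prod.sub_labels}      # avoid id clashes
    new_labels = {n: l for n, l in labels.items() if n != node}
    new_labels.update({rename[k]: l for k, l in prod.sub_labels.items()})
    new_edges = set()
    for s, d in edges:
        s2 = rename[prod.exit] if s == node else s       # out-edges leave exit
        d2 = rename[prod.entry] if d == node else d      # in-edges enter entry
        new_edges.add((s2, d2))
    new_edges |= {(rename[s], rename[d]) for s, d in prod.sub_edges}
    return new_labels, new_edges

# Deriving a two-step pipeline from a single nonterminal "Filter":
labels, edges = {"n0": "Filter"}, set()
p = Production("Filter", {"a": "read", "b": "transform"}, {("a", "b")}, "a", "b")
counter = [0]
def fresh(k):
    counter[0] += 1
    return f"{k}{counter[0]}"
labels, edges = expand(labels, edges, "n0", p, fresh)
print(labels, edges)  # {'a1': 'read', 'b2': 'transform'} {('a1', 'b2')}
```

Parsing runs this process in reverse, searching for a sequence of such rewrites that reduces the observed graph back to the grammar's start symbol.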
Abstract:
We wish to design a diagnostic for a device from knowledge of its structure and function. The diagnostic should achieve coverage of the faults that can occur in the device, and should strive for specificity in its diagnosis when it detects a fault. A system is described that uses a simple model of hardware structure and function, representing the device in terms of its internal primitive functions and connections. The system designs a diagnostic in three steps. First, an extension of path sensitization is used to design a test for each of the connections in the device. Next, the resulting tests are improved by increasing their specificity. Finally, the tests are ordered so that each relies on the fewest possible connections. We describe an implementation of this system and show examples of the results for some simple devices.
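As a rough illustration of the first step, the sketch below finds, by exhaustive search, an input vector under which a stuck fault on an internal wire becomes visible at the output. Real path sensitization derives such vectors symbolically; this brute-force version and its tiny two-gate device are hypothetical stand-ins, not the report's algorithm.

```python
# Brute-force test generation for one connection of a toy device.
from itertools import product

def simulate(inputs, stuck=None):
    """A toy device: w = a AND b, out = w OR c. `stuck` forces wire w."""
    a, b, c = inputs
    w = (a and b) if stuck is None else stuck
    return w or c

def test_for_wire(stuck_value):
    """Return an input vector that detects wire w stuck-at `stuck_value`."""
    for vec in product([False, True], repeat=3):
        if simulate(vec) != simulate(vec, stuck=stuck_value):
            return vec
    return None

print(test_for_wire(False))  # (True, True, False): good w=1, faulty w=0
print(test_for_wire(True))   # (False, False, False): good w=0, faulty w=1
```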
Abstract:
Geologic interpretation is the task of inferring a sequence of events to explain how a given geologic region could have been formed. This report describes the design and implementation of one part of a geologic interpretation problem solver -- a system which uses a simulation technique called imagining to check the validity of a candidate sequence of events. Imagining uses a combination of qualitative and quantitative simulations to reason about the changes which occurred to the geologic region. The spatial changes which occur are simulated by constructing a sequence of diagrams. The quantitative simulation needs numeric parameters, which are determined by using the qualitative simulation to establish the cumulative changes to an object and by using a description of the current geologic region to make quantitative measurements. The diversity of reasoning skills used in imagining has necessitated the development of multiple representations, each specialized for a different task. Representations that facilitate temporal, spatial, and numeric reasoning are described in detail. We have also found it useful to explicitly represent processes. Both the qualitative and quantitative simulations use a discrete 'layer cake' model of geologic processes, but each uses a separate representation, specialized to support the type of simulation. These multiple representations have enabled us to develop a powerful, yet modular, system for reasoning about change.
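A hypothetical rendering of the discrete 'layer cake' idea: the region as a stack of named strata, with events replayed in sequence so a candidate history can be checked against the observed region. The event set and names are invented for illustration; the report's representations are richer.

```python
# Replay a candidate event history over a layer-cake model of strata.
def deposit(layers, name, thickness):
    return layers + [(name, thickness)]

def erode(layers, depth):
    """Remove `depth` units of material from the top of the stack."""
    out = list(layers)
    while depth > 0 and out:
        name, t = out.pop()
        if t > depth:
            out.append((name, t - depth))
            depth = 0
        else:
            depth -= t
    return out

history = [("deposit", "shale", 5), ("deposit", "sandstone", 3), ("erode", 2)]
layers = []
for event in history:
    if event[0] == "deposit":
        layers = deposit(layers, event[1], event[2])
    else:
        layers = erode(layers, event[1])
print(layers)  # [('shale', 5), ('sandstone', 1)] -- compare to observed region
```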
Abstract:
The visual analysis of surface shape from texture and surface contour is treated within a computational framework. The aim of this study is to determine valid constraints that are sufficient to allow surface orientation and distance (up to a multiplicative constant) to be computed from the image of surface texture and of surface contours.
Abstract:
We present the results of an implemented system for learning structural prototypes from grey-scale images. We show how to divide an object into subparts and how to encode the properties of these subparts and the relations between them. We discuss the importance of hierarchy and grouping in representing objects and show how a notion of visual similarity can be embedded in the description language. Finally, we exhibit a learning algorithm that forms class models from the descriptions produced and uses these models to recognize new members of the class.
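The sketch below suggests the flavor of such a structural description: a hierarchy of subparts with local properties, plus relations between siblings, and a crude similarity measure over parts. Property and relation names are invented; the report's actual description language differs.

```python
# A hypothetical part/relation description of an object.
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    properties: dict            # e.g. {"elongation": 0.9, "size": "long"}
    subparts: list = field(default_factory=list)

@dataclass
class Relation:
    kind: str                   # e.g. "connected-to", "perpendicular"
    a: str
    b: str

hammer = Part("hammer", {"elongation": 0.9}, [
    Part("handle", {"elongation": 0.95, "size": "long"}),
    Part("head", {"elongation": 0.3, "size": "small"}),
])
relations = [Relation("connected-to", "handle", "head"),
             Relation("perpendicular", "handle", "head")]

def similarity(p, q):
    """Crude part similarity: fraction of shared properties that match."""
    keys = set(p.properties) & set(q.properties)
    same = sum(p.properties[k] == q.properties[k] for k in keys)
    return same / len(keys) if keys else 0.0

print(similarity(hammer.subparts[0], hammer.subparts[1]))  # 0.0: dissimilar
```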
Abstract:
A mobile robot needs redundant sensors to increase the accuracy with which it perceives its surroundings. Sonar and infrared sensors are used here in tandem, each compensating for deficiencies in the other. The robot combines the data from both sensors to build a representation which is more accurate than if either sensor were used alone. Another representation, the curvature primal sketch, is extracted from this perceived workspace and is used as the input to two path-planning programs: one based on configuration space and one based on a generalized cone formulation of free space.
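The abstract does not give the fusion rule; as one standard illustrative scheme, inverse-variance weighting combines a sonar and an infrared range reading for the same direction so the fused estimate is more accurate than either sensor alone (assuming independent zero-mean Gaussian errors; the variances below are hypothetical).

```python
# Inverse-variance fusion of two noisy range readings.
def fuse(z_sonar, var_sonar, z_ir, var_ir):
    """Combine two readings into one estimate and its (smaller) variance."""
    w_s = 1.0 / var_sonar
    w_i = 1.0 / var_ir
    z = (w_s * z_sonar + w_i * z_ir) / (w_s + w_i)
    return z, 1.0 / (w_s + w_i)

z, var = fuse(2.10, 0.04, 1.95, 0.01)  # metres; IR assumed more precise here
print(round(z, 3), round(var, 4))       # 1.98 0.008 < min(0.04, 0.01)
```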
Abstract:
In this thesis we study the general problem of reconstructing a function, defined on a finite lattice, from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over the existing schemes, both regarding the quality of the results (particularly for low signal-to-noise ratios) and the computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each one of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, which has no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms that we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
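A minimal sketch of the Bayesian formulation, with hypothetical parameters: a Gibbsian smoothness prior on a 1-D lattice plus Gaussian noise give a posterior energy U(f) = sum_i (f_i - g_i)^2 / (2 sigma^2) + beta * sum_i [f_i != f_{i+1}], minimized here by simple coordinate descent. This is a crude deterministic approximation for illustration, not the thesis's Monte Carlo estimators.

```python
# Piecewise-constant restoration by coordinate descent on a posterior energy.
def restore(g, labels, sigma2=0.05, beta=0.5, sweeps=10):
    # start by snapping each observation to the nearest allowed label
    f = [min(labels, key=lambda l: (l - x) ** 2) for x in g]
    for _ in range(sweeps):
        for i in range(len(f)):
            def energy(l):
                e = (l - g[i]) ** 2 / (2 * sigma2)   # data fidelity term
                if i > 0:
                    e += beta * (l != f[i - 1])      # smoothness w.r.t. left
                if i < len(f) - 1:
                    e += beta * (l != f[i + 1])      # smoothness w.r.t. right
                return e
            f[i] = min(labels, key=energy)
    return f

noisy = [0.1, -0.1, 0.2, 0.9, 1.1, 1.0, 0.15]
print(restore(noisy, labels=[0.0, 1.0]))  # piecewise constant: 0,0,0,1,1,1,0
```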
Abstract:
With the push towards sub-micron technology, transistor models have become increasingly complex. The number of components in integrated circuits has forced designers' efforts and skills towards higher levels of design. This has created a gap between design expertise and the performance demands increasingly imposed by the technology. To alleviate this problem, software tools must be developed that provide the designer with expert advice on circuit performance and design. This requires a theory that links the intuitions of an expert circuit analyst with the corresponding principles of formal theory (i.e., algebra, calculus, feedback analysis, network theory, and electrodynamics), and that makes each underlying assumption explicit.
Abstract:
Objects move, collide, flow, bend, heat up, cool down, stretch, compress and boil. These and other things that cause changes in objects over time are intuitively characterized as processes. To understand common sense physical reasoning and make programs that interact with the physical world as well as people do we must understand qualitative reasoning about processes, when they will occur, their effects, and when they will stop. Qualitative Process theory defines a simple notion of physical process that appears useful as a language in which to write dynamical theories. Reasoning about processes also motivates a new qualitative representation for quantity in terms of inequalities, called quantity space. This report describes the basic concepts of Qualitative Process theory, several different kinds of reasoning that can be performed with them, and discusses its impact on other issues in common sense reasoning about the physical world, such as causal reasoning and measurement interpretation. Several extended examples illustrate the utility of the theory, including figuring out that a boiler can blow up, that an oscillator with friction will eventually stop, and how to say that you can pull with a string but not push with it. This report also describes GIZMO, an implemented computer program which uses Qualitative Process theory to make predictions and interpret simple measurements. The represnetations and algorithms used in GIZMO are described in detail, and illustrated using several examples.
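A small sketch of a quantity space as described above: a quantity's value is characterized only by its ordinal relations to landmark values, never by a number. The representation and the boiling example below are simplified illustrations of the idea, not GIZMO's actual structures.

```python
# A quantity space: partial order over quantities and landmark values.
class QuantitySpace:
    def __init__(self):
        self.greater = set()                  # known (a, b) pairs: a > b

    def assert_greater(self, a, b):
        self.greater.add((a, b))
        changed = True                        # keep the order transitively closed
        while changed:
            changed = False
            for (x, y) in list(self.greater):
                for (y2, z) in list(self.greater):
                    if y == y2 and (x, z) not in self.greater:
                        self.greater.add((x, z))
                        changed = True

    def compare(self, a, b):
        if (a, b) in self.greater: return ">"
        if (b, a) in self.greater: return "<"
        return "?"                            # ambiguous: order not entailed

qs = QuantitySpace()
qs.assert_greater("T(boiling)", "T(water)")   # water is below its boiling point
qs.assert_greater("T(water)", "T(room)")
print(qs.compare("T(boiling)", "T(room)"))    # '>' by transitivity
print(qs.compare("T(stove)", "T(water)"))     # '?' -- not constrained
```

Ambiguity ('?') is the interesting case: it is what forces a qualitative reasoner to branch over possible behaviors rather than simulate a single numeric trajectory.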
Abstract:
This report describes a computer system that creates simple computer animation in response to high-level, vague, and incomplete descriptions of films. It makes its films by collecting and evaluating suggestions from several different bodies of knowledge. The order in which it makes its choices is influenced by the focus of the film. Difficult choices are postponed, to be resumed when more of the film has been determined. The system was implemented in an object-oriented language based upon computational entities called "actors". The goal behind the construction of the system is that, whenever faced with a choice, it should sensibly choose between alternatives based upon the description of the film and as much general knowledge as possible. The system is presented as a computational model of creativity and aesthetics.
Abstract:
This report describes a domain-independent reasoning system. The system uses a frame-based knowledge representation language and various reasoning techniques, including constraint propagation, progressive refinement, natural deduction and explicit control of reasoning. A computational architecture based on active objects which operate by exchanging messages is developed, and it is shown how this architecture supports reasoning activity. The user interacts with the system by specifying frames and by giving descriptions defining the problem situation. The system uses its reasoning capacity to build up a model of the problem situation from which a solution can be interactively extracted. Examples are discussed from a variety of domains, including electronic circuits, mechanical devices and music. The main thesis is that a reasoning system is best viewed as a parallel system whose control and data are distributed over a large network of processors that interact by exchanging messages. Such a system will be metaphorically described as a society of communicating experts.
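A schematic sketch of the "society of communicating experts" view: active objects that hold local state and interact only by exchanging messages. The toy constraint-propagation example (an adder frame relating a, b, and a+b) is invented for illustration and is not the report's language.

```python
# Active objects exchanging messages to propagate constraints.
class Cell:
    def __init__(self, name):
        self.name, self.value, self.watchers = name, None, []

    def tell(self, value):                 # a message: "your value is ..."
        if self.value is None:
            self.value = value
            for w in self.watchers:
                w.notify()                 # forward the news to constraints

class Adder:
    """An active object enforcing a + b = total by local propagation."""
    def __init__(self, a, b, total):
        self.a, self.b, self.total = a, b, total
        for c in (a, b, total):
            c.watchers.append(self)

    def notify(self):
        a, b, t = self.a.value, self.b.value, self.total.value
        if a is not None and b is not None and t is None:
            self.total.tell(a + b)
        elif a is not None and t is not None and b is None:
            self.b.tell(t - a)
        elif b is not None and t is not None and a is None:
            self.a.tell(t - b)

x, y, z = Cell("x"), Cell("y"), Cell("z")
Adder(x, y, z)
x.tell(3); z.tell(10)
print(y.value)   # 7, deduced purely by message exchange
```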
Abstract:
This thesis describes a new representation for two-dimensional round regions called Local Rotational Symmetries. Local Rotational Symmetries are intended as a companion to Brady's Smoothed Local Symmetry representation for elongated shapes. An algorithm for computing Local Rotational Symmetry representations at multiple scales of resolution has been implemented, and results of this implementation are presented. These results suggest that Local Rotational Symmetries provide a more robustly computable and perceptually accurate description of round regions than previously proposed representations. In the course of developing this representation, it has been necessary to modify the way both Smoothed Local Symmetries and Local Rotational Symmetries are computed. First, grey-scale image smoothing proves to be better than boundary smoothing for creating representations at multiple scales of resolution, because it is more robust and it allows qualitative changes in representations between scales. Secondly, it is proposed that shape representations at different scales of resolution be explicitly related, so that information can be passed between scales and computation at each scale can be kept local. Such a model for multi-scale computation is desirable both to allow efficient computation and to accurately model human perception.
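A minimal sketch of the grey-scale smoothing step argued for above, using SciPy's Gaussian filter at a few scales (the image and sigmas are arbitrary illustrations); the symmetry detection itself is beyond this sketch.

```python
# Build a coarse-to-fine stack of grey-scale smoothed images.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.zeros((64, 64))
image[20:44, 20:44] = 1.0                 # a synthetic round-ish region

scales = [1.0, 2.0, 4.0]                  # resolution levels (sigma in pixels)
pyramid = [gaussian_filter(image, sigma=s) for s in scales]
for s, im in zip(scales, pyramid):
    print(f"sigma={s}: max={im.max():.3f}")  # blur spreads and lowers the peak
```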
Abstract:
How much information about the shape of an object can be inferred from its image? In particular, can the shape of an object be reconstructed by measuring the light it reflects from points on its surface? These questions were raised by Horn [HO70], who formulated a set of conditions such that the image formation can be described in terms of a first order partial differential equation, the image irradiance equation. In general, an image irradiance equation has infinitely many solutions. Thus constraints necessary to find a unique solution need to be identified. First we study the continuous image irradiance equation. It is demonstrated when and how the knowledge of the position of edges on a surface can be used to reconstruct the surface. Furthermore we show how much about the shape of a surface can be deduced from so-called singular points. At these points the surface orientation is uniquely determined by the measured brightness. Then we investigate images in which certain types of silhouettes, which we call b-silhouettes, can be detected. In particular we answer the following question in the affirmative: Is there a set of constraints which assure that if an image irradiance equation has a solution, it is unique? To this end we postulate three constraints upon the image irradiance equation and prove that they are sufficient to uniquely reconstruct the surface from its image. Furthermore it is shown that any two of these constraints are insufficient to assure a unique solution to an image irradiance equation. Examples are given which illustrate the different issues. Finally, an overview of known numerical methods for computing solutions to an image irradiance equation is presented.
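For orientation, the first-order partial differential equation referred to above can be written in the standard gradient-space notation following Horn's formulation (a sketch of the usual form, not a quotation from the thesis):

```latex
% Gradient-space form of the image irradiance equation.
\[
  E(x, y) = R(p, q), \qquad
  p = \frac{\partial z}{\partial x}, \quad
  q = \frac{\partial z}{\partial y},
\]
% where $E$ is the measured image brightness, $R$ is the reflectance map,
% and $z(x, y)$ is the surface height. A singular point is one where the
% observed brightness is attained by a unique $(p, q)$, so the surface
% orientation there is fixed by the measurement alone.
```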
Abstract:
Handwriting production is viewed as a constrained modulation of an underlying oscillatory process. Coupled oscillations in horizontal and vertical directions produce letter forms, and when superimposed on a rightward constant-velocity horizontal sweep result in spatially separated letters. Modulation of the vertical oscillation is responsible for control of letter height, either through altering the frequency or altering the acceleration amplitude. Modulation of the horizontal oscillation is responsible for control of corner shape through altering phase or amplitude. The vertical velocity zero crossing in the velocity space diagram is important from the standpoint of control. Changing the horizontal velocity value at this zero crossing controls corner shape, and such changes can be effected through modifying the horizontal oscillation amplitude and phase. Changing the slope at this zero crossing controls writing slant; this slope depends on the horizontal and vertical velocity amplitudes and on the relative phase difference. Letter height modulation is also best applied at the vertical velocity zero crossing to preserve an even baseline. The corner shape and slant constraints completely determine the amplitude and phase relations between the two oscillations. Under these constraints, interletter separation is not an independent parameter. This theory applies generally to a number of acceleration oscillation patterns, such as sinusoidal, rectangular and trapezoidal oscillations. The oscillation theory also provides an explanation for how handwriting might degenerate with speed. An implementation of the theory in the context of the spring muscle model is developed. Here sinusoidal oscillations arise from purely mechanical sources; orthogonal antagonistic spring pairs generate particular cycloids depending on the initial conditions. Modulating between cycloids can be achieved by changing the spring zero settings at the appropriate times. Frequency can be modulated either by shifting between coactivation and alternating activation of the antagonistic springs or by presuming springs with variable spring constants. An acceleration- and position-measuring apparatus was developed for measurements of human handwriting. Measurements of human writing are consistent with the oscillation theory. It is shown that the minimum-energy movement for the spring muscle is bang-coast-bang. For certain parameter values a singular arc solution can be shown to be minimizing. Experimental measurements, however, indicate that handwriting is not a minimum-energy movement.
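A compact sketch of the core model: coupled sinusoidal oscillations in x and y superimposed on a constant rightward sweep. The specific amplitudes, frequency, phase, and sweep velocity below are invented to show the mechanism, not fitted to handwriting data.

```python
# Pen trajectory = horizontal sweep + coupled x/y oscillations.
import numpy as np

def trajectory(t, v=1.0, ax=0.6, ay=1.0, omega=2 * np.pi, phi=np.pi / 2):
    x = v * t + ax * np.sin(omega * t + phi)   # phase phi shapes the corners
    y = ay * np.sin(omega * t)                 # amplitude ay sets letter height
    return x, y

t = np.linspace(0.0, 3.0, 601)                 # three cycles, i.e. three "letters"
x, y = trajectory(t)

# Inspect a vertical-velocity zero crossing (t = 0.75): per the theory, the
# horizontal velocity there controls corner shape.
dt = t[1] - t[0]
i = 150                                        # index of t = 0.75
print(np.gradient(y, dt)[i], np.gradient(x, dt)[i])  # ~0.0 and ~4.77
```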
Abstract:
This report investigates the process of focussing as a description and explanation of the comprehension of certain anaphoric expressions in English discourse. The investigation centers on the interpretation of definite anaphora, that is, on personal pronouns and noun phrases used with a definite article: the, this or that. Focussing is formalized as a process in which a speaker centers attention on a particular aspect of the discourse. An algorithmic description specifies what the speaker can focus on and how the speaker may change the focus of the discourse as the discourse unfolds. The algorithm allows for a simple focussing mechanism to be constructed: an element in focus, an ordered collection of alternate foci, and a stack of old foci. The data structure for the element in focus is a representation which encodes a limited set of associations between it and other elements from the discourse as well as from general knowledge.
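A minimal sketch of the focussing mechanism as enumerated above: a current focus, an ordered list of alternate foci, and a stack of old foci. The update rules here are a plausible simplification for illustration, not the report's algorithm.

```python
# Focus state for resolving definite anaphora in discourse.
class FocusState:
    def __init__(self, focus, alternates=()):
        self.focus = focus                   # element currently in focus
        self.alternates = list(alternates)   # ordered candidate foci, tried
        self.stack = []                      # if the focus fails to resolve

    def shift_to(self, element):
        """Move focus to a sub-topic, saving the old focus for later return."""
        self.stack.append(self.focus)
        self.focus = element

    def pop_focus(self):
        """Return to the enclosing topic when the sub-topic closes."""
        self.focus = self.stack.pop()

    def resolve_pronoun(self):
        """Prefer the element in focus as referent of a definite pronoun."""
        return self.focus

state = FocusState("the meeting", alternates=["the agenda", "the room"])
state.shift_to("the agenda")                 # discourse moves to a sub-topic
print(state.resolve_pronoun())               # 'the agenda' -> likely "it"
state.pop_focus()
print(state.resolve_pronoun())               # back to 'the meeting'
```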