982 results for motion picture
Abstract:
We present psychophysical experiments that measure the accuracy of perceived 3D structure derived from relative image motion. The experiments are motivated by Ullman's incremental rigidity scheme, which builds up 3D structure incrementally over an extended time. Our main conclusions are: first, the human system derives an accurate model of the relative depths of moving points, even in the presence of noise; second, the accuracy of 3D structure improves with time, eventually reaching a plateau; and third, the 3D structure currently perceived depends on previous 3D models. Through computer simulations, we relate the psychophysical observations to the behavior of Ullman's model.
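For orientation, Ullman's incremental rigidity scheme maintains an internal 3-D model and, at each new frame, updates the depths so that the model changes as little as possible, i.e., stays maximally rigid. The sketch below is an illustrative single update step under orthographic projection; the cost weighting, optimizer, and point layout are assumptions for the sketch, not the scheme as specified in the original work or as simulated in this paper.

    import numpy as np
    from scipy.optimize import minimize

    def update_depths(xy_new, xy_prev, Z_prev):
        """One illustrative update step: choose new depths Z so that 3-D
        inter-point distances change as little as possible relative to the
        previous internal model (a rigidity-preserving update)."""
        P_prev = np.column_stack([xy_prev, Z_prev])
        d_prev = np.linalg.norm(P_prev[:, None] - P_prev[None, :], axis=2)
        off = ~np.eye(len(Z_prev), dtype=bool)   # ignore zero self-distances

        def deviation_from_rigidity(Z):
            P = np.column_stack([xy_new, Z])
            d = np.linalg.norm(P[:, None] - P[None, :], axis=2)
            return np.sum((d[off] - d_prev[off]) ** 2 / d_prev[off] ** 3)

        return minimize(deviation_from_rigidity, Z_prev).x

Iterating this update as new image positions arrive gives the incremental build-up of structure over time that the experiments probe.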
Abstract:
We address the computational role that the construction of a complete surface representation may play in the recovery of 3-D structure from motion. We present a model that combines a feature-based structure-from-motion algorithm with smooth surface interpolation. This model can represent multiple surfaces in a given viewing direction, incorporates surface constraints from object boundaries, and groups image features using their 2-D image motion. Computer simulations relate the model's behavior to perceptual observations. In a companion paper, we discuss further perceptual experiments regarding the role of surface reconstruction in the human recovery of 3-D structure from motion.
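To illustrate the surface-interpolation stage in isolation: given sparse feature points with recovered depths, a smooth depth surface can be interpolated between them. The sketch below uses SciPy's generic scattered-data interpolation as a stand-in; the model's actual interpolation and its handling of boundaries and multiple surfaces are more involved, and the point values here are made up.

    import numpy as np
    from scipy.interpolate import griddata

    # sparse feature points (x, y) with depths recovered by structure from motion
    # (hypothetical values, for illustration only)
    xy = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9], [0.2, 0.7], [0.9, 0.8]])
    depth = np.array([1.0, 1.4, 0.9, 1.2, 1.1])

    # interpolate a smooth depth surface between the sparse features
    grid_x, grid_y = np.mgrid[0.0:1.0:50j, 0.0:1.0:50j]
    surface = griddata(xy, depth, (grid_x, grid_y), method='cubic')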
Abstract:
Earlier, we introduced a direct method called fixation for the recovery of shape and motion in the general case. The method uses neither feature correspondence nor optical flow. Instead, it directly employs the spatiotemporal gradients of image brightness. This work reports the experimental results of applying some of our fixation algorithms to a sequence of real images where the motion is a combination of translation and rotation. These results show that parameters such as the fixation patch size have crucial effects on the estimation of some motion parameters. Some of the critical issues involved in the implementation of our autonomous motion vision system are also discussed here. Among these are the criteria for the automatic choice of an optimum size for the fixation patch, and an appropriate location for the fixation point, which result in good estimates for important motion parameters. Finally, a calibration method is described for identifying the real location of the rotation axis in imaging systems.
Abstract:
A key question regarding primate visual motion perception is whether the motion of 2D patterns is recovered by tracking distinctive localizable features [Lorenceau and Gorea, 1989; Rubin and Hochstein, 1992] or by integrating ambiguous local motion estimates [Adelson and Movshon, 1982; Wilson and Kim, 1992]. For a two-grating plaid pattern, this translates to either tracking the grating intersections or to appropriately combining the motion estimates for each grating. Since both component and feature information are simultaneously available in any plaid pattern made of contrast-defined gratings, it is unclear how to determine which of the two schemes is actually used to recover the plaid's motion. To address this problem, we have designed a plaid pattern made with subjective, rather than contrast-defined, gratings. The distinguishing characteristic of such a plaid pattern is that it contains no contrast-defined intersections that may be tracked. We find that, notwithstanding the absence of such features, observers can accurately recover the pattern velocity. Additionally, we show that the hypothesis of tracking "illusory features" to estimate pattern motion does not stand up to experimental test. These results present direct evidence in support of the idea that calls for the integration of component motions over the one that mandates tracking localized features to recover 2D pattern motion. The localized features, we suggest, are used primarily as providers of grouping information - which component motion signals to integrate and which not to.
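For reference, the "integration" route can be made concrete with the standard intersection-of-constraints computation: each grating constrains the pattern velocity v to a line v . n = s, where n is the grating's unit normal and s its measured normal speed, so two gratings give a 2x2 linear system. This is the generic textbook computation, not the specific model tested in the paper; the numbers below are arbitrary.

    import numpy as np

    # unit normals of the two gratings and their measured normal speeds
    n1, s1 = np.array([np.cos(np.deg2rad( 30)), np.sin(np.deg2rad( 30))]), 1.0
    n2, s2 = np.array([np.cos(np.deg2rad(150)), np.sin(np.deg2rad(150))]), 1.0

    # the pattern velocity v satisfies n1 . v = s1 and n2 . v = s2
    v = np.linalg.solve(np.stack([n1, n2]), np.array([s1, s2]))
    print(v)   # the unique velocity consistent with both component motions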
Abstract:
Compliant control is a standard method for performing fine manipulation tasks, like grasping and assembly, but it requires estimation of the state of contact between the robot arm and the objects involved. Here we present a method to learn a model of the movement from measured data. The method requires little or no prior knowledge and the resulting model explicitly estimates the state of contact. The current state of contact is viewed as the hidden state variable of a discrete HMM. The control-dependent transition probabilities between states are modeled as parametrized functions of the measurement. We show that their parameters can be estimated from measurements concurrently with the estimation of the parameters of the movement in each state of contact. The learning algorithm is a variant of the EM procedure. The E step is computed exactly; solving the M step exactly would require solving a set of coupled nonlinear algebraic equations in the parameters. Instead, gradient ascent is used to produce an increase in likelihood.
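To illustrate the flavor of the approximate M step described above (a gradient step rather than an exact solve), the sketch below assumes a softmax parametrization of the control-dependent transition probabilities and takes one gradient-ascent step on the expected complete-data log-likelihood, given expected transition counts from the E step. The parametrization, feature vector, and learning rate are assumptions for the sketch, not the paper's.

    import numpy as np

    def transition_matrix(theta, u):
        # theta has shape (S, S, 2): one weight vector per (state i, state j)
        # acting on the feature vector [1, u] of the control/measurement u.
        logits = theta @ np.array([1.0, u])              # (S, S)
        logits -= logits.max(axis=1, keepdims=True)
        P = np.exp(logits)
        return P / P.sum(axis=1, keepdims=True)          # rows sum to 1

    def m_step_gradient(theta, xi, controls, lr=0.1):
        # xi[t][i, j] is the expected count of transitions i -> j at time t,
        # produced by the exact E step.  Take one gradient-ascent step on the
        # expected complete-data log-likelihood instead of solving the coupled
        # nonlinear equations of the exact M step.
        grad = np.zeros_like(theta)
        for t, u in enumerate(controls):
            P = transition_matrix(theta, u)
            row_counts = xi[t].sum(axis=1, keepdims=True)
            err = xi[t] - row_counts * P                 # softmax gradient term
            grad += err[:, :, None] * np.array([1.0, u]) # outer product with [1, u]
        return theta + lr * grad

Alternating this step with an exact forward-backward E step yields the monotone increase in likelihood that the abstract refers to.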
Abstract:
We describe a new method for motion estimation and 3D reconstruction from stereo image sequences obtained by a stereo rig moving through a rigid world. We show that, given two stereo pairs, one can compute the motion of the stereo rig directly from the image derivatives (spatial and temporal). Correspondences are not required. One can then use the images from both pairs combined to compute a dense depth map. The motion estimates between stereo pairs enable us to combine depth maps from all the pairs in the sequence to form an extended scene reconstruction, and we show results from a real image sequence. The motion computation is a linear least-squares computation using all the pixels in the image. Areas with little or no contrast are implicitly weighted less, so one does not have to explicitly apply a confidence measure.
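The flavor of such a direct, correspondence-free least-squares computation can be shown for the simplest case of a pure image-plane translation: stacking the brightness-constancy constraint Ix*u + Iy*v + It = 0 over all pixels gives a linear least-squares problem, and low-contrast pixels enter with small gradients and so are implicitly down-weighted. The actual method estimates the full rigid motion of the stereo rig from two stereo pairs; this is only a simplified stand-in.

    import numpy as np

    def estimate_translation(Ix, Iy, It):
        # Stack the brightness-constancy constraint Ix*u + Iy*v + It = 0 over
        # all pixels and solve for a single (u, v) in the least-squares sense.
        # Pixels with little contrast have small gradients and therefore
        # contribute little, without any explicit confidence weighting.
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # (N, 2)
        b = -It.ravel()                                  # (N,)
        (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
        return u, v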
Abstract:
Stereopsis and motion parallax are two methods for recovering three dimensional shape. Theoretical analyses of each method show that neither alone can recover rigid 3D shapes correctly unless other information, such as perspective, is included. The solutions for recovering rigid structure from motion have a reflection ambiguity; the depth scale of the stereoscopic solution will not be known unless the fixation distance is specified in units of interpupil separation. (Hence the configuration will appear distorted.) However, the correct configuration and the disposition of a rigid 3D shape can be recovered if stereopsis and motion are integrated, for then a unique solution follows from a set of linear equations. The correct interpretation requires only three points and two stereo views.
Abstract:
This thesis examines a complete design framework for a real-time, autonomous system with specialized VLSI hardware for computing 3-D camera motion. In the proposed architecture, the first step is to determine point correspondences between two images. Two processors, a CCD array edge detector and a mixed analog/digital binary block correlator, are proposed for this task. The report is divided into three parts. Part I covers the algorithmic analysis; Part II describes the design and test of a 32×32 CCD edge detector fabricated through MOSIS; and Part III compares the design of the mixed analog/digital correlator to a fully digital implementation.
Abstract:
This thesis shows how to detect boundaries on the basis of motion information alone. The detection is performed in two stages: (i) the local estimation of motion discontinuities and of the visual flow field; (ii) the extraction of complete boundaries belonging to differently moving objects. For the first stage, three new methods are presented: the "Bimodality Tests," the "Bi-distribution Test," and the "Dynamic Occlusion Method." The second stage consists of applying the "Structural Saliency Method" of Sha'ashua and Ullman to extract complete and unique boundaries from the output of the first stage. The developed methods can successfully segment complex motion sequences.
Abstract:
Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has received little previous attention. We present a framework for computing motion strategies that are guaranteed to succeed in the presence of all three kinds of uncertainty. The motion strategies comprise sensor-based gross motions, compliant motions, and simple pushing motions.
Abstract:
This thesis presents a new high-level robot programming system. The programming system can be used to construct strategies consisting of compliant motions, in which a moving robot slides along obstacles in its environment. The programming system is referred to as high level because the user is spared many robot-level details, such as the specification of conditional tests, motion termination conditions, and compliance parameters. Instead, the user specifies task-level information, including a geometric model of the robot and its environment. The user may also have to specify some suggested motions. There are two main system components. The first component is an interactive teaching system which accepts motion commands from a user and attempts to build a compliant motion strategy using the specified motions as building blocks. The second component is an autonomous compliant motion planner, which is intended to spare the user from dealing with "simple" problems. The planner simplifies the representation of the environment by decomposing the configuration space of the robot into a finite state space, whose states are vertices, edges, faces, and combinations thereof. States are linked to each other by arcs, which represent reliable compliant motions. Using best-first search, states are expanded until a strategy is found from the start state to a goal state. This component represents one of the first implemented compliant motion planners. The programming system has been implemented on a Symbolics 3600 computer, and tested on several examples. One of the resulting compliant motion strategies was successfully executed on an IBM 7565 robot manipulator.
Abstract:
Reasoning about motion is an important part of our commonsense knowledge, involving fluent spatial reasoning. This work studies the qualitative and geometric knowledge required to reason in a world that consists of balls moving through space constrained by collisions with surfaces, including dissipative forces and multiple moving objects. An analog geometry representation serves the program as a diagram, allowing many spatial questions to be answered by numeric calculation. It also provides the foundation for the construction and use of place vocabulary, the symbolic descriptions of space required to do qualitative reasoning about motion in the domain. The actual motion of a ball is described as a network consisting of descriptions of qualitatively distinct types of motion. Implementing the elements of these networks in a constraint language allows the same elements to be used for both analysis and simulation of motion. A qualitative description of the actual motion is also used to check the consistency of assumptions about motion. A process of qualitative simulation is used to describe the kinds of motion possible from some state. The ambiguity inherent in such a description can be reduced by assumptions about physical properties of the ball or assumptions about its motion. Each assumption directly rules out some kinds of motion, but other knowledge is required to determine the indirect consequences of making these assumptions. Some of this knowledge is domain dependent and relies heavily on spatial descriptions.
Abstract:
What are the characteristics of the process by which an intent is transformed into a plan and then a program? How is a program debugged? This paper analyzes these questions in the context of understanding simple turtle programs. To understand and debug a program, a description of its intent is required. For turtle programs, this is a model of the desired geometric picture. A picture language is provided for this purpose. Annotation is necessary for documenting the performance of a program in such a way that the system can examine the procedure's behavior as well as consider hypothetical lines of development due to tentative debugging edits. A descriptive framework representing both causality and teleology is developed. To understand the relation between program and model, the plan must be known. The plan is a description of the methodology for accomplishing the model. Concepts are explicated for translating the global intent of a declarative model into the local imperative code of a program. Given the plan, model, and program, the system can interpret the picture and recognize inconsistencies. The description of the discrepancies between the picture actually produced by the program and the intended scene is the input to a debugging system. Repair of the program is based on a combination of general debugging techniques and specific fixing knowledge associated with the geometric model primitives. In both analyzing the plan and repairing the bugs, the system exhibits an interesting style of analysis. It is capable of debugging itself and reformulating its analysis of a plan or bug in response to self-criticism. In this fashion, it can qualitatively reformulate its theory of the program or error to account for surprises or anomalies.
Abstract:
The motion planning problem is of central importance to the fields of robotics, spatial planning, and automated design. In robotics we are interested in the automatic synthesis of robot motions, given high-level specifications of tasks and geometric models of the robot and obstacles. The Mover's problem is to find a continuous, collision-free path for a moving object through an environment containing obstacles. We present an implemented algorithm for the classical formulation of the three-dimensional Mover's problem: given an arbitrary rigid polyhedral moving object P with three translational and three rotational degrees of freedom, find a continuous, collision-free path taking P from some initial configuration to a desired goal configuration. This thesis describes the first known implementation of a complete algorithm (at a given resolution) for the full six-degree-of-freedom Mover's problem. The algorithm transforms the six-degree-of-freedom planning problem into a point navigation problem in a six-dimensional configuration space (called C-Space). The C-Space obstacles, which characterize the physically unachievable configurations, are directly represented by six-dimensional manifolds whose boundaries are five-dimensional C-surfaces. By characterizing these surfaces and their intersections, collision-free paths may be found by the closure of three operators which (i) slide along 5-dimensional level C-surfaces; (ii) slide along 1- to 4-dimensional intersections of level C-surfaces; and (iii) jump between 6-dimensional obstacles. Implementing the point navigation operators requires solving fundamental representational and algorithmic questions: we will derive new structural properties of the C-Space constraints and show how to construct and represent C-surfaces and their intersection manifolds. A definition and new theoretical results are presented for a six-dimensional C-Space extension of the generalized Voronoi diagram, called the C-Voronoi diagram, whose structure we relate to the C-surface intersection manifolds. The representations and algorithms we develop impact many geometric planning problems, and extend to Cartesian manipulators with six degrees of freedom.
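As a much-reduced illustration of the configuration-space idea only (not the C-surface machinery of this thesis), the sketch below plans for a translating point robot on a discretized 2-D configuration space using breadth-first search; the grid, obstacles, and search strategy are all stand-ins.

    from collections import deque

    def plan(grid, start, goal):
        # grid[r][c] == 1 marks a C-space obstacle cell; breadth-first search
        # over free cells returns a collision-free path from start to goal.
        rows, cols = len(grid), len(grid[0])
        parent = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in parent):
                    parent[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None   # no collision-free path at this resolution

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
            [0, 1, 1, 0]]
    print(plan(grid, (0, 0), (3, 3)))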
Abstract:
Robots must successfully plan and execute tasks in the presence of uncertainty. Uncertainty arises from errors in modeling, sensing, and control. Planning in the presence of uncertainty constitutes one facet of the general motion planning problem in robotics. This problem is concerned with the automatic synthesis of motion strategies from high-level task specifications and geometric models of environments. In order to develop successful motion strategies, it is necessary to understand the effect of uncertainty on the geometry of object interactions. Object interactions, both static and dynamic, may be represented in geometrical terms. This thesis investigates geometrical tools for modeling and overcoming uncertainty. The thesis describes an algorithm for computing backprojections of desired task configurations. Task goals and motion states are specified in terms of a moving object's configuration space. Backprojections specify regions in configuration space from which particular motions are guaranteed to accomplish a desired task. The backprojection algorithm considers surfaces in configuration space that facilitate sliding towards the goal, while avoiding surfaces on which motions may prematurely halt. In executing a motion from a backprojection region, a plan executor must be able to recognize that a desired task has been accomplished. Since sensors are subject to uncertainty, recognition of task success is not always possible. The thesis considers the structure of backprojection regions and of task goals that ensures goal recognizability. The thesis also develops a representation of friction in configuration space, in terms of a friction cone analogous to the real-space friction cone. The friction cone provides the backprojection algorithm with a geometrical tool for determining points at which motions may halt.