10 results for Concurrent localization and mapping

at Massachusetts Institute of Technology


Relevance:

100.00%

Publisher:

Abstract:

A method for localization and positioning in an indoor environment is presented. The method is based on representing the scene as a set of 2D views and predicting the appearances of novel views by linear combinations of the model views. The method is accurate under weak perspective projection. Analysis of this projection, as well as experimental results, demonstrates that in many cases it is sufficient to accurately describe the scene. When the weak perspective approximation is invalid, an iterative solution can be employed to account for the perspective distortions. A simple algorithm for repositioning, the task of returning to a previously visited position defined by a single view, is derived from this method.
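
A minimal sketch of the linear-combination-of-views idea described above (hypothetical code, not the paper's implementation): under weak perspective, each image coordinate of a scene point in a novel view can be written as a linear combination of its coordinates in a few stored model views, so the coefficients can be recovered from a handful of correspondences and then used to predict where every other point should appear. Function names and shapes below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: recover linear-combination coefficients relating stored model
# views to a novel view, then predict the novel appearance of all points.

def fit_view_coefficients(model_coords, novel_coords):
    """model_coords: (k, n) array; row i holds one image coordinate (e.g. x)
    of n matched points in model view i.  novel_coords: (n,) array with the
    same coordinate of the same points in the novel view.
    Returns k+1 coefficients (including a constant term), via least squares."""
    A = np.vstack([model_coords, np.ones(model_coords.shape[1])]).T  # (n, k+1)
    coeffs, *_ = np.linalg.lstsq(A, novel_coords, rcond=None)
    return coeffs

def predict_novel_coords(model_coords, coeffs):
    """Predict the same coordinate for every point in the novel view."""
    A = np.vstack([model_coords, np.ones(model_coords.shape[1])]).T
    return A @ coeffs
```

Localization then amounts to checking how well the predicted coordinates match what is actually observed in the current view.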

Relevance:

100.00%

Publisher:

Abstract:

This paper explores the concept of Value Stream Analysis and Mapping (VSA/M) as applied to Product Development (PD) efforts. Value Stream Analysis and Mapping is a method of business process improvement whose application began in the manufacturing community; PD efforts provide a different setting for its use. Site visits were made to nine major U.S. aerospace organizations. Interviews, discussions, and participatory events were used to gather data on (1) the sophistication of the tools used in PD process improvement efforts, (2) the lean context of the use of the tools, and (3) the success of the efforts. It was found that all three factors were strongly correlated, suggesting that success depends on both good tools and a lean context. Finally, a general VSA/M method for PD activities is proposed. The method uses modified process mapping tools to analyze and improve the process.

Relevance:

100.00%

Publisher:

Abstract:

We consider the problem of matching model and sensory data features in the presence of geometric uncertainty, for the purpose of object localization and identification. The problem is to construct sets of model feature and sensory data feature pairs that are geometrically consistent given that there is uncertainty in the geometry of the sensory data features. If there is no geometric uncertainty, polynomial-time algorithms are possible for feature matching, yet these approaches can fail when there is uncertainty in the geometry of data features. Existing matching and recognition techniques which account for the geometric uncertainty in features either cannot guarantee finding a correct solution, or can construct geometrically consistent sets of feature pairs yet have worst-case exponential complexity in terms of the number of features. The major new contribution of this work is to demonstrate a polynomial-time algorithm for constructing sets of geometrically consistent feature pairs given uncertainty in the geometry of the data features. We show that under a certain model of geometric uncertainty the feature matching problem in the presence of uncertainty is of polynomial complexity. This has important theoretical implications by demonstrating an upper bound on the complexity of the matching problem, and by offering insight into the nature of the matching problem itself. These insights prove useful in the solution to the matching problem in higher dimensional cases as well, such as matching three-dimensional models to either two- or three-dimensional sensory data. The approach is based on an analysis of the space of feasible transformation parameters. This paper outlines the mathematical basis for the method, and describes the implementation of an algorithm for the procedure. Experiments demonstrating the method are reported.
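
As a hedged illustration of the parameter-space view (a simplified toy case, not the paper's algorithm, which handles richer transformations): for 2D translation-only matching with a bounded sensing error, each model/data pairing is consistent with a box of translations, and the largest geometrically consistent set of pairings corresponds to the deepest point in the arrangement of those boxes, which can be found in polynomial time.

```python
import numpy as np

# Toy sketch: translation-only matching with an L-infinity error bound eps.
# Pairing (model point m, data point d) is consistent exactly with the
# translations in the box  d - m +/- eps.  One-to-one assignment between
# model and data features is NOT enforced here, a deliberate simplification.

def max_consistent_pairings(model_pts, data_pts, eps):
    model_pts = np.asarray(model_pts, float)
    data_pts = np.asarray(data_pts, float)
    boxes = [(d[0] - m[0] - eps, d[0] - m[0] + eps,
              d[1] - m[1] - eps, d[1] - m[1] + eps)
             for m in model_pts for d in data_pts]
    # The maximum depth of an arrangement of closed boxes is attained at some
    # (left-edge x, bottom-edge y) combination, so testing those suffices.
    best = 0
    for x0, _, _, _ in boxes:
        for _, _, y0, _ in boxes:
            depth = sum(1 for (a, b, c, e) in boxes
                        if a <= x0 <= b and c <= y0 <= e)
            best = max(best, depth)
    return best
```

The sketch runs in time polynomial in the number of features, mirroring the complexity claim above, though the actual algorithm in the paper is considerably more refined.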

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes some aspects of a computer system for doing medical diagnosis in the specialized field of kidney disease. Because such a system faces the spectre of combinatorial explosion, this discussion concentrates on heuristics which control the number of concurrent hypotheses and on efficient "compiled" representations of medical knowledge. In particular, the differential diagnosis of hematuria (blood in the urine) is discussed in detail. A protocol of a simulated doctor/patient interaction is presented and analyzed to determine the crucial structures and processes involved in the diagnosis procedure. The data structure proposed for representing medical information revolves around elementary hypotheses which are activated when certain findings are present. The process of disposing of findings, activating hypotheses, evaluating hypotheses locally, and combining hypotheses globally is examined for its heuristic implications. The thesis attempts to fit the problem of medical diagnosis into the framework of other Artificial Intelligence problems and paradigms, and in particular explores the notions of pure search vs. heuristic methods, linearity and interaction, local vs. global knowledge, and the structure of hypotheses within the world of kidney disease.
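
A minimal, hypothetical sketch of the finding-driven control structure described above (the trigger findings, hypothesis names, and scores are illustrative assumptions, not the thesis's knowledge representation): findings activate elementary hypotheses, each active hypothesis is scored locally against the findings, and only a bounded number of the best candidates are retained, which is one way to keep the number of concurrent hypotheses from exploding.

```python
# Hedged sketch of finding-driven hypothesis activation with a bounded agenda.
# Hypotheses and trigger findings below are made up for illustration.

HYPOTHESES = {
    "glomerulonephritis": {"triggers": {"hematuria", "proteinuria"}},
    "kidney_stone":       {"triggers": {"hematuria", "flank_pain"}},
    "urinary_infection":  {"triggers": {"hematuria", "dysuria", "fever"}},
}

def activate_and_rank(findings, max_active=2):
    """Activate hypotheses whose trigger findings overlap the observed
    findings, score each locally by the fraction of its triggers explained,
    and keep only the best few (bounding the concurrent hypotheses)."""
    scored = []
    for name, h in HYPOTHESES.items():
        overlap = h["triggers"] & findings
        if overlap:
            scored.append((len(overlap) / len(h["triggers"]), name))
    return sorted(scored, reverse=True)[:max_active]

print(activate_and_rank({"hematuria", "flank_pain"}))
# e.g. [(1.0, 'kidney_stone'), (0.5, 'glomerulonephritis')]
```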

Relevance:

100.00%

Publisher:

Abstract:

Concurrent Smalltalk is the primary language used for programming the J-Machine, a MIMD message-passing computer containing thousands of 36-bit processors connected by a very low latency network. This thesis describes Concurrent Smalltalk and its implementation on the J-Machine in detail, including the Optimist II global optimizing compiler and the Cosmos fine-grain parallel operating system. Quantitative and qualitative results are presented.

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents a perceptual system for a humanoid robot that integrates abilities such as object localization and recognition with the deeper developmental machinery required to forge those competences out of raw physical experiences. It shows that a robotic platform can build up and maintain a system for object localization, segmentation, and recognition, starting from very little. What the robot starts with is a direct solution to achieving figure/ground separation: it simply 'pokes around' in a region of visual ambiguity and watches what happens. If the arm passes through an area, that area is recognized as free space. If the arm collides with an object, causing it to move, the robot can use that motion to segment the object from the background. Once the robot can acquire reliable segmented views of objects, it learns from them, and from then on recognizes and segments those objects without further contact. Both low-level and high-level visual features can also be learned in this way, and examples are presented for both: orientation detection and affordance recognition, respectively. The motivation for this work is simple. Training on large corpora of annotated real-world data has proven crucial for creating robust solutions to perceptual problems such as speech recognition and face detection. But the powerful tools used during training of such systems are typically stripped away at deployment. Ideally they should remain, particularly for unstable tasks such as object detection, where the set of objects needed in a task tomorrow might be different from the set of objects needed today. The key limiting factor is access to training data, but as this thesis shows, that need not be a problem on a robotic platform that can actively probe its environment, and carry out experiments to resolve ambiguity. This work is an instance of a general approach to learning a new perceptual judgment: find special situations in which the perceptual judgment is easy and study these situations to find correlated features that can be observed more generally.
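
A hedged sketch of the poke-and-watch idea (hypothetical code, not the thesis's vision system): frames captured before and after the arm makes contact differ mostly where the object moved, so simple frame differencing already yields a rough figure/ground mask that can seed later segmentation and recognition.

```python
import numpy as np

# Hedged sketch: turn the motion caused by poking an object into a rough
# segmentation mask via frame differencing.  Threshold value is illustrative.

def motion_segmentation(frame_before, frame_after, threshold=25):
    """frame_before/frame_after: grayscale images as (H, W) uint8 arrays
    taken just before and just after the arm makes contact.
    Returns a boolean mask of pixels that changed appreciably (the moved object)."""
    diff = np.abs(frame_after.astype(np.int16) - frame_before.astype(np.int16))
    return diff > threshold
```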

Relevance:

100.00%

Publisher:

Abstract:

This report outlines the problem of intelligent failure recovery in a problem-solver for electrical design. We want our problem solver to learn as much as it can from its mistakes. Thus we cast the engineering design process in terms of Problem Solving by Debugging Almost-Right Plans (PSBDARP), a paradigm for automatic problem solving based on the belief that the creation and removal of "bugs" is an unavoidable part of the process of solving a complex problem. The process of localization and removal of bugs called for by the PSBDARP theory requires an approach to engineering analysis in which every result has a justification describing the exact set of assumptions it depends upon. We have developed a program based on Analysis by Propagation of Constraints which can explain the basis of its deductions. In addition to being useful to a PSBDARP designer, these justifications are used in Dependency-Directed Backtracking to limit the combinatorial search in the analysis routines. Although the research we will describe is explicitly about electrical circuits, we believe that similar principles and methods are employed by other kinds of engineers, including computer programmers.
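
A minimal hedged sketch of the justification bookkeeping described above (hypothetical, not the report's program): every deduced value records the set of assumptions it depends on, so that when two deductions conflict, dependency-directed backtracking can blame exactly that set of assumptions rather than blindly undoing recent choices.

```python
# Hedged sketch: constraint propagation where each deduced value carries a
# justification (the set of assumptions it rests on).  A contradiction yields
# a "nogood" set that dependency-directed backtracking can use to prune search.

class Cell:
    def __init__(self, name):
        self.name = name
        self.value = None
        self.justification = frozenset()   # assumptions this value depends on

    def set(self, value, justification):
        if self.value is not None and self.value != value:
            # Contradiction: blame exactly the assumptions involved.
            nogood = self.justification | frozenset(justification)
            raise ValueError(f"contradiction at {self.name}; retract one of {sorted(nogood)}")
        self.value, self.justification = value, frozenset(justification)

# Example: Ohm's law v = i * r propagated from two assumed values.
v, i, r = Cell("v"), Cell("i"), Cell("r")
i.set(2.0, {"assume:i"})
r.set(3.0, {"assume:r"})
v.set(i.value * r.value, i.justification | r.justification)  # v = 6.0, justified by both assumptions
```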

Relevance:

100.00%

Publisher:

Abstract:

Fueled by ever-growing genomic information and rapid developments in proteomics, the large-scale analysis of proteins and the mapping of their functional roles have become one of the most important disciplines for characterizing complex cell function. For building functional linkages between biomolecules, and for providing insight into the mechanisms of biological processes, the last decade witnessed the exploration of combinatorial and chip technologies for the detection of biomolecules in a high-throughput, spatially addressable fashion. Among the various techniques developed, progress in protein chip technology has been rapid. Recently we demonstrated a new platform called the "Spatially addressable protein array" (SAPA) to profile ligand-receptor interactions. To optimize the platform, the present study investigated parameters such as the surface chemistry and the role of additives in achieving high-density, high-throughput detection with minimal nonspecific protein adsorption. In summary, the present poster addresses some of the critical challenges in protein microarray technology and the process of fine-tuning required to achieve an optimal system for solving real biological problems.

Relevance:

100.00%

Publisher:

Abstract:

While protein microarray technology has demonstrated its usefulness for large-scale, high-throughput proteome profiling, the performance of antibody/antigen microarrays has been only moderately productive. Immobilization of either the capture antibodies or the protein samples on solid supports has severe drawbacks: denaturation of the immobilized proteins as well as inconsistent orientation of antibodies/ligands on the arrays can lead to erroneous results. This has prompted a number of studies to address these challenges by immobilizing proteins on biocompatible surfaces, which has met with limited success. Our strategy relates to a multiplexed, sensitive, and high-throughput method for the screening and quantification of intracellular signalling proteins from a complex mixture of proteins. Each signalling protein to be monitored has its capture moiety linked to a specific oligo 'tag'. The array involves the oligonucleotide hybridization-directed localization and identification of different signalling proteins simultaneously, in a rapid and easy manner. Antibodies have been used as the capture moieties for specific identification of each signalling protein. The method involves covalently partnering each antibody/protein molecule with a unique DNA or DNA-derivative oligonucleotide tag that directs the antibody to a unique site on the microarray through specific hybridization with a complementary tag-probe on the array. Particular surface modifications and optimal conditions allowed a high signal-to-noise ratio, which is essential to the success of this approach.