20 results for Erskine, John, 1721-1803.

at Massachusetts Institute of Technology


Relevance: 20.00%

Abstract:

The Message-Driven Processor is a node of a large-scale multiprocessor being developed by the Concurrent VLSI Architecture Group. It is intended to support fine-grained, message-passing parallel computation. It contains several novel architectural features, such as a low-latency network interface, extensive type-checking hardware, and on-chip memory that can be used as an associative lookup table. This document is a programmer's guide to the MDP. It describes the processor's register architecture, instruction set, and the data types supported by the processor. It also details the MDP's message-sending and exception-handling facilities.
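
As a rough illustration of one of these features, the sketch below (Python, with invented names; not the MDP's actual instruction set or hardware interface) shows how an associative lookup table can route incoming messages to handlers, the role the abstract suggests for the on-chip associative memory.

```python
# Hypothetical sketch: dispatching messages through an associative
# lookup table, in the spirit of (but not identical to) the MDP's
# on-chip associative memory. All names here are illustrative.

class AssociativeTable:
    """Key -> value store standing in for content-addressable memory."""
    def __init__(self):
        self._entries = {}

    def install(self, key, handler):
        self._entries[key] = handler

    def lookup(self, key):
        return self._entries.get(key)  # hardware would probe all entries in parallel

def dispatch(table, message):
    """Route an arriving message to its handler by a (selector, type) key."""
    handler = table.lookup((message["selector"], type(message["payload"])))
    if handler is None:
        raise LookupError("no handler installed; a real node would raise a fault")
    return handler(message["payload"])

table = AssociativeTable()
table.install(("add", int), lambda n: n + 1)
print(dispatch(table, {"selector": "add", "payload": 41}))  # -> 42
```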

Relevance: 20.00%

Abstract:

We formulate and interpret several multi-modal registration methods in the context of a unified statistical and information-theoretic framework. A unified interpretation clarifies the implicit assumptions of each method, yielding a better understanding of their relative strengths and weaknesses. Additionally, we discuss a generative statistical model from which we derive a novel analysis tool, the "auto-information function", as a means of assessing and exploiting the common spatial dependencies inherent in multi-modal imagery. We analytically derive useful properties of the "auto-information function" as well as verify them empirically on multi-modal imagery. Among its useful aspects are that it can be computed for each imaging modality independently and that it allows one to decompose the search space of registration problems.
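
For readers unfamiliar with the information-theoretic ingredient shared by these methods, here is a minimal Python sketch of estimating mutual information from a joint intensity histogram, the kind of statistic such registration methods maximize over candidate transformations. It is a generic illustration, not the paper's auto-information function.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate I(A;B) from a joint intensity histogram of two equally
    sized images. A registration method would maximize this quantity
    over candidate spatial transformations."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                    # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)         # marginal of A
    p_b = p_ab.sum(axis=0, keepdims=True)         # marginal of B
    nz = p_ab > 0                                 # avoid log(0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.sqrt(a)  # a deterministic "second modality" of the same scene
# Aligned modalities share much more information than unrelated images:
print(mutual_information(a, b) > mutual_information(a, rng.random((64, 64))))
```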

Relevance: 20.00%

Abstract:

In this thesis, two different sets of experiments are described. The first is an exploration of the microscopic superfluidity of dilute gaseous Bose-Einstein condensates. The second set of experiments was performed using transported condensates in a new BEC apparatus. Superfluidity was probed by moving impurities through a trapped condensate. The impurities were created using an optical Raman transition, which transferred a small fraction of the atoms into an untrapped hyperfine state. A dramatic reduction in the collisions between the moving impurities and the condensate was observed when the velocity of the impurities was close to the speed of sound of the condensate. This reduction was attributed to the superfluid properties of a BEC. In addition, we observed an increase in the collisional density as the number of impurity atoms increased. This enhancement is an indication of bosonic stimulation by the occupied final states. This stimulation was observed both at small and large velocities relative to the speed of sound. A theoretical calculation of the effect of finite temperature indicated that the collision rate should be enhanced at small velocities due to thermal excitations; however, the current experiments were insensitive to this effect. Finally, the factor-of-two difference between the collision rates for indistinguishable and distinguishable atoms was confirmed.

A new BEC apparatus that can transport condensates using optical tweezers was constructed. Condensates containing 10-15 million sodium atoms were produced in 20 s using conventional BEC production techniques. These condensates were then transferred into an optical trap that was translated from the ‘production chamber’ into a separate vacuum chamber: the ‘science chamber’. Typically, we transferred 2-3 million condensed atoms in less than 2 s. This transport technique avoids the optical and mechanical constraints of conventional condensate experiments and allows for the possibility of novel experiments. In the first experiments using transported BECs, we loaded condensed atoms from the optical tweezers into both macroscopic and miniaturized magnetic traps. Using microfabricated wires on a silicon chip, we observed excitation-less propagation of a BEC in a magnetic waveguide. The condensates fragmented when brought very close to the wire surface, indicating that imperfections in the fabrication process might limit future experiments. Finally, we generated a continuous BEC source by periodically replenishing a condensate held in an optical reservoir trap with fresh condensates delivered using optical tweezers. More than a million condensed atoms were always present in the continuous source, raising the possibility of realizing a truly continuous atom laser.

Relevance: 20.00%

Abstract:

This report addresses the problem of fault tolerance to system failures for database systems that are to run on highly concurrent computers. It assumes that, in general, an application may have a wide distribution in the lifetimes of its transactions. Logging remains the method of choice for ensuring fault tolerance. Generational garbage collection techniques manage the limited disk space reserved for log information; this approach does not require periodic checkpoints and is well suited to applications with a broad range of transaction lifetimes. An arbitrarily large collection of parallel log streams provides the necessary disk bandwidth.
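
A toy sketch of the garbage-collection idea, under our own simplifying assumptions rather than the report's actual design: log records live in segments, the oldest segment is reclaimed once scanned, and records of still-live transactions are copied forward, so long-lived transactions never force a global checkpoint.

```python
# Toy sketch (not the report's design): reclaim the oldest log segment
# by copying records of still-live transactions forward, so space is
# recycled without periodic checkpoints.

class LogSegment:
    def __init__(self):
        self.records = []  # list of (txn_id, payload)

class GenerationalLog:
    def __init__(self):
        self.segments = [LogSegment()]
        self.live_txns = set()

    def append(self, txn_id, payload):
        self.live_txns.add(txn_id)
        self.segments[-1].records.append((txn_id, payload))

    def commit(self, txn_id):
        self.live_txns.discard(txn_id)

    def collect_oldest(self):
        """Copy records of still-live transactions into the newest
        segment, then free the oldest segment's space."""
        old = self.segments.pop(0)
        survivors = [(t, p) for (t, p) in old.records if t in self.live_txns]
        self.segments[-1].records.extend(survivors)

log = GenerationalLog()
log.append(1, "update A"); log.append(2, "update B")
log.segments.append(LogSegment())   # start a new generation
log.commit(1)
log.collect_oldest()
print(log.segments[-1].records)     # only txn 2's record survives
```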

Relevance: 20.00%

Abstract:

This report documents our work in exploring active balance for dynamic legged systems for the period from September 1985 through September 1989. The purpose of this research is to build a foundation of knowledge that can lead both to the construction of useful legged vehicles and to a better understanding of animal locomotion. In this report we focus on the control of biped locomotion, the use of terrain footholds, running at high speed, biped gymnastics, symmetry in running, and the mechanical design of articulated legs.

Relevance: 20.00%

Abstract:

Reconstructing a surface from sparse sensory data is a well-known problem in computer vision. Early vision modules typically supply sparse depth, orientation, and discontinuity information. The surface reconstruction module incorporates these sparse and possibly conflicting measurements of a surface into a consistent, dense depth map. The coupled depth/slope model developed here provides a novel computational solution to the surface reconstruction problem. This method explicitly computes dense slope representations as well as dense depth representations. This marked change from previous surface reconstruction algorithms allows a natural integration of orientation constraints into the surface description, a feature not easily incorporated into earlier algorithms. In addition, the coupled depth/slope model generalizes to allow for varying amounts of smoothness at different locations on the surface. This computational model helps conceptualize the problem and leads to two possible implementations: analog and digital. The model can be implemented as an electrical or biological analog network, since the only computations required at each locally connected node are averages, additions, and subtractions. A parallel digital algorithm can be derived by using finite difference approximations. The resulting system of coupled equations can be solved iteratively on a mesh-of-processors computer, such as the Connection Machine. Furthermore, concurrent multi-grid methods are designed to speed the convergence of this digital algorithm.
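
A minimal digital reading of this scheme can be sketched as an iterative finite-difference relaxation; the update rule and parameters below are illustrative stand-ins for the thesis's equations, not the exact scheme.

```python
import numpy as np

def reconstruct(depth_obs, mask, iters=2000, lam=0.1):
    """Toy coupled depth/slope relaxation (illustrative, not the
    thesis's exact scheme). Slope maps p, q track the depth gradient,
    and each depth value relaxes toward its neighbors' predictions
    (e.g. z[x-1] + p[x-1]) while sparse observations stay clamped.
    Periodic boundaries via np.roll, for brevity."""
    z = np.zeros_like(depth_obs)
    p = np.zeros_like(z)   # dz/dx estimate
    q = np.zeros_like(z)   # dz/dy estimate
    for _ in range(iters):
        p += lam * (np.gradient(z, axis=1) - p)   # pull slopes toward gradient
        q += lam * (np.gradient(z, axis=0) - q)
        z = 0.25 * ((np.roll(z, 1, 1) + np.roll(p, 1, 1)) +   # left neighbor + its slope
                    (np.roll(z, -1, 1) - p) +                 # right neighbor - our slope
                    (np.roll(z, 1, 0) + np.roll(q, 1, 0)) +   # lower neighbor + its slope
                    (np.roll(z, -1, 0) - q))                  # upper neighbor - our slope
        z[mask] = depth_obs[mask]   # clamp sparse depth measurements
    return z

obs = np.zeros((32, 32))
m = np.zeros((32, 32), dtype=bool)
obs[8, 8], obs[24, 24] = 1.0, -1.0
m[8, 8] = m[24, 24] = True
z = reconstruct(obs, m)
print(round(float(z[16, 16]), 3))   # smoothly interpolated depth between the points
```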

Relevance: 20.00%

Abstract:

Handwriting production is viewed as a constrained modulation of an underlying oscillatory process. Coupled oscillations in horizontal and vertical directions produce letter forms, and when superimposed on a constant-velocity rightward horizontal sweep, result in spatially separated letters. Modulation of the vertical oscillation is responsible for control of letter height, either through altering the frequency or altering the acceleration amplitude. Modulation of the horizontal oscillation is responsible for control of corner shape, through altering phase or amplitude. The vertical-velocity zero crossing in the velocity space diagram is important from the standpoint of control. Changing the horizontal velocity value at this zero crossing controls corner shape, and such changes can be effected through modifying the horizontal oscillation amplitude and phase. Changing the slope at this zero crossing controls writing slant; this slope depends on the horizontal and vertical velocity amplitudes and on the relative phase difference. Letter height modulation is also best applied at the vertical-velocity zero crossing to preserve an even baseline. The corner shape and slant constraints completely determine the amplitude and phase relations between the two oscillations; under these constraints, interletter separation is not an independent parameter. This theory applies generally to a number of acceleration oscillation patterns, such as sinusoidal, rectangular, and trapezoidal oscillations. The oscillation theory also provides an explanation for how handwriting might degenerate with speed.

An implementation of the theory in the context of the spring muscle model is developed. Here sinusoidal oscillations arise from purely mechanical sources; orthogonal antagonistic spring pairs generate particular cycloids depending on the initial conditions. Modulating between cycloids can be achieved by changing the spring zero settings at the appropriate times. Frequency can be modulated either by shifting between coactivation and alternating activation of the antagonistic springs or by presuming springs with variable spring constants. An acceleration- and position-measuring apparatus was developed for measurements of human handwriting. Measurements of human writing are consistent with the oscillation theory. It is shown that the minimum-energy movement for the spring muscle is bang-coast-bang; for certain parameter values a singular arc solution can be shown to be minimizing. Experimental measurements, however, indicate that handwriting is not a minimum-energy movement.
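
The generative core of the theory admits a compact sketch: two sinusoidal velocity oscillations with a relative phase, plus a constant rightward sweep, integrated into a pen trajectory. The frequency, amplitudes, and phase below are illustrative, not values fitted to human data.

```python
import numpy as np

# Illustrative parameters, not fitted to human writing: a vertical
# velocity oscillation and a horizontal one with a relative phase,
# superimposed on a constant rightward sweep.
t = np.linspace(0.0, 2.0, 2000)     # time (s)
f = 5.0                              # oscillation frequency (Hz)
sweep = 2.0                          # constant rightward sweep velocity
phase = np.pi / 2                    # horizontal-vertical phase difference

vx = 1.0 * np.cos(2 * np.pi * f * t + phase) + sweep
vy = 1.5 * np.cos(2 * np.pi * f * t)

# Integrate the velocities to obtain the pen trajectory; modulating
# `phase` reshapes corners, and scaling vy's amplitude at its zero
# crossings would change letter height without disturbing the baseline.
dt = t[1] - t[0]
x, y = np.cumsum(vx) * dt, np.cumsum(vy) * dt
print(f"trace spans x: 0..{x[-1]:.1f}, y: {y.min():.2f}..{y.max():.2f}")
```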

Relevance: 20.00%

Abstract:

An approach to shape description, based on prototype modification and generalized cylinders, has been developed and applied to two object domains, pottery and polyhedra: (1) A program describes and identifies pottery from vase outlines entered as lists of points. The descriptions have been modeled after descriptions by archeologists, with the result that identifications made by the program are remarkably consistent with those of the archeologists. It has been possible to quantify their shape descriptors, which are everyday terms in our language applied to many sorts of objects besides pottery, so that the resulting descriptions seem very natural. (2) New parsing strategies for polyhedra overcome some limitations of previous work. A special feature is that the processes of parsing and identification are carried out simultaneously.
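
One ingredient of such a program can be sketched: extracting simple, everyday-language shape descriptors from a vase outline given as a list of points. The descriptor names below are hypothetical, not the program's actual vocabulary.

```python
# Hypothetical descriptor extraction from a vase profile given as a
# list of (x, y) points from base (y = 0) upward; not the thesis's
# actual descriptor vocabulary.

def describe_outline(points):
    ys = [y for _, y in points]
    height = max(ys) - min(ys)
    widest = max(points, key=lambda p: p[0])   # point of maximum radius
    desc = {
        "height": height,
        "max_width": 2 * widest[0],                   # profile is a half-outline
        "widest_at": (widest[1] - min(ys)) / height,  # 0 = base, 1 = rim
    }
    # An everyday-language label, in the spirit of the archeologists'
    # terms the abstract mentions.
    desc["shape_word"] = "squat" if desc["max_width"] > height else "tall"
    return desc

outline = [(2.0, 0.0), (4.0, 2.0), (5.0, 4.0), (3.5, 7.0), (3.0, 8.0)]
print(describe_outline(outline))
```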

Relevance: 20.00%

Abstract:

The problem of detecting intensity changes in images is canonical in vision. Edge detection operators are typically designed to optimally estimate the first or second derivative over some (usually small) support. Other criteria, such as output signal-to-noise ratio or bandwidth, have also been argued for. This thesis is an attempt to formulate a set of edge detection criteria that capture as directly as possible the desirable properties of an edge operator. Variational techniques are used to find a solution over the space of all linear shift-invariant operators. The first criterion is that the detector have a low probability of error, i.e., of failing to mark edges or falsely marking non-edges. The second is that the marked points should be as close as possible to the centre of the true edge. The third criterion is that there should be a low probability of more than one response to a single edge. The technique is used to find optimal operators for step edges and for extended impulse profiles (ridges or valleys in two dimensions). The extension of the one-dimensional operators to two dimensions is then discussed. The result is a set of operators of varying width, length, and orientation. The problem of combining these outputs into a single description is discussed, and a set of heuristics for the integration is given.
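
The flavor of the resulting operators can be conveyed with a one-dimensional sketch: a derivative-of-Gaussian filter (close in shape to the optimal step-edge operator) followed by thresholding and non-maximum suppression, which addresses the localization and single-response criteria. This is a simplified illustration, not the thesis's derived operator.

```python
import numpy as np

def detect_edges_1d(signal, sigma=2.0, thresh=0.1):
    """Convolve with a derivative-of-Gaussian kernel and keep local
    maxima of the response magnitude above a threshold, so each true
    edge is marked once, near its centre."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    dog = -x * np.exp(-x**2 / (2 * sigma**2))   # derivative of a Gaussian
    dog /= np.abs(dog).sum()                    # normalize the response scale
    mag = np.abs(np.convolve(signal, dog, mode="same"))
    return [i for i in range(1, len(mag) - 1)
            if mag[i] >= thresh and mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]]

rng = np.random.default_rng(1)
step = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
print(detect_edges_1d(step))  # expect a single marked point near index 50
```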

Relevance: 20.00%

Abstract:

This thesis presents a new actuator system consisting of a micro-actuator and a macro-actuator coupled in parallel via a compliant transmission. The system is called the Parallel Coupled Micro-Macro Actuator, or PaCMMA. In this system, the micro-actuator is capable of high-bandwidth force control due to its low mass and direct-drive connection to the output shaft. The compliant transmission of the macro-actuator reduces the impedance (stiffness) at the output shaft and increases the dynamic range of force. Performance improvement over single-actuator systems was expected in force control, impedance control, force distortion, and reduction of transient impact forces. A set of quantitative measures is proposed and the actuator system is evaluated against them: force control bandwidth, position bandwidth, dynamic range, impact force, impedance ("backdriveability"), force distortion, and force performance space. Several theoretical performance limits are derived from the saturation limits of the system. A control law is proposed and control system performance is compared to the theoretical limits. A prototype testbed was built using permanent magnet motors, and an experimental comparison was performed between this actuator concept and two single-actuator systems. The following performance was observed: force bandwidth of 56 Hz, torque dynamic range of 800:1, peak torque of 1040 mNm, minimum torque of 1.3 mNm. Peak impact force was reduced by an order of magnitude. Distortion at small amplitudes was reduced substantially. Backdriven impedance was reduced by 2-3 orders of magnitude. This actuator system shows promise for manipulator design as well as for psychophysical tests of human performance.
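
The parallel coupling can be sketched as a toy two-input model: the macro-actuator drives the output through a spring/damper transmission while the micro-actuator applies force directly. All numerical parameters below are placeholders, not the prototype's values.

```python
import numpy as np

# Placeholder parameters, not the prototype's values.
m_out = 0.05      # output inertia (kg)
k_trans = 50.0    # compliant transmission stiffness (N/m)
b_trans = 0.5     # transmission damping (N*s/m)
dt = 1e-4         # integration step (s)

def step(x_out, v_out, x_macro, f_micro):
    """One Euler step: the macro-actuator pulls the output through the
    spring/damper transmission, while the micro-actuator applies force
    directly, giving it high-bandwidth authority over the output."""
    f_trans = k_trans * (x_macro - x_out) - b_trans * v_out
    a = (f_trans + f_micro) / m_out
    return x_out + v_out * dt, v_out + a * dt

x, v = 0.0, 0.0
for i in range(10000):                                # simulate 1 s
    t = i * dt
    x_macro = 0.01                                    # macro holds a coarse set-point
    f_micro = 0.2 * np.sin(2 * np.pi * 50 * t)        # fast force component
    x, v = step(x, v, x_macro, f_micro)
print(round(x, 4))  # output settles near the macro set-point, with a small ripple
```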

Relevance: 20.00%

Abstract:

As exploration of our solar system and outer space moves into the future, spacecraft are being developed to venture on increasingly challenging missions with bold objectives. The spacecraft tasked with completing these missions are becoming progressively more complex. This increases the potential for mission failure due to hardware malfunctions and unexpected spacecraft behavior. A solution to this problem lies in the development of an advanced fault management system. Fault management enables a spacecraft to respond to failures and take repair actions so that it may continue its mission. The two main approaches developed for spacecraft fault management have been rule-based and model-based systems. Rules map sensor information to system behaviors, thus achieving fast response times and making the actions of the fault management system explicit. These rules are developed by having a human reason through the interactions between spacecraft components, a process limited by the number of interactions a human can reason about correctly. In the model-based approach, the human provides component models, and the fault management system reasons automatically about system-wide interactions and complex fault combinations. This approach improves correctness and makes the underlying system models explicit, whereas they are implicit in the rule-based approach.

We propose a fault detection engine, Compiled Mode Estimation (CME), that unifies the strengths of the rule-based and model-based approaches. CME uses a compiled model to determine spacecraft behavior more accurately. Reasoning related to fault detection is compiled in an off-line process into a set of concurrent, localized diagnostic rules. These are then combined on-line with sensor information to reconstruct the diagnosis of the system. These rules enable a human to inspect the diagnostic consequences of CME. Additionally, CME is capable of reasoning through component interactions automatically while still providing fast and correct responses. The implementation of this engine has been tested against the NEAR spacecraft's advanced rule-based system, resulting in detection of failures beyond those caught by the rules. This evolution in fault detection will enable future missions to explore the furthest reaches of the solar system without the burden of human intervention to repair failed components.
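
The on-line combination of compiled, localized diagnostic rules with sensor information can be sketched as follows; the components and rules are invented for illustration and are not drawn from CME or the NEAR system.

```python
# Invented example, for illustration only: each compiled rule is local
# to one component and maps its sensor readings to a candidate mode;
# an on-line pass combines the local verdicts into a system diagnosis.

RULES = {
    "thruster": lambda s: "failed" if s["commanded"] and not s["thrust"] else "ok",
    "valve":    lambda s: "stuck_closed" if s["open_cmd"] and s["flow"] == 0 else "ok",
}

def diagnose(sensors):
    """Combine per-component rule outputs into a system-level diagnosis."""
    return {name: rule(sensors[name]) for name, rule in RULES.items()}

readings = {
    "thruster": {"commanded": True, "thrust": False},
    "valve":    {"open_cmd": True, "flow": 0},
}
print(diagnose(readings))
# {'thruster': 'failed', 'valve': 'stuck_closed'} -- and because each
# rule is local to one component, an engineer can inspect it directly.
```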

Relevance: 20.00%

Abstract:

This volume of the final report documents the technical work performed from December 1998 through December 2002 under Cooperative Agreement F33615-97-2-5153, executed between the U.S. Air Force, Air Force Research Laboratory, Materials and Manufacturing Directorate, Manufacturing Technology Division (AFRL/MLM) and the McDonnell Douglas Corporation, a wholly-owned subsidiary of The Boeing Company. The work was accomplished by The Boeing Company, Phantom Works, Huntington Beach, St. Louis, and Seattle; Ford Motor Company; Integral Inc.; the Sloan School of Management at the Massachusetts Institute of Technology; Pratt & Whitney; and Central State University in Xenia, Ohio, in association with Raytheon Corporation. The LeanTEC program manager for AFRL is John Crabill of AFRL/MLMP, and The Boeing Company program manager is Ed Shroyer of Boeing Phantom Works in Huntington Beach, CA. Financial performance under this contract is documented in the Financial Volume of the final report.

Relevance: 20.00%

Abstract:

Existing fuel taxes play a major role in determining the welfare effects of exempting the transportation sector from measures to control greenhouse gases. To study this phenomenon we modify the MIT Emissions Prediction and Policy Analysis (EPPA) model to disaggregate the household transportation sector. This improvement requires an extension of the GTAP data set that underlies the model. The revised and extended facility is then used to compare the economic costs of cap-and-trade systems differentiated by sector, focusing on two regions: the USA, where fuel taxes are low, and Europe, where fuel taxes are high. We find that the interplay between carbon policies and pre-existing taxes leads to different results in these regions: in the USA, exempting transport from such a system would increase the welfare cost of achieving a national emissions target, while in Europe such an exemption would correct pre-existing distortions and reduce the cost.

Relevance: 20.00%

Abstract:

We consider the often-studied problem of sorting on a parallel computer. Given an input array distributed evenly over p processors, the task is to compute the sorted output array, also distributed over the p processors. Many existing algorithms take the approach of approximately load-balancing the output, leaving each processor with Θ(n/p) elements. However, in many cases, approximate load-balancing leads to inefficiencies in both the sorting itself and in further uses of the data after sorting. We provide a deterministic parallel sorting algorithm that uses parallel selection to produce any output distribution exactly, in particular one that is perfectly load-balanced. Furthermore, when using a comparison sort, this algorithm is 1-optimal in both computation and communication. We provide an empirical study that illustrates the efficiency of exact data splitting and shows an improvement over two sample sort algorithms.
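
The effect of exact splitting can be illustrated with a sequential stand-in: here a global merge takes the place of parallel selection in choosing splitters at exact ranks k·n/p, so every simulated processor ends with exactly n/p elements.

```python
import heapq

def exact_split_sort(local_arrays):
    """Simulate exact splitting on p 'processors' (plain lists). Each
    input list is sorted locally; the sorted whole is then cut at exact
    ranks k*n/p, so every output partition holds exactly n/p elements.
    The global merge is a sequential stand-in for parallel selection."""
    p = len(local_arrays)
    runs = [sorted(a) for a in local_arrays]   # local sort phase
    merged = list(heapq.merge(*runs))          # stand-in for splitter selection
    n = len(merged)
    assert n % p == 0, "toy version assumes p divides n"
    # Perfectly load-balanced output: exactly n/p elements per processor.
    return [merged[i * n // p:(i + 1) * n // p] for i in range(p)]

parts = exact_split_sort([[5, 1, 9, 3], [8, 2, 7, 4], [6, 0, 11, 10]])
print(parts)                      # three sorted runs
print([len(r) for r in parts])    # [4, 4, 4] -- exact balance
```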