10 results for Computing device mechanism
at Massachusetts Institute of Technology
Abstract:
General-purpose computing devices allow us to (1) customize computation after fabrication and (2) conserve area by reusing expensive active circuitry for different functions in time. We define RP-space, a restricted domain of the general-purpose architectural space focused on reconfigurable computing architectures. Two dominant features differentiate reconfigurable from special-purpose architectures and account for most of the area overhead associated with RP devices: (1) instructions which tell the device how to behave, and (2) flexible interconnect which supports task-dependent dataflow between operations. We can characterize RP-space by the allocation and structure of these resources and compare the efficiencies of architectural points across broad application characteristics. Conventional FPGAs fall at one extreme end of this space and their efficiency ranges over two orders of magnitude across the space of application characteristics. Understanding RP-space and its consequences allows us to pick the best architecture for a task and to search for more robust design points in the space. Our DPGA, a fine-grained computing device which adds small, on-chip instruction memories to FPGAs, is one such design point. For typical logic applications and finite-state machines, a DPGA can implement tasks in one-third the area of a traditional FPGA. TSFPGA, a variant of the DPGA which focuses on heavily time-switched interconnect, achieves circuit densities close to the DPGA, while reducing typical physical mapping times from hours to seconds. Rigid, fabrication-time organization of instruction resources significantly narrows the range of efficiency for conventional architectures. To avoid this performance brittleness, we developed MATRIX, the first architecture to defer the binding of instruction resources until run-time, allowing the application to organize resources according to its needs. Our focus MATRIX design point is based on an array of 8-bit ALU and register-file building blocks interconnected via a byte-wide network. With today's silicon, a single-chip MATRIX array can deliver over 10 Gop/s (8-bit ops). On sample image processing tasks, we show that MATRIX yields 10-20x the computational density of conventional processors. Understanding the cost structure of RP-space helps us identify these intermediate architectural points and may provide useful insight more broadly in guiding our continual search for robust and efficient general-purpose computing structures.
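A back-of-the-envelope check of the peak-throughput figure (the array size and clock rate below are illustrative assumptions, not values quoted in the abstract):

\[
N_{\mathrm{ALU}} \times f_{\mathrm{clk}} \;\approx\; 100 \times 100\,\mathrm{MHz} \;=\; 10^{10}\ \text{8-bit ops/s} \;=\; 10\ \mathrm{Gop/s}.
\]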
Abstract:
The dynamic power requirement of CMOS circuits is rapidly becoming a major concern in the design of personal information systems and large computers. In this work we present a number of new CMOS logic families, Charge Recovery Logic (CRL) as well as the much improved Split-Level Charge Recovery Logic (SCRL), within which the transfer of charge between the nodes occurs quasistatically. Operating quasistatically, these logic families have an energy dissipation that drops linearly with operating frequency, i.e., their power consumption drops quadratically with operating frequency as opposed to the linear drop of conventional CMOS. The circuit techniques in these new families rely on constructing an explicitly reversible pipelined logic gate, where the information necessary to recover the energy used to compute a value is provided by computing its logical inverse. Information necessary to uncompute the inverse is available from the subsequent inverse logic stage. We demonstrate the low energy operation of SCRL by presenting the results from the testing of the first fully quasistatic 8 x 8 multiplier chip (SCRL-1) employing SCRL circuit techniques.
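A textbook sketch of why quasistatic operation yields this scaling (the RC charging model below is a standard illustration, not a derivation taken from the abstract): conventional CMOS dissipates a fixed energy per transition, whereas charging a node of capacitance C through an effective resistance R over a switching time T that grows as the clock slows dissipates an energy proportional to 1/T.

\[
E_{\mathrm{conv}} \approx \tfrac{1}{2} C V_{dd}^{2}, \qquad P_{\mathrm{conv}} \approx \alpha C V_{dd}^{2} f
\]
\[
E_{\mathrm{quasi}} \approx \frac{RC}{T}\, C V_{dd}^{2} \;\propto\; f, \qquad P_{\mathrm{quasi}} \approx E_{\mathrm{quasi}}\, f \;\propto\; f^{2}
\]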
Abstract:
In this thesis I present a language for instructing a sheet of identically-programmed, flexible, autonomous agents ("cells") to assemble themselves into a predetermined global shape, using local interactions. The global shape is described as a folding construction on a continuous sheet, using a set of axioms from paper-folding (origami). I provide a means of automatically deriving the cell program, executed by all cells, from the global shape description. With this language, a wide variety of global shapes and patterns can be synthesized, using only local interactions between identically-programmed cells. Examples include flat layered shapes, all plane Euclidean constructions, and a variety of tessellation patterns. In contrast to approaches based on cellular automata or evolution, the cell program is directly derived from the global shape description and is composed from a small number of biologically-inspired primitives: gradients, neighborhood query, polarity inversion, cell-to-cell contact and flexible folding. The cell programs are robust, without relying on regular cell placement, global coordinates, or synchronous operation and can tolerate a small amount of random cell death. I show that an average cell neighborhood of 15 is sufficient to reliably self-assemble complex shapes and geometric patterns on randomly distributed cells. The language provides many insights into the relationship between local and global descriptions of behavior, such as the advantage of constructive languages, mechanisms for achieving global robustness, and mechanisms for achieving scale-independent shapes from a single cell program. The language suggests a mechanism by which many related shapes can be created by the same cell program, in the manner of D'Arcy Thompson's famous coordinate transformations. The thesis illuminates how complex morphology and pattern can emerge from local interactions, and how one can engineer robust self-assembly.
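One of the biologically-inspired primitives named above, the gradient, admits a compact illustration. The following is a minimal sketch, assuming a simple synchronous hop-count relaxation over a neighbor graph; the data structures and names are hypothetical and are not the thesis's actual cell program, which runs asynchronously on each cell.

    #include <limits.h>
    #include <stddef.h>

    #define MAX_NEIGHBORS 32      /* the abstract reports ~15 neighbors on average */
    #define UNSET         INT_MAX

    /* One identically-programmed cell: it only knows its local neighbors. */
    typedef struct {
        int neighbor[MAX_NEIGHBORS];  /* indices of neighboring cells */
        int neighbor_count;
        int gradient;                 /* hop-count distance from the source cell(s) */
    } Cell;

    /* Gradient primitive (sketch): each cell repeatedly sets its value to one
       more than the smallest value among its neighbors, until the field stops
       changing.  Source cells hold value 0; all others start at UNSET. */
    void compute_gradient(Cell cells[], size_t n)
    {
        int changed = 1;
        while (changed) {
            changed = 0;
            for (size_t i = 0; i < n; i++) {
                int best = cells[i].gradient;
                for (int k = 0; k < cells[i].neighbor_count; k++) {
                    int g = cells[cells[i].neighbor[k]].gradient;
                    if (g != UNSET && g + 1 < best)
                        best = g + 1;
                }
                if (best < cells[i].gradient) {
                    cells[i].gradient = best;
                    changed = 1;
                }
            }
        }
    }

After relaxation, each cell holds its hop-count distance to the nearest source, a local quantity that folding constructions can use in place of global coordinates.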
Abstract:
Traditionally, we've focused on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life-cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then to manipulate the system so that it does what it is supposed to. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only 1 million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access control model and how new layers of abstraction could further enrich this model.
Abstract:
Conventional floating-gate non-volatile memories (NVMs) present critical issues for device scalability beyond the sub-90 nm node, such as gate length and tunnel oxide thickness reduction. Nanocrystalline germanium (nc-Ge) quantum dot flash memories are a fully CMOS-compatible technology based on discrete, isolated charge-storage nodules, which have the potential to push the scalability of conventional NVMs further. Quantum dot memories offer lower operating voltages than conventional floating-gate (FG) Flash memories because their thinner tunnel dielectrics allow higher tunneling probabilities. The isolated charge nodules suppress charge loss through lateral paths, thereby achieving a superior charge retention time. Despite the considerable effort devoted to the study of nanocrystal Flash memories, the charge storage mechanism remains obscure. Recent studies suggest that interfacial defects of the nanocrystals play a role in charge storage, although storage in the nanocrystal conduction band by quantum confinement has been reported earlier. In this work, a single-transistor memory structure with a threshold voltage shift, ΔVth, exceeding ~1.5 V, corresponding to interface charge trapping in nc-Ge and operating at 0.96 MV/cm, is presented. The trapping effect is eliminated when nc-Ge is synthesized in forming gas, thus excluding the possibility of quantum confinement and Coulomb blockade effects. Through discharging kinetics, the model of deep-level trap charge storage is confirmed. The trap energy level depends on the matrix which confines the nc-Ge.
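For orientation, the reported threshold shift can be related to the stored charge with the standard charge-trapping estimate below (a hedged illustration; the symbols are generic and the relation is not quoted from the abstract):

\[
\Delta V_{th} \;\approx\; \frac{Q_{\mathrm{trapped}}}{C_{\mathrm{ctrl}}} \;=\; \frac{q\, n_{\mathrm{trap}}\, t_{\mathrm{ctrl}}}{\varepsilon_{\mathrm{ox}}},
\]

where n_trap is the areal density of charge trapped at the nc-Ge interface, t_ctrl is the control-oxide thickness, and ε_ox its permittivity; a ~1.5 V shift thus corresponds to a definite trapped charge density once the control-oxide stack is specified.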
Abstract:
We present a low-cost and easily deployed infrastructure for location-aware computing that is built using standard Bluetooth® technologies and personal computers. Mobile devices are able to determine their location to room-level granularity with existing Bluetooth technology, and to even greater resolution with the use of the recently adopted Bluetooth 1.2 specification, all while maintaining complete anonymity. Various techniques for improving the speed and resolution of the system are described, along with their tradeoffs in privacy. The system is trivial to implement on a large scale – our network covering 5,000 square meters was deployed by a single student over the course of a few days at a cost of less than US$1,000.
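Room-level granularity follows from attributing the mobile device to the fixed base station that observes it most strongly; the sketch below illustrates that idea only (the struct fields, the RSSI-based selection rule, and the room labels are hypothetical, not details of the deployed system).

    #include <stdio.h>

    /* A sighting of the mobile device by one fixed Bluetooth base station. */
    typedef struct {
        const char *room;   /* room where the base station is installed */
        int rssi_dbm;       /* received signal strength of the sighting */
    } Sighting;

    /* Room-level localization (sketch): attribute the device to the room of
       the base station that hears it most strongly. */
    const char *locate_room(const Sighting s[], int n)
    {
        if (n == 0) return "unknown";
        int best = 0;
        for (int i = 1; i < n; i++)
            if (s[i].rssi_dbm > s[best].rssi_dbm)
                best = i;
        return s[best].room;
    }

    int main(void)
    {
        /* Hypothetical sightings from three base stations. */
        Sighting seen[] = { {"room-A", -72}, {"room-B", -58}, {"room-C", -80} };
        printf("device is probably in %s\n", locate_room(seen, 3));
        return 0;
    }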
Abstract:
Electroosmotic flow is a convenient mechanism for transporting polar fluid in a microfluidic device. The flow is generated by an external electric field acting on the free charges that exist in a thin Debye layer at the channel walls. The charge on the wall is due to the chemistry of the solid-fluid interface, and it can vary along the channel, e.g. due to modification of the wall. This investigation focuses on the simulation of the electroosmotic flow (EOF) profile in a cylindrical microchannel with a step change in zeta potential. The modified Navier-Stokes equation governing the velocity field and a non-linear two-dimensional Poisson-Boltzmann equation governing the electrical double-layer (EDL) field distribution are solved numerically using a finite control-volume method. Continuity of flow rate and electric current is enforced, resulting in a non-uniform electric field and pressure gradient distribution along the channel. At the junction of the step change in zeta potential, a parabolic velocity distribution, more typical of a pressure-driven flow profile, is obtained.
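For reference, the governing equations named above take the following standard forms (a hedged sketch; the boundary conditions, cylindrical-coordinate form, and control-volume discretization are omitted):

\[
\varepsilon \nabla^{2}\psi \;=\; 2 n_{0} z e\, \sinh\!\left(\frac{z e \psi}{k_{B} T}\right),
\qquad
\rho_{e} \;=\; -\,2 n_{0} z e\, \sinh\!\left(\frac{z e \psi}{k_{B} T}\right),
\]
\[
\mu \nabla^{2}\mathbf{u} \;-\; \nabla p \;+\; \rho_{e}\,\mathbf{E} \;=\; 0,
\qquad
\nabla \cdot \mathbf{u} \;=\; 0,
\]

where ψ is the EDL potential fixed by the local zeta potential at the wall, ρ_e is the net charge density, E is the applied electric field, and u and p are the velocity and pressure fields.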
Abstract:
Performance and manufacturability are two important issues that must be taken into account during MEMS design. Existing MEMS design models or systems follow a process-driven design paradigm, that is, design starts from the specification of a process sequence or the customization of a foundry-ready process template. There has been essentially no methodology or model that supports generic, high-level design synthesis for MEMS conceptual design. As a result, there is no basis for specifying the initial process sequences. To address this problem, this paper proposes a performance-driven, microfabrication-oriented methodology for MEMS conceptual design. A unified behaviour representation method is proposed which incorporates information about both physical interactions and chemical/biological/other reactions. Based on this method, a behavioural-process-based design synthesis model is proposed which exploits multidisciplinary phenomena for design solutions, including both the structural components and their configuration for the MEMS device, as well as the necessary substances for the chemical/biological/other reactions. The model supports both forward and backward synthetic search for suitable phenomena. To ensure manufacturability, a strategy of using microfabrication-oriented phenomena as design knowledge is proposed, where the phenomena are developed from existing MEMS devices that have associated MEMS-specific microfabrication processes or foundry-ready process templates. To test the applicability of the proposed methodology, the paper also studies microfluidic device design and uses a micro-pump design as the case study.
Abstract:
Memory errors are a common cause of incorrect software execution and security vulnerabilities. We have developed two new techniques that help software continue to execute successfully through memory errors: failure-oblivious computing and boundless memory blocks. The foundation of both techniques is a compiler that generates code that checks accesses via pointers to detect out-of-bounds accesses. Instead of terminating or throwing an exception, the generated code takes another action that keeps the program executing without memory corruption. Failure-oblivious code simply discards invalid writes and manufactures values to return for invalid reads, enabling the program to continue its normal execution path. Code that implements boundless memory blocks stores invalid writes away in a hash table to return as the values for corresponding out-of-bounds reads. The net effect is to (conceptually) give each allocated memory block unbounded size and to eliminate out-of-bounds accesses as a programming error. We have implemented both techniques and acquired several widely used open source servers (Apache, Sendmail, Pine, Mutt, and Midnight Commander). With standard compilers, all of these servers are vulnerable to buffer overflow attacks as documented at security tracking web sites. Both failure-oblivious computing and boundless memory blocks eliminate these security vulnerabilities (as well as other memory errors). Our results show that our compiler enables the servers to execute successfully through buffer overflow attacks and to continue to correctly service user requests without security vulnerabilities.
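A minimal sketch of what the compiler-inserted checks conceptually do in the failure-oblivious case (function names and the manufactured-value policy are illustrative, not the paper's actual runtime):

    #include <stddef.h>

    /* An allocated memory block with its known bounds. */
    typedef struct {
        char  *base;
        size_t size;
    } Block;

    /* Out-of-bounds writes are silently discarded instead of corrupting memory. */
    static void checked_write(Block *b, size_t off, char value)
    {
        if (off < b->size)
            b->base[off] = value;
        /* else: discard the invalid write and keep executing */
    }

    /* Out-of-bounds reads return a manufactured value so execution continues. */
    static char checked_read(const Block *b, size_t off)
    {
        if (off < b->size)
            return b->base[off];
        return 0;   /* manufactured value for an invalid read */
    }

The boundless-memory-blocks variant would instead record an out-of-bounds write in a hash table keyed by the block and offset, and return that stored value for a matching out-of-bounds read.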
Abstract:
This paper proposes three tests to determine whether a given nonlinear device noise model is in agreement with accepted thermodynamic principles. These tests are applied to several models. One conclusion is that every Gaussian noise model for any nonlinear device predicts thermodynamically impossible circuit behavior: these models should be abandoned. But the nonlinear shot-noise model predicts thermodynamically acceptable behavior under a constraint derived here. Further, this constraint specifies the current noise amplitude at each operating point from knowledge of the device v-i curve alone. For the Gaussian and shot-noise models, this paper shows how the thermodynamic requirements can be reduced to concise mathematical tests involving no approximations.