5 results for Dynamic storage allocation (Computer science)
at Massachusetts Institute of Technology
Abstract:
Garbage collector performance in LISP systems on custom hardware has been substantially improved by the adoption of lifetime-based garbage collection techniques. To date, however, successful lifetime-based garbage collectors have required special-purpose hardware, or at least privileged access to data structures maintained by the virtual memory system. I present here a lifetime-based garbage collector requiring no special-purpose hardware or virtual memory system support, and discuss its performance.
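To make the lifetime-based (generational) idea concrete, here is a minimal two-generation collector sketch in Python. It is illustrative only, not the collector described in the abstract: the class names, the write barrier, and the promote-all-survivors policy are assumptions chosen for brevity.

    # Minimal sketch of a lifetime-based (two-generation) collector.
    # Hypothetical names and policy; not the paper's implementation.

    class Obj:
        def __init__(self, refs=()):
            self.refs = list(refs)          # outgoing pointers

    class Heap:
        def __init__(self):
            self.young, self.old = set(), set()
            self.roots = set()
            self.remembered = set()         # old objects pointing into the young generation

        def alloc(self, refs=()):
            o = Obj(refs)                   # new objects start in the young generation
            self.young.add(o)
            return o

        def write(self, src, dst):
            # Write barrier: record old->young pointers so a minor collection
            # can find young objects reachable only from the old generation.
            src.refs.append(dst)
            if src in self.old and dst in self.young:
                self.remembered.add(src)

        def minor_gc(self):
            # Trace only from roots and remembered old objects; survivors are
            # promoted (a deliberately aggressive promotion policy).
            live = {o for o in self.roots if o in self.young}
            stack = list(live | self.remembered)
            while stack:
                o = stack.pop()
                for r in o.refs:
                    if r in self.young and r not in live:
                        live.add(r)
                        stack.append(r)
            self.old |= live                # promote survivors
            self.young = set()              # the rest of the nursery is garbage
            self.remembered.clear()

The point of the sketch is that a minor collection never scans the old generation; it relies on the write barrier instead of special-purpose hardware or virtual memory traps to track cross-generation pointers.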
Abstract:
We report a 75 dB, 2.8 μW, 100 Hz-10 kHz envelope detector in a 1.5 μm, 2.8 V CMOS technology. The envelope detector performs input-DC-insensitive voltage-to-current-converting rectification followed by novel nanopower current-mode peak detection. The use of a subthreshold wide-linear-range transconductor (WLR OTA) allows greater than 1.7 Vpp input voltage swings. We show theoretically that this optimal performance is technology-independent for the given topology and may be improved only by spending more power. A novel circuit topology is used to perform 140 nW peak detection with controllable attack and release time constants. The lower limits of envelope detection are determined by the more dominant of two effects: the first effect is caused by the inability of amplified high-frequency signals to exceed the dead zone created by exponential nonlinearities in the rectifier; the second effect is due to an output current caused by thermal noise rectification. We demonstrate good agreement of experimentally measured results with theory. The envelope detector is useful in low-power bionic implants for the deaf, hearing aids, and speech-recognition front ends. Extension of the envelope detector to higher-frequency applications is straightforward if power consumption is increased.
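The signal chain described above (rectification followed by peak detection with separate attack and release time constants) has a simple discrete-time analogue. The sketch below is a digital idealization for illustration, not the analog circuit; the sample rate and time constants are placeholder values.

    import math

    def envelope(samples, fs=16000.0, tau_attack=0.005, tau_release=0.050):
        # Full-wave rectification followed by a one-pole peak detector that
        # charges with the attack constant and decays with the release constant.
        # Parameters are placeholders, not the circuit's measured values.
        a_att = math.exp(-1.0 / (fs * tau_attack))
        a_rel = math.exp(-1.0 / (fs * tau_release))
        env, out = 0.0, []
        for x in samples:
            r = abs(x)                          # rectified input
            a = a_att if r > env else a_rel     # charge fast, discharge slowly
            env = a * env + (1.0 - a) * r
            out.append(env)
        return out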
Abstract:
Dynamic optimization has several key advantages. This includes the ability to work on binary code in the absence of sources and to perform optimization across module boundaries. However, it has a significant disadvantage viz-a-viz traditional static optimization: it has a significant runtime overhead. There can be performance gain only if the overhead can be amortized. In this paper, we will quantitatively analyze the runtime overhead introduced by a dynamic optimizer, DynamoRIO. We found that the major overhead does not come from the optimizer's operation. Instead, it comes from the extra code in the code cache added by DynamoRIO. After a detailed analysis, we will propose a method of trace construction that ameliorate the overhead introduced by the dynamic optimizer, thereby reducing the runtime overhead of DynamoRIO. We believe that the result of the study as well as the proposed solution is applicable to other scenarios such as dynamic code translation and managed execution that utilizes a framework similar to that of dynamic optimization.
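The amortization argument can be pictured with a toy model of a code cache and trace construction. This is a hypothetical sketch of the general technique, not DynamoRIO's actual mechanism; the threshold and data structures are assumptions.

    # Toy model of a dynamic optimizer's code cache and trace construction.
    # Illustrative only; not DynamoRIO's implementation.

    class CodeCache:
        def __init__(self):
            self.blocks = {}    # entry address -> translated basic block
            self.traces = {}    # hot entry address -> stitched trace
            self.counts = {}    # execution counts used to detect hot heads

        def run_block(self, addr, program):
            # The first execution pays the translation overhead; later
            # executions reuse the cached copy, amortizing that cost.
            if addr not in self.blocks:
                self.blocks[addr] = program[addr]   # "translate" the block
            self.counts[addr] = self.counts.get(addr, 0) + 1
            return self.blocks[addr]

        def maybe_build_trace(self, addr, successors, threshold=50):
            # Once a block head becomes hot, stitch it together with its likely
            # successors so control transfers stay inside the code cache.
            if self.counts.get(addr, 0) >= threshold and addr not in self.traces:
                self.traces[addr] = [addr] + successors(addr)
            return self.traces.get(addr)

In this model the overhead the abstract refers to corresponds to the extra bookkeeping code that lives alongside the cached blocks, which is why the choice of how traces are constructed matters.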
Abstract:
Many online services access a large number of autonomous data sources and at the same time need to meet different user requirements. It is essential for these services to achieve semantic interoperability among these information exchange entities. In the presence of an increasing number of proprietary business processes, heterogeneous data standards, and diverse user requirements, it is critical that the services be implemented using adaptable, extensible, and scalable technology. The COntext INterchange (COIN) approach, inspired by similar goals of the Semantic Web, provides a robust solution. In this paper, we describe how COIN can be used to implement dynamic online services where semantic differences are reconciled on the fly. We show that COIN is flexible and scalable by comparing it with several conventional approaches. For a given ontology, the number of conversions in COIN is quadratic in the number of distinctions of the semantic aspect that has the largest number of distinctions. These semantic aspects are modeled as modifiers in a conceptual ontology; in most cases the number of conversions is linear in the number of modifiers, which is significantly smaller than in traditional hard-wired middleware approaches, where the number of conversion programs is quadratic in the number of sources and data receivers. In the example scenario in the paper, the COIN approach needs only 5 conversions to be defined, while traditional approaches require 20,000 to 100 million. COIN achieves this scalability by automatically composing all the comprehensive conversions from a small number of declaratively defined sub-conversions.
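The scaling difference can be made concrete with a back-of-the-envelope count. The figures and the counting formula below are hypothetical illustrations of the quadratic-versus-linear contrast, not the scenario or formulas from the paper.

    # Hypothetical conversion counts: a hard-wired approach needs a conversion
    # program for every (source, receiver) pair, while a COIN-style approach
    # only declares sub-conversions per modifier and composes the rest.

    def hardwired_conversions(n_sources, n_receivers):
        return n_sources * n_receivers

    def coin_subconversions(distinctions_per_modifier):
        # Worst case: one sub-conversion for each ordered pair of distinctions
        # of each modifier (quadratic in that modifier's distinctions);
        # often far fewer are needed in practice.
        return sum(d * (d - 1) for d in distinctions_per_modifier)

    print(hardwired_conversions(100, 200))    # 20,000 pairwise programs
    print(coin_subconversions([2, 3, 2]))     # 10 declared sub-conversions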
Abstract:
"The Structure and Interpretation of Computer Programs" is the entry-level subject in Computer Science at the Massachusetts Institute of Technology. It is required of all students at MIT who major in Electrical Engineering or in Computer Science, as one fourth of the "common core curriculum," which also includes two subjects on circuits and linear systems and a subject on the design of digital systems. We have been involved in the development of this subject since 1978, and we have taught this material in its present form since the fall of 1980 to approximately 600 students each year. Most of these students have had little or no prior formal training in computation, although most have played with computers a bit and a few have had extensive programming or hardware design experience. Our design of this introductory Computer Science subject reflects two major concerns. First we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute. Secondly, we believe that the essential material to be addressed by a subject at this level, is not the syntax of particular programming language constructs, nor clever algorithms for computing particular functions of efficiently, not even the mathematical analysis of algorithms and the foundations of computing, but rather the techniques used to control the intellectual complexity of large software systems.