120 results for Compression ignition engines.
Abstract:
Packet forwarding is a memory-intensive application requiring multiple accesses through a trie structure. The efficiency of a cache for this application critically depends on the placement function to reduce conflict misses. Traditional placement functions use a one-level mapping that naively partitions trie nodes into cache sets. However, as a significant percentage of trie nodes are not useful, these schemes suffer from a non-uniform distribution of useful nodes across sets, which in turn results in increased conflict misses. Newer organizations such as variable-associativity caches achieve flexibility in placement at the expense of increased hit latency, which makes them unsuitable for L1 caches. We propose a novel two-level mapping framework that retains the hit latency of one-level mapping yet incurs fewer conflict misses. This is achieved by introducing a second-level mapping that reorganizes the nodes in the naive initial partitions into refined partitions with a near-uniform distribution of nodes. Further, as this remapping is accomplished by simply adapting the index bits to a given routing table, the hit latency is not affected. We propose three new schemes that result in up to a 16% reduction in the number of misses and a 13% speedup in memory access time. In comparison, an XOR-based placement scheme, known to perform extremely well for general-purpose architectures, obtains at most a 2% speedup in memory access time.
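As a rough illustration of the contrast drawn above, the sketch below shows a naive one-level set index next to an XOR-folded index of the kind the abstract uses as its baseline comparison. The bit widths and function names are illustrative assumptions, not the paper's configuration:

```python
def onelevel_index(addr, set_bits=6, offset_bits=5):
    """Naive one-level placement: the set index is taken directly
    from a fixed field of address bits."""
    return (addr >> offset_bits) & ((1 << set_bits) - 1)

def xor_index(addr, set_bits=6, offset_bits=5):
    """XOR-based placement: fold the low tag bits into the index so that
    addresses sharing the same index field are spread across sets."""
    index = (addr >> offset_bits) & ((1 << set_bits) - 1)
    tag_low = (addr >> (offset_bits + set_bits)) & ((1 << set_bits) - 1)
    return index ^ tag_low
```

Two addresses that collide under the one-level mapping can land in different sets under the XOR mapping, which is the conflict-miss reduction these placement functions trade against each other.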
Abstract:
To effectively support today's global economy, database systems need to manage data in multiple languages simultaneously. While current database systems do support the storage and management of multilingual data, they are not capable of querying across different natural languages. To address this lacuna, we have recently proposed two cross-lingual functionalities, LexEQUAL [13] and SemEQUAL [14], for matching multilingual names and concepts, respectively. In this paper, we investigate the native implementation of these multilingual functionalities as first-class operators on relational engines. Specifically, we propose a new multilingual storage datatype and an associated algebra of multilingual operators on this datatype. These components have been successfully implemented in the PostgreSQL database system, including integration of the algebra with the query optimizer and inclusion of a metric index in the access layer. Our experiments demonstrate that the performance of the native implementation is up to two orders of magnitude faster than the corresponding outside-the-server implementation. Further, these multilingual additions do not adversely impact existing functionality and performance. To the best of our knowledge, our prototype represents the first practical implementation of a cross-lingual database query engine.
Abstract:
We propose the design and implementation of a hardware architecture for a spatial-prediction-based image compression scheme, which consists of a prediction phase and a quantization phase. In the prediction phase, a hierarchical tree structure obtained from the test image is used to predict every central pixel of the image from its four neighboring pixels. The prediction scheme generates an error image, to which a wavelet/sub-band coding algorithm can be applied to obtain efficient compression. The software model is tested for its performance in terms of entropy and standard deviation. Memory and silicon-area constraints play a vital role in the realization of the hardware for hand-held devices. The hardware architecture constructed for the proposed scheme exploits both instruction-level and data-level parallelism. The processor consists of pipelined functional units to obtain maximum throughput and a higher speed of operation. The hardware model is analyzed for performance in terms of throughput, speed, and power. The results indicate that the proposed architecture is suitable for power-constrained implementations with higher data rates.
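The prediction step described above can be sketched as follows. Note that the paper derives its predictor from a hierarchical tree structure; this minimal stand-in simply averages the four neighbours of each interior pixel and reports the first-order entropy used as a compressibility measure:

```python
import numpy as np

def prediction_error(img):
    """Predict each interior pixel as the mean of its four neighbours
    (up, down, left, right) and return the residual (error) image."""
    img = img.astype(np.int64)
    pred = (img[:-2, 1:-1] + img[2:, 1:-1]
            + img[1:-1, :-2] + img[1:-1, 2:]) // 4
    return img[1:-1, 1:-1] - pred

def entropy(values):
    """First-order entropy in bits/symbol of the residuals; lower entropy
    means the error image compresses better under sub-band coding."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

On smooth regions the residuals concentrate near zero, which is exactly why coding the error image instead of the raw pixels pays off.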
Abstract:
A comparative study of the strain response and mechanical properties of rammed earth prisms has been made using Fiber Bragg Grating (FBG) sensors (optical) and a clip-on extensometer (electro-mechanical). The aim of this study is to address the merits and demerits of the traditional extensometer vis-à-vis the FBG sensor; a uni-axial compression test has been performed on a rammed earth prism to validate its structural properties from the stress-strain curves obtained by the two different methods of measurement. An array of FBG sensors on a single fiber with varying Bragg wavelengths (λB) has been used to spatially resolve the strains along the height of the specimen. It is interesting to note from the obtained stress-strain curves that the initial tangent modulus obtained using the FBG sensor is lower than that obtained using the clip-on extensometer. The results also indicate that the strains measured by the FBG and extensometer sensors follow the same trend and that both sensors register the maximum strain value at the same time.
Abstract:
This paper considers the high-rate performance of source coding for noisy discrete symmetric channels with random index assignment (IA). Accurate analytical models are developed to characterize the expected distortion performance of vector quantization (VQ) for a large class of distortion measures. It is shown that when the point density is continuous, the distortion can be approximated as the sum of the source quantization distortion and the channel-error-induced distortion. Expressions are also derived for the continuous point density that minimizes the expected distortion. Next, for the case of mean squared error distortion, a more accurate analytical model for the distortion is derived by allowing the point density to have a singular component. The extent of the singularity is also characterized. These results provide analytical models for the expected distortion performance of both conventional VQ and channel-optimized VQ. As a practical example, compression of the linear predictive coding parameters in the wideband speech spectrum is considered, with the log spectral distortion as the performance metric. The theory correctly predicts the channel error rate that is permissible for operation at a particular level of distortion.
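The central high-rate approximation stated above, that with a continuous point density the expected distortion splits into a source term and a channel term, can be written schematically as follows (the symbol names are illustrative, not the paper's notation):

```latex
% High-rate approximation for VQ over a noisy channel with random IA:
% total expected distortion decomposes additively when the point
% density is continuous.
D_{\mathrm{total}} \;\approx\;
  \underbrace{D_{\mathrm{Q}}}_{\text{source quantization}}
  \;+\;
  \underbrace{D_{\mathrm{C}}}_{\text{channel-error induced}}
```

The singular-component refinement mentioned for the MSE case tightens this model where the additive split is no longer accurate.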
Abstract:
In this paper, we explore the use of LDPC codes for nonuniform sources under the distributed source coding paradigm. Our analysis reveals that several capacity-approaching LDPC codes indeed approach the Slepian-Wolf bound for nonuniform sources as well. Monte Carlo simulation results show that highly biased sources can be compressed to within 0.049 bits/sample of the Slepian-Wolf bound at moderate block lengths.
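For intuition on the bound being approached: for an i.i.d. Bernoulli(p) source with no side information, the lossless limit reduces to the binary entropy H(p), which a short sketch can evaluate. The choice p = 0.1 below is an illustrative assumption; the 0.049 bits/sample gap is the figure reported in the abstract:

```python
import math

def binary_entropy(p):
    """H(p) in bits/sample: the lossless compression limit for an
    i.i.d. Bernoulli(p) source (no side information)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A highly biased source needs far less than 1 bit/sample; operating
# 0.049 bits/sample above the bound (the abstract's figure) gives:
rate_gap = 0.049
achieved_rate = binary_entropy(0.1) + rate_gap
```

For p = 0.1 the bound is about 0.47 bits/sample, so the reported gap is a small fraction of the rate itself.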
Abstract:
Commercial-purity (99.8%) magnesium single crystals were subjected to plane strain compression (PSC) along the c-axis at 200 and 370 °C and a constant strain rate of 10⁻³ s⁻¹. Extension was confined to the ⟨112̄0⟩ direction and the specimens were strained up to a logarithmic true strain of -1. The initial rapid increase in flow stress was followed by significant work softening at different stresses and comparable strains of about -0.05, related to macroscopic twinning events. The microstructure of the specimen after PSC at 200 °C was characterized by a high density of {101̄1} and {101̄3} compression twins, some of which were recrystallized. After PSC at 370 °C, completely recrystallized twin bands were the major feature of the observed microstructure. All new grains in these bands retained the same c-axis orientation as their compression-twin hosts. The basal plane in these grains was randomly rotated around the c-axis, forming a fiber texture component. The obtained results are discussed with respect to the mechanism of recrystallization, the specific character of the boundaries between new grains and the initial matrix, and the importance of the dynamically recrystallized bands for strain accommodation in these deformed magnesium single crystals. (C) 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information that needs to be transmitted by the N sources while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each of the sources to compress all the information it possesses and transmit this to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is only interested in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then uses linearity to directly recover the needed linear combination. The article is part review and in part presents new results: the portion that deals with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that will enable function computation forces us to consider the linear compression of a source. While in the finite field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
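The "common linear mapping" idea can be illustrated over GF(2): if every source applies the same matrix A, the destination can add the received encodings and, by linearity, decode the sum without ever reconstructing the individual sources. In this toy sketch, invertibility of A stands in for the decodability that a good linear code would provide at compressive rates; the matrix, sizes, and function names are illustrative assumptions:

```python
import numpy as np

# Common linear encoding map over GF(2), shared by all sources.
# Invertibility mod 2 is what makes this toy example decodable.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 0]], dtype=np.int64)

def encode(x):
    """Every source applies the SAME linear map A over GF(2)."""
    return A @ x % 2

def recover_sum(messages):
    """Destination adds the encodings; by linearity the result equals
    A @ (x1 + x2 + ...) mod 2, so inverting A yields the desired sum."""
    combined = np.sum(messages, axis=0) % 2
    # Invert A over GF(2) by Gauss-Jordan elimination on [A | I].
    n = A.shape[0]
    M = np.concatenate([A % 2, np.eye(n, dtype=np.int64)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r, col])
        M[[col, pivot]] = M[[pivot, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] = (M[r] + M[col]) % 2
    A_inv = M[:, n:]
    return A_inv @ combined % 2
```

The point of the abstract is precisely that over rings, unlike fields, such a common map cannot always compress each source down to its entropy.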
Abstract:
The present article demonstrates how the stiffness and hardness, as well as the cellular response, of bioinert high-density polyethylene (HDPE) can be significantly improved with the combined addition of both bioinert and bioactive ceramic fillers. For this purpose, different amounts of hydroxyapatite and alumina, limited to a total of 40 wt %, have been incorporated in the HDPE matrix. An important step in composite fabrication was to select an appropriate solvent and an optimal addition of coupling agent (CA). In the case of chemically coupled composites, 2% Titanium IV, 2-propanolato, tris iso-octadecanoato-O was used as the CA. All the hybrid composites, except monolithic HDPE, were fabricated under an optimized compression molding condition (140 °C, 0.75 h, 10 MPa pressure). The compression-molded composites were characterized using X-ray diffraction, Fourier transform infrared spectroscopy, and scanning electron microscopy. Importantly, in vitro cell culture and cell viability (MTT) studies using L929 fibroblast and SaOS2 osteoblast-like cells confirmed the good cytocompatibility of the developed hybrid composites. (C) 2011 Wiley Periodicals, Inc. J Appl Polym Sci, 2012
Abstract:
The coding gain in subband coding, a popular technique for achieving signal compression, depends on how the input signal spectrum is decomposed into subbands. The optimality of such a decomposition is conventionally addressed by designing appropriate filter banks. Here, the optimal decomposition of the input spectrum is addressed by choosing the set of bands that, for a given number of bands, achieves maximum coding gain. A set of necessary conditions for such optimality is derived, and an algorithm to determine the optimal band edges is then proposed. These band edges, along with ideal filters, achieve the upper bound on coding gain for a given number of bands. It is shown that with ideal filters, as well as with realizable filters of some given effective length, such a decomposition system performs better than the conventional nonuniform binary tree-structured decomposition in some cases, for AR sources as well as for images.
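For equal-width bands with ideal filters, the coding gain being maximized reduces to the classical ratio of the arithmetic to the geometric mean of the subband variances. The sketch below evaluates that ratio; the paper's optimization over band edges generalizes this by allowing non-uniform bandwidths, which this equal-width formula does not capture:

```python
import numpy as np

def coding_gain(subband_variances):
    """Subband coding gain for equal-width bands: arithmetic mean of the
    subband variances divided by their geometric mean (1.0 = no gain)."""
    v = np.asarray(subband_variances, dtype=float)
    return float(v.mean() / np.exp(np.log(v).mean()))
```

A flat spectrum (equal variances) gives a gain of 1.0, i.e. subband coding buys nothing; the more uneven the variance distribution, the larger the gain, which is why the placement of band edges matters.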
Abstract:
Summary form only given. A scheme for code compression with a fast decompression algorithm, which can be implemented using simple hardware, is proposed. The effectiveness of the scheme is evaluated on the TMS320C62x architecture, including the overheads of a line address table (LAT), and compression rates ranging from 70% to 80% are obtained. Two schemes for decompression are proposed. The basic idea underlying the scheme is a simple clustering algorithm that partially maps a block of instructions into a set of clusters. The clustering algorithm is a greedy algorithm based on the frequency of occurrence of various instructions.
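A minimal sketch of the frequency-driven greedy idea described above: the most frequent instruction words are grouped into fixed-size clusters that act as a dictionary, and the stream is costed as short indices for dictionary hits. All parameter values and the escape-cost model here are assumptions for illustration, not the paper's design:

```python
from collections import Counter

def greedy_clusters(instructions, num_clusters, cluster_size):
    """Greedy clustering by frequency: the most common instruction words
    fill fixed-size clusters; rarer words stay outside the dictionary."""
    top = Counter(instructions).most_common(num_clusters * cluster_size)
    words = [w for w, _ in top]
    return [words[i:i + cluster_size]
            for i in range(0, len(words), cluster_size)]

def compression_rate(instructions, clusters, word_bits=32, index_bits=8):
    """Compressed size / original size. Dictionary hits cost a short index;
    misses cost the raw word plus an escape index (assumed cost model)."""
    in_dict = {w for c in clusters for w in c}
    compressed = sum(index_bits if w in in_dict else word_bits + index_bits
                     for w in instructions)
    return compressed / (word_bits * len(instructions))
```

The abstract's 70-80% figures are ratios of compressed to original size on real TMS320C62x code; this toy cost model only illustrates why skewed instruction frequencies make such a dictionary effective.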
Abstract:
This article presents the deformation behavior of a high-strength pearlitic steel deformed by triaxial compression to achieve an ultra-fine ferrite grain size with fragmented cementite. The consequent evolution of microstructure and texture has been studied using scanning electron microscopy, electron back-scatter diffraction, and X-ray diffraction. The synergistic effect of diffusion and deformation leads to uniform dissolution of cementite at the higher temperature. At the lower temperature, significant grain refinement of the ferrite phase occurs by deformation, which exhibits a characteristic deformation texture. In contrast, the sample deformed at high temperature shows a weaker texture with a cube component for the ferrite phase, indicating the occurrence of recrystallization. The different mechanisms responsible for the refinement of ferrite and the fragmentation of cementite, and their interaction with each other, have been analyzed. Viscoplastic self-consistent simulation was employed to understand the deformation texture of the ferrite phase during triaxial compression.