977 results for Data Deduplication Compression
Abstract:
The use of binary fluid systems in thermally driven vapour absorption and mechanically driven vapour compression refrigeration and heat-pump cycles has provided an impetus for obtaining experimental data on the caloric properties of such fluid mixtures. However, direct measurements of these properties are somewhat scarce, even though the calorimetric techniques described in the literature are quite adequate. Most of the design data are derived through calculations using theoretical models and vapour-liquid equilibrium data. This article addresses the choice of working fluids and the current status of data availability vis-a-vis engineering applications. Particular emphasis is placed on organic working fluid pairs.
Abstract:
We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process, also known as graph simplification, nodes and (unweighted) edges are grouped into supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e. decompression) of a compressed, weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by one, and that the weight of an edge is approximated by the weight of the superedge. The compression problem then consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of the 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
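A small sketch of the decompression rule described above, assuming an illustrative node-to-supernode mapping and superedge weights (the names and the lookup structure are not from the paper):

    # Map each original node to its supernode, and each supernode pair to a superedge weight.
    supernode = {"a": "S1", "b": "S1", "c": "S2", "d": "S2"}
    superedge_weight = {("S1", "S1"): 0.9, ("S1", "S2"): 0.7}

    def decompressed_weight(u, v):
        """Approximate the weight of the original edge (u, v) from the compressed graph."""
        key = tuple(sorted((supernode[u], supernode[v])))
        return superedge_weight.get(key)   # None means the supernodes are not connected

    # Example: any original edge between an S1 node and an S2 node is approximated by 0.7.
    print(decompressed_weight("a", "c"))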
Abstract:
Processing maps for hot working of AISI 304L stainless steel have been developed on the basis of flow stress data generated by compression and torsion in the temperature range 600–1200 °C and strain rate range 0.1–100 s−1. The efficiency of power dissipation, given by 2m/(m+1) where m is the strain rate sensitivity, is plotted as a function of temperature and strain rate to obtain a processing map, which is interpreted on the basis of the Dynamic Materials Model. The maps obtained by compression as well as torsion exhibited a domain of dynamic recrystallization with its peak efficiency occurring at 1200 °C and 0.1 s−1. These are the optimum hot-working parameters, and they may be obtained by either of the test techniques. The peak efficiency for dynamic recrystallization is apparently higher in torsion (64%) than that obtained in constant-true-strain-rate compression (41%), and the difference is explained on the basis of strain rate variations occurring across the section of the solid torsion bar. A region of flow instability occurred at lower temperatures (below 1000 °C) and higher strain rates (above 1 s−1) and is wider in torsion than in compression. To achieve complete microstructure control in a component, the state of stress will have to be considered.
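For reference, the quoted efficiency of power dissipation follows directly from the strain rate sensitivity m; a minimal sketch with an illustrative value of m (not taken from the paper):

    def dissipation_efficiency(m):
        """Efficiency of power dissipation in the Dynamic Materials Model: eta = 2m / (m + 1)."""
        return 2.0 * m / (m + 1.0)

    # Example: a strain rate sensitivity of 0.3 gives an efficiency of about 0.46 (46%).
    print(dissipation_efficiency(0.3))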
Abstract:
Large external memory bandwidth requirements lead to increased system power dissipation and cost in video coding applications. The majority of the external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing power and bandwidth requirements. The low-cost, transformless compression technique uses a lossy reference for motion estimation to reduce memory traffic, and a lossless reference for motion compensation (MC) to avoid drift. Thus, it is compatible with all existing video standards. We calculate the quantization error bound and show that, by storing the quantization error separately, the bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to the video encode application. A 24-39% reduction in peak bandwidth and a 23-31% reduction in total average power consumption are observed for IBBP sequences.
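The drift-avoidance idea described above (lossy reference for motion estimation, exact reference reconstructed for motion compensation) can be sketched roughly as follows; the use of simple scalar quantization is an assumption made for illustration, since the abstract only states that the technique is transformless:

    import numpy as np

    def compress_reference(frame, step=8):
        """Scalar-quantize a reference frame and keep the quantization error separately."""
        q = np.round(frame.astype(np.int32) / step).astype(np.int32)
        lossy = q * step                        # read during motion estimation (less traffic)
        error = frame.astype(np.int32) - lossy  # fetched only where motion compensation needs it
        return lossy, error

    def lossless_reference(lossy, error):
        """Motion compensation uses lossy + error, so no drift is introduced."""
        return lossy + error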
Abstract:
The interest in low bit rate video coding has increased considerably. Despite rapid progress in storage density and digital communication system performance, demand for data-transmission bandwidth and storage capacity continues to exceed the capabilities of available technologies. The growth of data-intensive digital audio and video applications and the increased use of bandwidth-limited media such as video conferencing and full-motion video have not only sustained the need for efficient ways to encode analog signals, but made signal compression central to digital communication and data-storage technology. In this paper we explore techniques for compression of image sequences in a manner that optimizes the results for the human receiver. We propose a new motion estimator using two novel block match algorithms which are based on human perception. Simulations with image sequences have shown an improved bit rate while maintaining 'image quality' when compared to conventional motion estimation techniques using the MAD block match criterion.
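For comparison, the conventional MAD (mean absolute difference) block-match criterion mentioned above can be sketched as a full-search matcher; this generic illustration is not the perceptual matcher proposed in the paper:

    import numpy as np

    def mad(block_a, block_b):
        """Mean absolute difference between two equally sized blocks."""
        return np.mean(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)))

    def full_search(cur_block, ref_frame, top, left, radius=7):
        """Return the motion vector minimizing MAD within a +/- radius search window."""
        h, w = cur_block.shape
        best_mv, best_cost = (0, 0), float("inf")
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                    continue
                cost = mad(cur_block, ref_frame[y:y + h, x:x + w])
                if cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
        return best_mv, best_cost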
Abstract:
When the variation of secondary compression with log10 t is non-linear, the quantification of secondary settlement through the coefficient of secondary compression, C_αε, becomes difficult, which frequently leads to an underestimate of the settlement. A log10 δ versus log10 t representation of such true-compression data has the distinct advantage of exhibiting linear secondary compression behaviour over an appreciably larger time span. The slope of the secondary compression portion of the log10 e versus log10 t curve, expressed as Δ(log e)/Δ(log t) and called the 'secondary compression factor', m, proves to be a better alternative to C_αε, and the prediction of secondary settlement is improved.
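As a small numerical illustration of the slope definition above, the secondary compression factor can be obtained from a straight-line fit in log10-log10 space; the readings below are illustrative and not data from the paper:

    import numpy as np

    # Illustrative time (min) and secondary-compression readings.
    t = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
    delta = np.array([1.00, 1.08, 1.17, 1.27, 1.37])

    # Secondary compression factor m = Delta(log) / Delta(log t),
    # estimated here by a least-squares fit on the log10-log10 plot.
    m, _ = np.polyfit(np.log10(t), np.log10(delta), 1)
    print(m)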
Abstract:
The amount of data contained in electroencephalogram (EEG) recordings is quite massive, and this places constraints on bandwidth and storage. The requirement of online transmission of data calls for a scheme that allows higher performance with lower computation. Single-channel algorithms, when applied to multichannel EEG data, fail to meet this requirement. While many methods have been proposed for multichannel ECG compression, not much work appears to have been done in the area of multichannel EEG compression. In this paper, we present an EEG compression algorithm based on a multichannel model, which gives higher performance compared to other algorithms. Simulations have been performed on both normal and pathological EEG data, and it is observed that a high compression ratio with very large SNR is obtained in both cases. The reconstructed signals are found to match the original signals very closely, thus confirming that diagnostic information is preserved during transmission.
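The two figures of merit cited above, compression ratio and SNR of the reconstruction, are standard quantities; the definitions below are the usual generic ones, assumed rather than taken from the paper:

    import numpy as np

    def compression_ratio(original_bits, compressed_bits):
        """How many times smaller the compressed representation is."""
        return original_bits / compressed_bits

    def snr_db(original, reconstructed):
        """Signal-to-noise ratio of the reconstruction, in dB (inputs are numpy arrays)."""
        noise = original - reconstructed
        return 10.0 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))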
Abstract:
We propose a scheme for the compression of tree structured intermediate code consisting of a sequence of trees specified by a regular tree grammar. The scheme is based on arithmetic coding, and the model that works in conjunction with the coder is automatically generated from the syntactical specification of the tree language. Experiments on data sets consisting of intermediate code trees yield compression ratios ranging from 2.5 to 8, for file sizes ranging from 167 bytes to 1 megabyte.
Abstract:
The purpose of this paper is to present exergy charts for carbon dioxide (CO2) based on the new fundamental equation of state, together with the results of a thermodynamic analysis of conventional and trans-critical vapour compression refrigeration cycles using these data. The calculation scheme is implemented on the Mathematica platform. There exist upper and lower bounds on the high cycle pressure for a given set of evaporating and pre-throttling temperatures. The maximum possible exergetic efficiency for each case was determined. Empirical correlations for exergetic efficiency and COP, valid in the range of temperatures studied here, are obtained. The exergy losses have been quantified.
Abstract:
We propose the design and implementation of a hardware architecture for a spatial-prediction-based image compression scheme, which consists of a prediction phase and a quantization phase. In the prediction phase, a hierarchical tree structure obtained from the test image is used to predict every central pixel of the image from its four neighbouring pixels. The prediction scheme generates an error image, to which a wavelet/sub-band coding algorithm can be applied to obtain efficient compression. The software model is tested for its performance in terms of entropy and standard deviation. Memory and silicon area constraints play a vital role in realizing the hardware for hand-held devices. The hardware architecture constructed for the proposed scheme exploits parallelism in both instructions and data. The processor consists of pipelined functional units to obtain maximum throughput and a higher speed of operation. The hardware model is analyzed for performance in terms of throughput, speed, and power. The results of the hardware model indicate that the proposed architecture is suitable for power-constrained implementations with higher data rates.
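The prediction step described above, in which every interior pixel is predicted from its four neighbours to produce an error image for sub-band coding, can be sketched as below; the simple averaging predictor is an assumption, since the abstract does not specify how the four neighbours are combined:

    import numpy as np

    def prediction_error_image(img):
        """Predict each interior pixel from its 4-neighbourhood and return the residual image."""
        img = img.astype(np.int32)
        pred = np.zeros_like(img)
        pred[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                            img[1:-1, :-2] + img[1:-1, 2:]) // 4
        return img - pred   # residual passed to the wavelet/sub-band coder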
Abstract:
This paper presents a novel algorithm for compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the Discrete Cosine Transformed (DCT) signal. An extension is proposed to the well-known Steiglitz-McBride algorithm to model the higher frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by Differential Pulse Code Modulation (DPCM) of the model parameters. The method accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
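As a small illustration of the DPCM stage applied to the model parameters, a generic first-order DPCM with uniform quantization is sketched below; the actual predictor and quantizer used in the paper are not specified here:

    def dpcm_encode(params, step=0.01):
        """First-order DPCM: quantize the difference between successive parameter values."""
        codes, prev = [], 0.0
        for p in params:
            q = int(round((p - prev) / step))
            codes.append(q)
            prev += q * step        # track the decoder's reconstruction to avoid drift
        return codes

    def dpcm_decode(codes, step=0.01):
        values, prev = [], 0.0
        for q in codes:
            prev += q * step
            values.append(prev)
        return values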
Abstract:
In this paper, we explore the use of LDPC codes for nonuniform sources under the distributed source coding paradigm. Our analysis reveals that several capacity-approaching LDPC codes indeed approach the Slepian-Wolf bound for nonuniform sources as well. Monte Carlo simulation results show that highly biased sources can be compressed to within 0.049 bits/sample of the Slepian-Wolf bound for moderate block lengths.
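The benchmark quoted above is the Slepian-Wolf limit, which reduces to an entropy expression for the source model at hand; a minimal helper for the binary entropy term is shown below (the exact conditional-entropy bound depends on the source and correlation model, which the abstract does not restate):

    import math

    def binary_entropy(p):
        """H(p) in bits: entropy of a Bernoulli(p) source."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    # Example: a highly biased source with p = 0.1 needs at least about 0.469 bits/sample.
    print(binary_entropy(0.1))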
Abstract:
The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information that needs to be transmitted by the N sources while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each of the sources to compress all the information it possesses and transmit this to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is only interested in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then proceeds to use linearity to directly recover the needed linear combination. The article is in part a review and in part presents new results. The portion of the paper that deals with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that will enable function computation forces us to consider the linear compression of a source. While in the finite field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
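A toy illustration of the common-linear-mapping idea over the finite field GF(2); the dimensions and the random choice of the matrix are assumptions made purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 10, 4                              # source length n, compressed length k
    A = rng.integers(0, 2, size=(k, n))       # common linear map shared by all sources

    x1 = rng.integers(0, 2, size=n)           # two binary source sequences
    x2 = rng.integers(0, 2, size=n)

    y1 = (A @ x1) % 2                         # each source transmits only its k-bit image
    y2 = (A @ x2) % 2

    # Linearity lets the destination recover the image of the (mod-2) sum directly:
    assert np.array_equal((y1 + y2) % 2, (A @ ((x1 + x2) % 2)) % 2)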
Abstract:
A scheme for code compression that has a fast decompression algorithm, which can be implemented using simple hardware, is proposed. The effectiveness of the scheme on the TMS320C62x architecture, including the overheads of a line address table (LAT), is evaluated, and compression rates ranging from 70% to 80% are obtained. Two schemes for decompression are proposed. The basic idea underlying the scheme is a simple clustering algorithm that partially maps a block of instructions into a set of clusters. The clustering algorithm is a greedy algorithm based on the frequency of occurrence of the various instructions.
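A rough sketch of a greedy, frequency-driven assignment of instruction words to a fixed number of clusters; the cluster count, cluster size, and assignment rule are assumptions, not the paper's exact mapping:

    from collections import Counter

    def build_clusters(instructions, num_clusters=4, cluster_size=16):
        """Greedily fill clusters with the most frequently occurring instruction words."""
        ranked = [ins for ins, _ in Counter(instructions).most_common()]
        clusters = [ranked[i * cluster_size:(i + 1) * cluster_size]
                    for i in range(num_clusters)]
        covered = {ins for cluster in clusters for ins in cluster}
        # Instructions not covered by any cluster stay uncompressed ("partial" mapping).
        escaped = [ins for ins in instructions if ins not in covered]
        return clusters, escaped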
Abstract:
Unreinforced masonry (URM) structures that are in need of repair and rehabilitation constitute a significant portion of the building stock worldwide. The successful application of fiber-reinforced polymers (FRP) for repair and retrofitting of reinforced-concrete (RC) structures has opened new avenues for strengthening URM structures with FRP materials. The present study analyzes the behavior of FRP-confined masonry prisms under monotonic axial compression. Masonry comprising burnt clay bricks and cement-sand mortar (generally adopted in the Indian subcontinent), having an E_b/E_m ratio less than one, is employed in the study. The parameters considered in the study are: (1) masonry bonding pattern, (2) inclination of the loading axis to the bed joint, (3) type of FRP (carbon FRP or glass FRP), and (4) grade of FRP fabric. The performance of FRP-confined masonry prisms is compared with unconfined masonry prisms in terms of compressive strength, modulus of elasticity, and stress-strain response. The results showed an enhancement in compressive strength, modulus of elasticity, strain at peak stress, and ultimate strain for FRP-confined masonry prisms. FRP confinement of the masonry reduced the influence of the inclination of the loading axis to the bed joint on the compressive strength and failure pattern. Various analytical models available in the literature for the prediction of the compressive strength of FRP-confined masonry are assessed. New coefficients are generated for the analytical model by appending the experimental results of the current study to data available in the literature.