28 results for "Data compression"


Relevance: 30.00%

Abstract:

Classification of large datasets is a challenging task in data mining. In the current work, we propose a novel method that compresses the data and classifies the test data directly in its compressed form. The method is a hybrid learning approach integrating data abstraction, frequent-item generation, compression, classification, and the use of rough sets.
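
The abstract leaves the method unspecified, so the following is only a minimal sketch of the general idea of classifying data directly in compressed form. The bitmap encoding over a fixed list of frequent items and the nearest-prototype rule are illustrative assumptions, not the authors' algorithm.

```python
# Toy sketch: compress transactions to bitmaps over a fixed set of frequent
# items, then classify test data directly in this compressed form.
from collections import defaultdict

FREQUENT_ITEMS = ["bread", "milk", "eggs", "beer", "diapers", "butter"]  # assumed

def compress(transaction):
    """Encode a transaction as a bitmap over the frequent items only."""
    return tuple(int(item in transaction) for item in FREQUENT_ITEMS)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def train(labelled_transactions):
    """Build one compressed prototype (majority bitmap) per class."""
    groups = defaultdict(list)
    for transaction, label in labelled_transactions:
        groups[label].append(compress(transaction))
    return {label: tuple(int(2 * sum(col) >= len(bitmaps)) for col in zip(*bitmaps))
            for label, bitmaps in groups.items()}

def classify(transaction, prototypes):
    """Classify the compressed test transaction; no decompression is needed."""
    bitmap = compress(transaction)
    return min(prototypes, key=lambda lbl: hamming(bitmap, prototypes[lbl]))

data = [({"bread", "milk", "butter"}, "grocery"),
        ({"bread", "eggs", "milk"}, "grocery"),
        ({"beer", "diapers"}, "convenience"),
        ({"beer", "diapers", "eggs"}, "convenience")]
print(classify({"milk", "butter"}, train(data)))  # -> grocery
```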

Relevance: 30.00%

Abstract:

The use of binary fluid systems in thermally driven vapour-absorption and mechanically driven vapour-compression refrigeration and heat-pump cycles has provided an impetus for obtaining experimental data on the caloric properties of such fluid mixtures. However, direct measurements of these properties are somewhat scarce, even though the calorimetric techniques described in the literature are quite adequate. Most of the design data are derived through calculations using theoretical models and vapour-liquid equilibrium data. This article addresses the choice of working fluids and the current status of data availability vis-à-vis engineering applications. Particular emphasis is placed on organic working-fluid pairs.
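
As one concrete instance of deriving caloric data from vapour-liquid equilibrium information: if a one-parameter Margules model g_E/(RT) = A(T)·x1·x2 with A(T) = a + b/T has been fitted to VLE data, the Gibbs-Helmholtz relation h_E = -T²·d(g_E/T)/dT gives the excess enthalpy in closed form. The model choice and all numbers below are illustrative assumptions, not values from the article.

```python
# Sketch: binary-mixture enthalpy from a VLE-fitted activity model.
# Margules model: g_E/(R*T) = A(T)*x1*x2, with A(T) = a + b/T fitted to VLE.
# Gibbs-Helmholtz then gives h_E = -T^2 * d(g_E/T)/dT = R*b*x1*x2
# (the a-term drops out on differentiation).
R = 8.314  # J/(mol K)

def mixture_enthalpy(x1, h1, h2, b):
    """Molar enthalpy of a binary liquid (J/mol).
    h1, h2: pure-component molar enthalpies at the mixture temperature;
    b: fitted Margules temperature coefficient (K)."""
    x2 = 1.0 - x1
    h_excess = R * b * x1 * x2          # heat of mixing from Gibbs-Helmholtz
    return x1 * h1 + x2 * h2 + h_excess

# Hypothetical fit and pure-component data, for illustration only:
print(f"{mixture_enthalpy(x1=0.4, h1=12_000.0, h2=15_000.0, b=150.0):.0f} J/mol")
```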

Relevance: 30.00%

Abstract:

Processing maps for the hot working of AISI 304L stainless steel have been developed on the basis of flow-stress data generated by compression and torsion in the temperature range 600–1200 °C and the strain rate range 0.1–100 s⁻¹. The efficiency of power dissipation, given by 2m/(m+1) where m is the strain rate sensitivity, is plotted as a function of temperature and strain rate to obtain a processing map, which is interpreted on the basis of the Dynamic Materials Model. The maps obtained by compression as well as torsion exhibit a domain of dynamic recrystallization with its peak efficiency occurring at 1200 °C and 0.1 s⁻¹; these optimum hot-working parameters may therefore be obtained by either test technique. The peak efficiency for dynamic recrystallization obtained in torsion (64%) is apparently higher than that obtained in constant-true-strain-rate compression (41%), and the difference is explained on the basis of the strain rate variation occurring across the section of the solid torsion bar. A region of flow instability occurs at lower temperatures (below 1000 °C) and higher strain rates (above 1 s⁻¹) and is wider in torsion than in compression. To achieve complete microstructural control in a component, the state of stress will have to be considered.
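
The two quantities in the abstract are directly computable from a flow-stress grid: the strain rate sensitivity m = d(ln σ)/d(ln ε̇) at fixed temperature and strain, and the dissipation efficiency η = 2m/(m+1). A minimal sketch of building such a map, with synthetic data standing in for measured compression/torsion results:

```python
import numpy as np

# Sketch: processing-map quantities from flow stress sigma(T, strain_rate)
# at a fixed strain.  m = d(ln sigma)/d(ln rate); efficiency eta = 2m/(m+1).
temps = np.array([600.0, 800.0, 1000.0, 1200.0])   # deg C
rates = np.array([0.1, 1.0, 10.0, 100.0])          # s^-1

# Synthetic flow-stress grid (MPa), rows = temperature, cols = strain rate;
# a real map would use measured data.
sigma = 200.0 * np.exp(-temps[:, None] / 600.0) * rates[None, :] ** 0.18

m = np.gradient(np.log(sigma), np.log(rates), axis=1)  # strain rate sensitivity
eta = 2.0 * m / (m + 1.0)                              # dissipation efficiency

for i, T in enumerate(temps):
    for j, r in enumerate(rates):
        print(f"T={T:6.0f} C  rate={r:6.1f}/s  m={m[i, j]:.3f}  eta={eta[i, j]:.1%}")
```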

Relevance: 30.00%

Abstract:

Large external memory bandwidth requirements increase system power dissipation and cost in video coding applications. The majority of external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing power and bandwidth requirements. The low-cost, transformless compression technique uses the lossy reference for motion estimation, to reduce memory traffic, and the lossless reference for motion compensation (MC), to avoid drift; it is thus compatible with all existing video standards. We calculate the quantization error bound and show that, by storing the quantization error separately, the bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to video encoding. Reductions of 24–39% in peak bandwidth and 23–31% in total average power consumption are observed for IBBP sequences.
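
A sketch of the key trick described, a lossy reference for motion estimation plus a separately stored quantization error so that motion compensation stays bit-exact, might look like the following. The uniform scalar quantizer and block layout are assumptions for illustration; the paper's actual transformless codec is not specified in the abstract.

```python
import numpy as np

# Sketch: store a coarsely quantized reference (read during motion estimation)
# plus the quantization error (read only for the final motion compensation).
# With a uniform step Q and midpoint reconstruction, |error| <= Q//2, so the
# error packs into fewer bits than the original samples.
Q = 8  # quantization step (assumed)

def compress_reference(frame):
    lossy = (frame // Q) * Q + Q // 2          # midpoint reconstruction
    error = frame.astype(np.int16) - lossy     # bounded quantization error
    return lossy, error.astype(np.int8)

def motion_estimate(block, lossy_ref):
    """Search the lossy reference only -> reduced memory traffic."""
    ...  # any block-matching search; reads lossy_ref exclusively

def motion_compensate(lossy_ref, error, y, x, h, w):
    """Add back the stored error -> bit-exact (drift-free) reference block."""
    return lossy_ref[y:y+h, x:x+w].astype(np.int16) + error[y:y+h, x:x+w]

frame = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
lossy, err = compress_reference(frame)
block = motion_compensate(lossy, err, 8, 8, 16, 16)
assert np.array_equal(block, frame[8:24, 8:24].astype(np.int16))  # lossless MC
```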

Relevance: 30.00%

Abstract:

Interest in low-bit-rate video coding has increased considerably. Despite rapid progress in storage density and digital communication system performance, demand for data-transmission bandwidth and storage capacity continues to exceed the capabilities of available technologies. The growth of data-intensive digital audio and video applications and the increased use of bandwidth-limited media such as video conferencing and full-motion video have not only sustained the need for efficient ways to encode analog signals, but made signal compression central to digital communication and data-storage technology. In this paper we explore techniques for compressing image sequences in a manner that optimizes the results for the human receiver. We propose a new motion estimator using two novel block-match algorithms that are based on human perception. Simulations with image sequences have shown an improved bit rate, while maintaining image quality, when compared with conventional motion estimation techniques using the MAD block-match criterion.
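
For reference, the conventional baseline the authors compare against, full-search motion estimation under the mean-absolute-difference (MAD) criterion, can be sketched as follows; the perceptual matching criteria themselves are not specified in the abstract, so only the baseline is shown.

```python
import numpy as np

def mad(block_a, block_b):
    """Mean absolute difference between two equal-sized blocks."""
    return np.mean(np.abs(block_a.astype(np.int16) - block_b.astype(np.int16)))

def full_search(cur, ref, by, bx, bsize=16, radius=7):
    """Best (dy, dx) for the block at (by, bx), minimizing MAD over a window."""
    block = cur[by:by+bsize, bx:bx+bsize]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y+bsize <= ref.shape[0] and x+bsize <= ref.shape[1]:
                cost = mad(block, ref[y:y+bsize, x:x+bsize])
                if cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))   # synthetic global motion
print(full_search(cur, ref, 16, 16))       # -> ((-2, 3), 0.0)
```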

Relevance: 30.00%

Abstract:

When the variation of secondary compression with log10 t is non-linear, quantifying secondary settlement through the coefficient of secondary compression, C_αε, becomes difficult, which frequently leads to an underestimate of the settlement. A log10 δ – log10 t representation of such true-compression data has the distinct advantage of exhibiting linear secondary compression behaviour over an appreciably larger time span. The slope of the secondary compression portion of the log10 δ – log10 t curve, expressed as Δ(log δ)/Δ(log t) and called the 'secondary compression factor', m, proves to be a better alternative to C_αε, and the prediction of secondary settlement is improved.
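
The proposed parameter is simply the slope of the late-time, linear portion of a log10(settlement) versus log10(time) plot. A minimal sketch, with synthetic data standing in for consolidation measurements:

```python
import numpy as np

# Sketch: estimate the secondary compression factor m as the slope of
# log10(settlement) vs log10(time) over the secondary portion, then
# extrapolate the settlement.  Data below are synthetic, not from the paper.
t = np.array([1e3, 2e3, 5e3, 1e4, 2e4, 5e4, 1e5])   # time (min)
delta = 12.0 * (t / 1e3) ** 0.05                    # settlement (mm), synthetic

m, intercept = np.polyfit(np.log10(t), np.log10(delta), 1)  # m = slope
print(f"secondary compression factor m = {m:.3f}")

def predict_settlement(t_future):
    """Extrapolate along the fitted log-log line."""
    return 10.0 ** (intercept + m * np.log10(t_future))

print(f"predicted settlement at t = 1e6 min: {predict_settlement(1e6):.2f} mm")
```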

Relevance: 30.00%

Abstract:

The amount of data contained in electroencephalogram (EEG) recordings is quite massive, and this places constraints on bandwidth and storage. Online transmission of the data requires a scheme that allows higher performance with lower computation. Single-channel algorithms, when applied to multichannel EEG data, fail to meet this requirement. While many methods have been proposed for multichannel ECG compression, not much work appears to have been done in the area of multichannel EEG compression. In this paper, we present an EEG compression algorithm based on a multichannel model, which gives higher performance compared to other algorithms. Simulations have been performed on both normal and pathological EEG data, and it is observed that a high compression ratio with very high SNR is obtained in both cases. The reconstructed signals are found to match the original signals very closely, confirming that diagnostic information is preserved during transmission.
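
The two figures of merit quoted, compression ratio and SNR of the reconstruction, are computed as below; the definitions are the standard ones, assumed here since the abstract does not spell them out, and the signal is a synthetic stand-in for an EEG channel.

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR = size before / size after."""
    return original_bits / compressed_bits

def snr_db(original, reconstructed):
    """SNR (dB) of a reconstructed signal relative to the original."""
    noise = original - reconstructed
    return 10.0 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

# Synthetic stand-in for one EEG channel; the paper's codec is not shown here.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.1 * rng.standard_normal(4096)
x_hat = x + 0.01 * rng.standard_normal(4096)   # pretend reconstruction error

print(f"CR  = {compression_ratio(4096 * 16, 4096 * 2):.1f}")  # 16 -> 2 bits/sample
print(f"SNR = {snr_db(x, x_hat):.1f} dB")
```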

Relevance: 30.00%

Abstract:

We propose a scheme for the compression of tree-structured intermediate code consisting of a sequence of trees specified by a regular tree grammar. The scheme is based on arithmetic coding, and the model that works in conjunction with the coder is automatically generated from the syntactic specification of the tree language. Experiments on data sets consisting of intermediate-code trees yield compression ratios ranging from 2.5 to 8 for file sizes ranging from 167 bytes to 1 megabyte.
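
The essence of such a scheme is that the grammar restricts which productions can label each node, so the coder only distinguishes among the few productions admissible at that point, with adaptively learned frequencies. The sketch below reports the ideal adaptive code length (what an arithmetic coder would approach) rather than implementing the coder itself; the grammar and tree are illustrative assumptions, not the paper's.

```python
import math
from collections import defaultdict

# Sketch: grammar-driven model for tree compression.  For each nonterminal,
# only a few productions are admissible; an adaptive count-based model over
# that restricted alphabet gives the per-choice probability an arithmetic
# coder would use.  We sum -log2(p) to get the ideal coded size in bits.
GRAMMAR = {  # nonterminal -> list of (production, child nonterminals); assumed
    "Expr": [("plus", ["Expr", "Expr"]),
             ("times", ["Expr", "Expr"]),
             ("const", []),
             ("var", [])],
}

def code_length(tree, nonterminal, counts):
    """tree = (production, [subtrees]).  Ideal bits to code this subtree."""
    prods = GRAMMAR[nonterminal]
    total = sum(counts[(nonterminal, name)] + 1 for name, _ in prods)  # Laplace
    prod, children = tree
    p = (counts[(nonterminal, prod)] + 1) / total
    counts[(nonterminal, prod)] += 1                 # adaptive update
    bits = -math.log2(p)
    for subtree, child_nt in zip(children, dict(prods)[prod]):
        bits += code_length(subtree, child_nt, counts)
    return bits

tree = ("plus", [("times", [("var", []), ("const", [])]), ("const", [])])
print(f"ideal size: {code_length(tree, 'Expr', defaultdict(int)):.2f} bits")
```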

Relevance: 30.00%

Abstract:

The purpose of this paper is to present exergy charts for carbon dioxide (CO2) based on the new fundamental equation of state, together with the results of a thermodynamic analysis of conventional and trans-critical vapour-compression refrigeration cycles using those data. The calculation scheme is implemented on the Mathematica platform. There exist upper and lower bounds on the high cycle pressure for a given set of evaporating and pre-throttling temperatures. The maximum possible exergetic efficiency for each case was determined. Empirical correlations for exergetic efficiency and COP, valid in the range of temperatures studied here, are obtained, and the exergy losses have been quantified.
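
For orientation, the exergetic efficiency of a refrigeration cycle is conventionally the ratio of its COP to the reversible (Carnot) COP between the same temperatures. A minimal sketch under that standard definition; the temperatures and COP are illustrative, not the paper's CO2 results or correlations.

```python
# Sketch: exergetic efficiency of a vapour-compression refrigeration cycle,
# using the standard definition eta_ex = COP / COP_Carnot.
def carnot_cop(T_evap_K, T_sink_K):
    """Reversible COP of a refrigerator between evaporator and sink temperatures."""
    return T_evap_K / (T_sink_K - T_evap_K)

def exergetic_efficiency(cop, T_evap_K, T_sink_K):
    return cop / carnot_cop(T_evap_K, T_sink_K)

T_evap, T_sink = 268.15, 308.15   # -5 C evaporator, 35 C heat rejection (assumed)
cop = 2.8                         # assumed cycle COP
print(f"COP_Carnot = {carnot_cop(T_evap, T_sink):.2f}")
print(f"eta_ex     = {exergetic_efficiency(cop, T_evap, T_sink):.1%}")
```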

Relevance: 30.00%

Abstract:

We propose the design and implementation of a hardware architecture for a spatial-prediction-based image compression scheme, which consists of a prediction phase and a quantization phase. In the prediction phase, a hierarchical tree structure obtained from the test image is used to predict every central pixel of the image from its four neighboring pixels. The prediction scheme generates an error image, to which a wavelet/sub-band coding algorithm can be applied to obtain efficient compression. The software model is tested for performance in terms of entropy and standard deviation. Memory and silicon-area constraints play a vital role in realizing the hardware for hand-held devices. The hardware architecture constructed for the proposed scheme exploits both instruction-level and data-level parallelism, and the processor consists of pipelined functional units to obtain maximum throughput and a higher speed of operation. The hardware model is analyzed for performance in terms of throughput, speed and power, and the results indicate that the proposed architecture is suitable for power-constrained implementations with higher data rates.
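
The prediction phase described, estimating each interior pixel from its four neighbors and coding the resulting error image, can be sketched in software as below; the plain four-neighbor average and the synthetic input are assumptions standing in for the paper's hierarchical tree scheme.

```python
import numpy as np

# Sketch of the prediction phase: predict each interior pixel from its four
# neighbors (here, their average), form the error image, and report the
# entropy and standard deviation used as figures of merit in the paper.
def predict_error_image(img):
    img = img.astype(np.int16)
    pred = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]) // 4
    return img[1:-1, 1:-1] - pred    # error image (interior pixels only)

def entropy_bits(err):
    _, counts = np.unique(err, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
x = np.linspace(0, 255, 128)
img = (x[None, :] + x[:, None]) / 2 + rng.normal(0, 2, (128, 128))  # smooth image
err = predict_error_image(np.clip(img, 0, 255).astype(np.uint8))
print(f"error entropy: {entropy_bits(err):.2f} bits/pixel, std: {err.std():.2f}")
```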

Relevance: 30.00%

Abstract:

The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information the N sources need to transmit while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each source to compress all the information it possesses and transmit it to the receiver; the Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is only interested in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which uses linearity to directly recover the needed linear combination. The article is part review and in part presents new results: the portion dealing with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that enables function computation forces us to consider the linear compression of a single source. While in the finite-field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided.
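
In the finite-field case, the classic construction (due to Körner and Marton for the binary sum) uses the same linear map at every source: if both sources transmit A·x and A·y over GF(2), the receiver adds the messages to obtain A·(x+y) and only has to decode the sum, which is compressible when x+y is sparse (i.e., the sources are highly correlated). A toy brute-force version, with a deliberately simple map and decoder standing in for real code constructions:

```python
import itertools
import numpy as np

# Toy Korner-Marton-style sketch over GF(2): both sources use the SAME linear
# map A; the receiver XORs the two messages to get A(x+y) and recovers the
# sparse sum z = x XOR y.  A has distinct nonzero columns (binary digits of
# 1..n), so any weight-1 z decodes uniquely.  Brute force replaces a real decoder.
n, k, wmax = 10, 6, 1                  # source length, compressed length, max weight
A = np.array([[(j >> i) & 1 for j in range(1, n + 1)] for i in range(k)])

rng = np.random.default_rng(0)
x = rng.integers(0, 2, n)
z = np.zeros(n, dtype=int); z[4] = 1   # sparse difference: high correlation
y = (x + z) % 2

msg = (A @ x + A @ y) % 2              # receiver adds the compressed messages
assert np.array_equal(msg, (A @ z) % 2)  # linearity: Ax + Ay = A(x+y) over GF(2)

def decode_sparse(syndrome):
    """Lowest-weight z with A z = syndrome (mod 2)."""
    for w in range(wmax + 1):
        for support in itertools.combinations(range(n), w):
            cand = np.zeros(n, dtype=int)
            cand[list(support)] = 1
            if np.array_equal((A @ cand) % 2, syndrome):
                return cand
    return None

print("recovered x XOR y:", decode_sparse(msg))   # -> 1 at index 4
```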

Relevance: 30.00%

Abstract:

Unreinforced masonry (URM) structures in need of repair and rehabilitation constitute a significant portion of the building stock worldwide. The successful application of fiber-reinforced polymers (FRP) for the repair and retrofitting of reinforced-concrete (RC) structures has opened new avenues for strengthening URM structures with FRP materials. The present study analyzes the behavior of FRP-confined masonry prisms under monotonic axial compression. Masonry comprising burnt clay bricks and cement-sand mortar (generally adopted in the Indian subcontinent), having an E_b/E_m ratio less than one, is employed in the study. The parameters considered are: (1) masonry bonding pattern, (2) inclination of the loading axis to the bed joint, (3) type of FRP (carbon FRP or glass FRP), and (4) grade of FRP fabric. The performance of FRP-confined masonry prisms is compared with that of unconfined prisms in terms of compressive strength, modulus of elasticity and stress-strain response. The results show an enhancement in compressive strength, modulus of elasticity, strain at peak stress, and ultimate strain for FRP-confined prisms. FRP confinement also reduced the influence of the inclination of the loading axis to the bed joint on compressive strength and failure pattern. Various analytical models available in the literature for predicting the compressive strength of FRP-confined masonry are assessed, and new coefficients are generated for the analytical model by appending the experimental results of the current study to data available in the literature.
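
Analytical models of the kind assessed in such studies commonly share the generic form f'cc = f'co·(1 + k1·(f_l/f'co)^α), with the lateral confinement pressure f_l supplied by the FRP jacket. The form and every number below are generic textbook assumptions for illustration only, not the coefficients calibrated in the paper.

```python
# Sketch of a generic FRP-confinement strength model (form only; k1 and alpha
# are calibrated per model/data set, as in the paper's regression -- the
# values used here are placeholders).
def lateral_pressure(t_frp, E_frp, eps_frp, D):
    """Confinement pressure from an FRP jacket on a circular-equivalent section.
    t_frp: jacket thickness (mm), E_frp: modulus (MPa),
    eps_frp: effective rupture strain (-), D: section dimension (mm)."""
    return 2.0 * t_frp * E_frp * eps_frp / D

def confined_strength(f_co, f_l, k1=2.0, alpha=1.0):
    """f'cc = f'co * (1 + k1 * (f_l / f'co)**alpha)  -- generic model form."""
    return f_co * (1.0 + k1 * (f_l / f_co) ** alpha)

f_co = 5.0    # unconfined masonry strength (MPa), assumed
f_l = lateral_pressure(t_frp=0.35, E_frp=230_000.0, eps_frp=0.01, D=230.0)
print(f"f_l = {f_l:.2f} MPa, f'cc = {confined_strength(f_co, f_l):.2f} MPa")
```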