971 results for Data Compression


Relevance:

30.00%

Publisher:

Abstract:

The interest in low bit rate video coding has increased considerably. Despite rapid progress in storage density and digital communication system performance, demand for data-transmission bandwidth and storage capacity continues to exceed the capabilities of available technologies. The growth of data-intensive digital audio and video applications, and the increased use of bandwidth-limited media such as video conferencing and full motion video, have not only sustained the need for efficient ways to encode analog signals, but made signal compression central to digital communication and data-storage technology. In this paper we explore techniques for compression of image sequences in a manner that optimizes the results for the human receiver. We propose a new motion estimator using two novel block match algorithms based on human perception. Simulations with image sequences have shown an improved bit rate while maintaining image quality when compared to conventional motion estimation techniques using the MAD block match criterion.
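As a point of reference for the baseline the abstract compares against, here is a minimal sketch of conventional full-search block matching with the MAD (mean absolute difference) criterion; it is not the authors' perception-based matcher, and the block size and search radius are illustrative.

```python
import numpy as np

def mad(block_a, block_b):
    # Mean absolute difference between two equally sized blocks.
    return np.mean(np.abs(block_a.astype(float) - block_b.astype(float)))

def full_search_mad(ref, cur, y, x, block=16, radius=7):
    """Motion vector for the block of `cur` at (y, x), found by exhaustively
    searching `ref` within +/- radius pixels and minimising MAD."""
    target = cur[y:y + block, x:x + block]
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                continue
            cost = mad(ref[yy:yy + block, xx:xx + block], target)
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```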

Relevance:

30.00%

Publisher:

Abstract:

When the variation of secondary compression with log₁₀ t is non-linear, quantifying secondary settlement through the coefficient of secondary compression, Cαε, becomes difficult, which frequently leads to an underestimate of the settlement. A log₁₀ δ versus log₁₀ t representation of such true-compression data has the distinct advantage of exhibiting linear secondary compression behaviour over an appreciably larger time span. The slope of the secondary compression portion of the log₁₀ e versus log₁₀ t curve, expressed as Δ(log e)/Δ(log t) and called the 'secondary compression factor', m, proves to be a better alternative to Cαε, and the prediction of secondary settlement is improved.
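As a hedged illustration of the log-log representation described above, the sketch below fits the slope of log10(settlement) against log10(time) over the secondary-compression range and uses it to extrapolate a later settlement; the readings are invented for the example and are not data from the paper.

```python
import numpy as np

# Hypothetical settlement readings (mm) at elapsed times (min) within the
# secondary-compression range; real values would come from an oedometer test.
t = np.array([1000.0, 2000.0, 5000.0, 10000.0, 20000.0])
delta = np.array([12.1, 13.0, 14.4, 15.6, 16.9])

# Secondary compression factor: slope of log10(delta) versus log10(t).
m, intercept = np.polyfit(np.log10(t), np.log10(delta), 1)
print(f"secondary compression factor m = {m:.3f}")

# Settlement extrapolated to a later time using the fitted straight line.
t_f = 50000.0
delta_f = 10 ** (intercept + m * np.log10(t_f))
print(f"predicted settlement at t = {t_f:.0f} min: {delta_f:.1f} mm")
```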

Relevance:

30.00%

Publisher:

Abstract:

The amount of data contained in electroencephalogram (EEG) recordings is very large, which places constraints on bandwidth and storage. Online transmission of the data calls for a scheme that achieves high performance with low computational cost. Single-channel algorithms, when applied to multichannel EEG data, fail to meet this requirement. While many methods have been proposed for multichannel ECG compression, not much work appears to have been done in the area of multichannel EEG compression. In this paper, we present an EEG compression algorithm based on a multichannel model, which gives higher performance compared to other algorithms. Simulations have been performed on both normal and pathological EEG data, and a high compression ratio with very high SNR is obtained in both cases. The reconstructed signals match the original signals very closely, confirming that diagnostic information is preserved during transmission.
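The abstract reports results as a compression ratio and an SNR; the sketch below shows how these two figures of merit are commonly computed for a reconstructed multichannel record. The arrays are placeholders and the reconstruction is simulated, not produced by the authors' multichannel model.

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    # Ratio of raw record size to compressed size.
    return original_bits / compressed_bits

def snr_db(original, reconstructed):
    # Signal-to-noise ratio of the reconstruction, in decibels.
    noise = original - reconstructed
    return 10 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

# Placeholder multichannel EEG: 8 channels x 1024 samples at 16 bits/sample.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1024))
recon = eeg + 0.01 * rng.standard_normal(eeg.shape)  # stand-in reconstruction

print(compression_ratio(8 * 1024 * 16, 20000))  # e.g. ~6.6:1
print(snr_db(eeg, recon))                        # ~40 dB for this noise level
```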

Relevance:

30.00%

Publisher:

Abstract:

We propose a scheme for the compression of tree-structured intermediate code consisting of a sequence of trees specified by a regular tree grammar. The scheme is based on arithmetic coding, and the model that works in conjunction with the coder is generated automatically from the syntactic specification of the tree language. Experiments on data sets consisting of intermediate-code trees yield compression ratios ranging from 2.5 to 8 for file sizes ranging from 167 bytes to 1 megabyte.
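A toy illustration, not the paper's coder, of why a grammar-derived model helps: if the regular tree grammar restricts which operators may appear as the child of a given parent, the coder only needs to distinguish among the legal choices, so the ideal code length per symbol is log2 of the number of legal alternatives rather than of the full alphabet. The production table, operator names, and alphabet size below are invented.

```python
import math

# Hypothetical production table: operators the grammar allows as children
# of each parent operator.
ALLOWED = {
    "ROOT":   ["ASSIGN", "IF", "CALL"],
    "ASSIGN": ["VAR", "ADD", "CONST"],
    "ADD":    ["VAR", "CONST"],
}
ALPHABET_SIZE = 64  # size of the full operator alphabet with no grammar model

def ideal_bits(parent, child):
    """Ideal code length for `child` given `parent`, under a uniform model over
    the grammar-legal children, versus a grammar-free uniform model."""
    assert child in ALLOWED[parent]
    return math.log2(len(ALLOWED[parent])), math.log2(ALPHABET_SIZE)

print(ideal_bits("ASSIGN", "ADD"))  # ~1.58 bits with the grammar vs 6 bits without
```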

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper is to present exergy charts for carbon dioxide (CO2) based on the new fundamental equation of state, together with the results of a thermodynamic analysis of conventional and trans-critical vapour compression refrigeration cycles that uses these data. The calculation scheme is implemented on the Mathematica platform. There exist upper and lower bounds on the high cycle pressure for a given set of evaporating and pre-throttling temperatures. The maximum possible exergetic efficiency for each case was determined. Empirical correlations for exergetic efficiency and COP, valid in the range of temperatures studied here, are obtained. The exergy losses have been quantified. (C) 2003 Elsevier Ltd. All rights reserved.
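A hedged numerical sketch of the two figures of merit the paper correlates, using one common definition of exergetic (second-law) efficiency as COP divided by the reversible COP between the same temperature levels; the enthalpies and temperatures are placeholders, not values taken from the CO2 equation of state used by the authors.

```python
# Trans-critical CO2 cycle figures of merit from placeholder state points.
# h1: evaporator exit, h2: compressor exit, h3: gas-cooler exit; after the
# isenthalpic throttle h4 = h3. Values in kJ/kg are illustrative only.
h1, h2, h3 = 435.0, 480.0, 300.0
h4 = h3

T_evap = 273.15 - 5.0   # K, evaporating temperature (assumed)
T_amb  = 273.15 + 30.0  # K, reference/ambient temperature (assumed)

cop = (h1 - h4) / (h2 - h1)          # refrigeration COP
cop_rev = T_evap / (T_amb - T_evap)  # reversible COP between the same levels
exergetic_efficiency = cop / cop_rev # one common second-law efficiency

print(f"COP = {cop:.2f}, exergetic efficiency = {exergetic_efficiency:.2f}")
```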

Relevance:

30.00%

Publisher:

Abstract:

We propose the design and implementation of a hardware architecture for a spatial-prediction-based image compression scheme, which consists of a prediction phase and a quantization phase. In the prediction phase, the hierarchical tree structure obtained from the test image is used to predict every central pixel of an image from its four neighboring pixels. The prediction scheme generates an error image, to which a wavelet/sub-band coding algorithm can be applied to obtain efficient compression. The software model is evaluated in terms of entropy and standard deviation. Memory and silicon-area constraints play a vital role in realizing the hardware for hand-held devices. The hardware architecture constructed for the proposed scheme exploits both instruction-level and data-level parallelism. The processor consists of pipelined functional units to achieve maximum throughput and a higher speed of operation. The hardware model is analyzed in terms of throughput, speed, and power. The results indicate that the proposed architecture is suitable for power-constrained implementations with higher data rates.
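A hedged software sketch of the prediction phase as described: each interior pixel is predicted from its four neighbours (a plain average is used here, since the paper's hierarchical-tree weighting is not spelled out in the abstract) and the residual forms the error image that is handed to the wavelet/sub-band coder.

```python
import numpy as np

def four_neighbour_error_image(img):
    """Predict each interior pixel from its N, S, E, W neighbours and return
    the prediction-error image (border pixels are left unpredicted)."""
    img = img.astype(np.int32)
    pred = img.copy()
    pred[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:]) // 4
    return img - pred

# In smooth regions the error image is near zero, so its entropy (and hence the
# coded bit rate) is lower than that of the original image.
```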

Relevance:

30.00%

Publisher:

Abstract:

The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information that the N sources need to transmit while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each of the sources to compress all the information it possesses and transmit this to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is interested only in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then uses linearity to recover the needed linear combination directly. The article is part review and in part presents new results. The portion of the paper that deals with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that will enable function computation forces us to consider the linear compression of a single source. While in the finite field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
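A small sketch over a prime field of the alternative described above: every source applies the same compressing linear map A and the destination exploits linearity to recover the desired combination from the compressed messages alone. The prime, dimensions, and coefficients are illustrative; actually decoding the combination from its image under A additionally requires A to be chosen, Slepian-Wolf style, so that the typical combinations remain distinguishable.

```python
import numpy as np

p = 7                       # prime field GF(p), illustrative
rng = np.random.default_rng(1)

n, k = 12, 6                # source block length and compressed length (k < n)
A = rng.integers(0, p, size=(k, n))            # common linear compressor
sources = [rng.integers(0, p, size=n) for _ in range(3)]
coeffs = [2, 3, 5]          # linear combination the destination wants

# Each source transmits only its compressed block A @ x_i (mod p).
messages = [(A @ x) % p for x in sources]

# The destination combines the compressed messages ...
combined = sum(c * m for c, m in zip(coeffs, messages)) % p
# ... which equals A applied to the desired combination of the sources.
target = sum(c * x for c, x in zip(coeffs, sources)) % p
assert np.array_equal(combined, (A @ target) % p)
```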

Relevance:

30.00%

Publisher:

Abstract:

Unreinforced masonry (URM) structures that are in need of repair and rehabilitation constitute a significant portion of the building stock worldwide. The successful application of fiber-reinforced polymers (FRP) for repair and retrofitting of reinforced-concrete (RC) structures has opened new avenues for strengthening URM structures with FRP materials. The present study analyzes the behavior of FRP-confined masonry prisms under monotonic axial compression. Masonry comprising burnt clay bricks and cement-sand mortar (generally adopted in the Indian subcontinent), with a brick-to-mortar modulus ratio (Eb/Em) less than one, is employed in the study. The parameters considered are: (1) masonry bonding pattern, (2) inclination of the loading axis to the bed joint, (3) type of FRP (carbon FRP or glass FRP), and (4) grade of FRP fabric. The performance of FRP-confined masonry prisms is compared with unconfined masonry prisms in terms of compressive strength, modulus of elasticity, and stress-strain response. The results show an enhancement in compressive strength, modulus of elasticity, strain at peak stress, and ultimate strain for FRP-confined masonry prisms. FRP confinement of masonry reduces the influence of the inclination of the loading axis to the bed joint on the compressive strength and failure pattern. Various analytical models available in the literature for the prediction of compressive strength of FRP-confined masonry are assessed. New coefficients are generated for the analytical model by appending the experimental results of the current study to data available in the literature. (C) 2014 American Society of Civil Engineers.

Relevance:

30.00%

Publisher:

Abstract:

The strength of materials at extreme pressures (>1 Mbar, or 100 GPa) and high strain rates (10⁶–10⁸ s⁻¹) is not well characterized. The goal of the research outlined in this thesis is to study the strength of tantalum (Ta) under these conditions. The Omega Laser at the Laboratory for Laser Energetics in Rochester, New York is used to create such extreme conditions. Targets are designed with ripples or waves on the surface, and these samples are subjected to high pressures using Omega's high energy laser beams. In these experiments, the observational parameter is the Richtmyer-Meshkov (RM) instability in the form of ripple growth on single-mode ripples. The experimental platform is the "ride-along" laser compression recovery experiment, which provides a way to recover specimens after they have been subjected to high pressures. Six different experiments are performed on the Omega laser using single-mode tantalum targets at different laser energies, where the energy is the amount of laser energy that impinges on the target. For each target, the growth factor is obtained by comparing the profile of the ripples before and after the experiment. With increasing energy, the growth factor increased.
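A hedged sketch of how a growth factor for a single-mode ripple might be extracted from surface profiles measured before and after a shot: the ripple amplitude at the imposed wavenumber is estimated by Fourier projection and the growth factor is the amplitude ratio. The profiles, grid, and wavelength below are placeholders, not measured data.

```python
import numpy as np

def mode_amplitude(profile, x, wavelength):
    # Amplitude of the single-mode component at the imposed wavelength,
    # estimated by projecting the profile onto sine and cosine at that wavenumber.
    k = 2 * np.pi / wavelength
    a = 2 * np.mean(profile * np.cos(k * x))
    b = 2 * np.mean(profile * np.sin(k * x))
    return np.hypot(a, b)

# Placeholder pre- and post-shot surface profiles (microns) on grid x (microns).
x = np.linspace(0.0, 500.0, 1000)
pre = 1.0 * np.sin(2 * np.pi * x / 50.0)
post = 3.2 * np.sin(2 * np.pi * x / 50.0)

growth_factor = mode_amplitude(post, x, 50.0) / mode_amplitude(pre, x, 50.0)
print(f"growth factor = {growth_factor:.2f}")
```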

Engineering simulations are used to interpret and correlate the measurements of growth factor to a measure of strength. In order to validate the engineering constitutive model for tantalum, a series of simulations are performed using the code Eureka, based on the Optimal Transportation Meshfree (OTM) method. Two different configurations are studied in the simulations: RM instabilities in single and multimode ripples. Six different simulations are performed for the single ripple configuration of the RM instability experiment, with drives corresponding to laser energies used in the experiments. Each successive simulation is performed at higher drive energy, and it is observed that with increasing energy, the growth factor increases. Overall, there is favorable agreement between the data from the simulations and the experiments. The peak growth factors from the simulations and the experiments are within 10% agreement. For the multimode simulations, the goal is to assist in the design of the laser driven experiments using the Omega laser. A series of three-mode and four-mode patterns are simulated at various energies and the resulting growth of the RM instability is computed. Based on the results of the simulations, a configuration is selected for the multimode experiments. These simulations also serve as validation for the constitutive model and the material parameters for tantalum that are used in the simulations.

By designing samples with initial perturbations in the form of single-mode and multimode ripples and subjecting these samples to high pressures, the Richtmyer-Meshkov instability is investigated in both laser compression experiments and simulations. By correlating the growth of these ripples to measures of strength, a better understanding of the strength of tantalum at high pressures is achieved.

Relevance:

30.00%

Publisher:

Abstract:

Ultrasound elastography tracks tissue displacements under small levels of compression to obtain images of strain, a mechanical property useful in the detection and characterization of pathology. Due to the nature of ultrasound beamforming, only tissue displacements in the direction of beam propagation, referred to as 'axial', are measured to high quality, although an ability to measure other components of tissue displacement is desired to more fully characterize the mechanical behavior of tissue. Previous studies have used multiple one-dimensional (1D) angled axial displacements tracked from steered ultrasound beams to reconstruct improved quality trans-axial displacements within the scan plane ('lateral'). We show that two-dimensional (2D) displacement tracking is not possible with unmodified electronically-steered ultrasound data, and present a method of reshaping frames of steered ultrasound data to retain axial-lateral orthogonality, which permits 2D displacement tracking. Simulated and experimental ultrasound data are used to compare changes in image quality of lateral displacements reconstructed using 1D and 2D tracked steered axial and steered lateral data. Reconstructed lateral displacement image quality generally improves with the use of 2D displacement tracking at each steering angle, relative to axial tracking alone, particularly at high levels of compression. Due to the influence of tracking noise, unsteered lateral displacements exhibit greater accuracy than axial-based reconstructions at high levels of applied strain. © 2011 SPIE.
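A hedged sketch of generic 2-D displacement tracking between pre- and post-compression frames by exhaustive window matching with normalised cross-correlation; this is not the authors' beam-steering reconstruction, and the window and search sizes are illustrative (axial windows are longer than lateral ones, reflecting the finer axial sampling).

```python
import numpy as np

def ncc(a, b):
    # Normalised cross-correlation of two equally sized windows.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def track_window(pre, post, y, x, win=(32, 8), search=(8, 4)):
    """Axial/lateral displacement (in samples) of the window of `pre` at (y, x),
    found by maximising NCC over a +/- search range in `post`."""
    h, w = win
    ref = pre[y:y + h, x:x + w]
    best_score, best_d = -np.inf, (0, 0)
    for dy in range(-search[0], search[0] + 1):
        for dx in range(-search[1], search[1] + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > post.shape[0] or xx + w > post.shape[1]:
                continue
            score = ncc(ref, post[yy:yy + h, xx:xx + w])
            if score > best_score:
                best_score, best_d = score, (dy, dx)
    return best_d
```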

Relevance:

30.00%

Publisher:

Abstract:

A direct numerical simulation of the shock/turbulent boundary layer interaction flow in a supersonic 24-degree compression ramp is conducted at a free-stream Mach number of 2.9. A blowing-and-suction disturbance at the upstream wall boundary is used to trigger transition. Both the mean wall pressure and the velocity profiles agree with the experimental data, which validates the simulation. The turbulent kinetic energy budget in the separation region is analyzed. Results show that the turbulent production term increases rapidly in the separation region, while the turbulent dissipation term reaches its peak in the near-wall region. The turbulent transport term contributes to the balance between turbulent conduction and turbulent dissipation. Based on an analysis of the instantaneous pressure downstream of the mean shock and in the separation bubble, the authors suggest that the low-frequency oscillation of the shock is caused not by the upstream turbulent disturbance but by the instability of the separation bubble.

Relevance:

30.00%

Publisher:

Abstract:

The need to cluster unknown data in order to better understand its relationship to known data is prevalent throughout science. Besides providing a better understanding of the data itself or of a new unknown object, cluster analysis can help with processing data, data standardization, and outlier detection. Most clustering algorithms are based on known features or expectations, such as the popular partition-based, hierarchical, density-based, grid-based, and model-based algorithms. The choice of algorithm depends on many factors, including the type of data and the reason for clustering, and nearly all rely on some known properties of the data being analyzed. Recently, Li et al. proposed a new universal similarity metric that requires no prior knowledge about the objects. Their similarity metric is based on the Kolmogorov complexity of an object, i.e., its minimal description. While the Kolmogorov complexity of an object is not computable, in "Clustering by Compression," Cilibrasi and Vitanyi use common compression algorithms to approximate the universal similarity metric and cluster objects with high success. Unfortunately, clustering using compression does not trivially extend to higher dimensions. Here we outline a method to adapt their procedure to images. We test these techniques on images of letters of the alphabet.
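The compression-based distance Cilibrasi and Vitanyi use is the normalised compression distance (NCD); the sketch below approximates it with zlib on byte strings. For images, the pixels must first be serialised to bytes in a way that preserves 2-D locality, which is precisely the nontrivial adaptation the paragraph refers to.

```python
import zlib

def c(data: bytes) -> int:
    # Compressed length as a computable stand-in for Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalised compression distance: near 0 for very similar objects,
    close to 1 for unrelated ones."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

s1 = b"the quick brown fox jumps over the lazy dog" * 20
s2 = b"the quick brown fox leaps over the lazy cat" * 20
s3 = bytes(range(256)) * 4
print(ncd(s1, s2), ncd(s1, s3))  # the related pair scores lower than the unrelated pair
```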

Relevance:

30.00%

Publisher:

Abstract:

Wavelets introduce new classes of basis functions for time-frequency signal analysis and have properties particularly suited to the transient components and discontinuities evident in power system disturbances. Wavelet analysis involves representing signals in terms of simpler, fixed building blocks at different scales and positions. This paper examines the analysis and subsequent compression properties of the discrete wavelet and wavelet packet transforms and evaluates both transforms using an actual power system disturbance from a digital fault recorder. The paper presents comparative compression results using the wavelet and discrete cosine transforms and examines the application of wavelet compression in power monitoring to mitigate against data communications overheads.
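A hedged sketch of the kind of wavelet compression the paper evaluates: decompose a disturbance record, keep only the largest detail coefficients, and reconstruct. The wavelet, decomposition level, and retention fraction are illustrative, and PyWavelets stands in for whatever implementation the authors used.

```python
import numpy as np
import pywt

def wavelet_compress(signal, wavelet="db4", level=5, keep=0.05):
    """Zero all but (roughly) the largest `keep` fraction of coefficients and
    return the reconstructed signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate(coeffs)
    thresh = np.quantile(np.abs(flat), 1.0 - keep)
    kept = [coeffs[0]] + [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs[1:]]
    return pywt.waverec(kept, wavelet)

# Synthetic disturbance: 50 Hz fundamental plus a decaying high-frequency transient.
t = np.linspace(0.0, 0.2, 4000)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.exp(-t / 0.01) * np.sin(2 * np.pi * 900 * t)
x_hat = wavelet_compress(x)[: len(x)]
print("normalised MSE:", np.sum((x - x_hat) ** 2) / np.sum(x ** 2))
```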

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the compression of multispectral images is addressed. Such 3-D data are characterized by a high correlation across the spectral components. The efficiency of the state-of-the-art wavelet-based coder 3-D SPIHT is considered. Although the 3-D SPIHT algorithm provides the obvious way to process a multispectral image as a volumetric block and, consequently, maintains the attractive properties exhibited in 2-D (excellent performance, low complexity, and embeddedness of the bit-stream), its 3-D tree structure is shown to be poorly suited to 3-D wavelet-transformed (DWT) multispectral images. The fact that each parent has eight children in the 3-D structure considerably enlarges the list of insignificant sets (LIS) and the list of insignificant pixels (LIP), since the partitioning of any set produces eight subsets that are processed similarly during the sorting pass. Thus, a significant portion of the overall bit budget is wasted sorting insignificant information. Through an analysis of the results, we demonstrate that a straightforward 2-D SPIHT technique, when suitably adjusted to maintain rate scalability and carried out in the 3-D DWT domain, overcomes this weakness. In addition, a new SPIHT-based scalable multispectral image compression algorithm is used in the initial iterations to exploit the redundancies within each group of two consecutive spectral bands. Numerical experiments on a number of multispectral images show that the proposed scheme provides significant improvements over related works.
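A back-of-the-envelope illustration of the list-growth argument above: partitioning a set in the 3-D tree yields eight subsets against four in the 2-D tree, so the number of entries a sorting pass may have to examine grows much faster with tree depth. The numbers are purely illustrative.

```python
def entries_after_partitioning(children, depth):
    # Entries generated by repeatedly partitioning one set `depth` times in a
    # tree where every parent has `children` children.
    total, frontier = 0, 1
    for _ in range(depth):
        frontier *= children
        total += frontier
    return total

for depth in (1, 2, 3, 4):
    print(depth, entries_after_partitioning(4, depth), entries_after_partitioning(8, depth))
# At depth 4: 340 entries for the four-child (2-D) tree vs 4680 for the eight-child (3-D) tree.
```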

Relevance:

30.00%

Publisher:

Abstract:

Wavelet transforms provide basis functions for time-frequency analysis and have properties that are particularly useful for the compression of analogue point-on-wave transient and disturbance power system signals. This paper evaluates the compression properties of the discrete wavelet transform using actual power system data. The results presented in the paper indicate that reduction ratios of up to 10:1 with acceptable distortion are achievable. The paper discusses the application of the reduction method for expedient fault analysis and protection assessment.
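A brief hedged sketch of how the reduction ratio and distortion quoted above are commonly computed once a coefficient-retention scheme such as the one sketched after the previous abstract has been applied; the names and values are illustrative.

```python
import numpy as np

def reduction_ratio(n_samples, n_retained_coeffs):
    # e.g. 4000 samples represented by 400 retained coefficients gives 10:1.
    return n_samples / n_retained_coeffs

def percent_distortion(original, reconstructed):
    # Normalised mean-square error expressed as a percentage.
    err = original - reconstructed
    return 100.0 * np.sum(err ** 2) / np.sum(original ** 2)

print(reduction_ratio(4000, 400))  # 10.0, i.e. a 10:1 reduction ratio
```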