900 results for "Forces de compression"
Abstract:
The interest in low bit rate video coding has increased considerably. Despite rapid progress in storage density and digital communication system performance, demand for data-transmission bandwidth and storage capacity continues to exceed the capabilities of available technologies. The growth of data-intensive digital audio and video applications and the increased use of bandwidth-limited media such as video conferencing and full-motion video have not only sustained the need for efficient ways to encode analog signals, but made signal compression central to digital communication and data-storage technology. In this paper we explore techniques for compression of image sequences in a manner that optimizes the results for the human receiver. We propose a new motion estimator using two novel block-match algorithms which are based on human perception. Simulations with image sequences have shown an improved bit rate while maintaining "image quality" when compared to conventional motion estimation techniques using the MAD block-match criterion.
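The conventional baseline the abstract compares against, block matching under the MAD (mean absolute difference) criterion, can be sketched as follows. Frames are plain 2-D lists of grey levels; the exhaustive-search window and all names are illustrative, not taken from the paper.

```python
# Minimal sketch of conventional block-matching motion estimation using the
# MAD (mean absolute difference) criterion. All names are illustrative.

def mad(block_a, block_b):
    """Mean absolute difference between two equally sized blocks."""
    n = len(block_a) * len(block_a[0])
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb)) / n

def extract(frame, top, left, size):
    """Cut a size x size block out of a frame."""
    return [row[left:left + size] for row in frame[top:top + size]]

def best_match(ref_frame, cur_block, top, left, size, search=2):
    """Exhaustive search in a +/-`search` window; returns the (dy, dx)
    displacement that minimizes the MAD criterion."""
    best = (float("inf"), (0, 0))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if (0 <= t and 0 <= l and
                    t + size <= len(ref_frame) and
                    l + size <= len(ref_frame[0])):
                cand = extract(ref_frame, t, l, size)
                best = min(best, (mad(cur_block, cand), (dy, dx)))
    return best[1]
```

A perception-based estimator such as the one proposed here would replace the `mad` cost with a perceptually weighted one while keeping the same search loop.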
Abstract:
When the variation of secondary compression with log10(t) is non-linear, quantifying secondary settlement through the coefficient of secondary compression, C_αε, becomes difficult, which frequently leads to an underestimate of the settlement. A log10(δ) - log10(t) representation of such true-compression data has the distinct advantage of exhibiting linear secondary compression behaviour over an appreciably larger time span. The slope of the secondary compression portion of the log10(e) - log10(t) curve, expressed as Δ(log e)/Δ(log t) and called the 'secondary compression factor', m, proves to be a better alternative to C_αε, and the prediction of secondary settlement is improved.
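The secondary compression factor m defined above is just the slope between two points on the linear span of the log10(e) vs. log10(t) curve. A minimal illustration, with made-up (t, e) pairs rather than data from the paper:

```python
# Illustrative computation of the 'secondary compression factor' m:
# the slope Delta(log10 e) / Delta(log10 t) over the linear portion
# of the log-log curve. The (t, e) values below are invented.
import math

def secondary_compression_factor(t1, e1, t2, e2):
    """m = Delta(log10 e) / Delta(log10 t) between two points
    on the linear secondary-compression span."""
    return ((math.log10(e2) - math.log10(e1)) /
            (math.log10(t2) - math.log10(t1)))

# Example: void ratio drops from 0.90 at t = 100 min to 0.80 at t = 10000 min.
m = secondary_compression_factor(100.0, 0.90, 10000.0, 0.80)
```

The slope is negative because the void ratio decreases with time; a fitted regression over many points would be used in practice rather than two endpoints.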
Abstract:
We propose a physical mechanism for the triggering of starbursts in interacting spiral galaxies by shock compression of the pre-existing disk giant molecular clouds (GMCs). We show that as a disk GMC tumbles into the central region of a galaxy following a galactic tidal encounter, it undergoes a radiative shock compression by the pre-existing high pressure of the central molecular intercloud medium. The shocked outer shell of a GMC becomes gravitationally unstable, which results in a burst of star formation in the initially stable GMC. In the case of colliding galaxies with physical overlap such as Arp 244, the cloud compression is shown to occur due to the hot, high-pressure remnant gas resulting from the collisions of atomic hydrogen gas clouds from the two galaxies. The resulting values of infrared luminosity agree with observations. The main mode of triggered star formation is via clusters of stars, thus we can naturally explain the formation of young, luminous star clusters observed in starburst galaxies.
Abstract:
The moisture absorption of, and changes in the compression strength of, glass-epoxy (G-E) composites without and with discrete quantities of graphite powder introduced into the resin mix prior to its spreading on specific glass fabric layers during the lay-up (stacking) sequence form the subject matter of this report. The results point to higher moisture absorption for graphite-bearing specimens. The strengths of graphite-free coupons show a continuous decrease, while the filler-bearing ones show an initial rise followed by a drop at larger exposure times. Fractographic features were examined by scanning microscopy for an understanding of the process. The observations are explained by invoking the effect of matrix plasticizing and the role of interfacial regions.
Abstract:
Two methods based on wavelet/wavelet-packet expansion to denoise and compress optical tomography data containing noise from scattering are presented. In the first, the wavelet expansion coefficients of the noisy data are shrunk using a soft threshold. In the second, the data are expanded into a wavelet packet tree upon which a best-basis search is done, and the resulting coefficients are truncated on the basis of energy content. The first method results in efficient denoising of experimental data when the scattering particle density in the medium surrounding the object is up to 12.0 × 10^6 per cm^3, and it achieves a compression ratio of approximately 8:1. The wavelet-packet based method results in compression of up to 11:1 and also exhibits reasonable noise-reduction capability. Tomographic reconstructions obtained from denoised data are presented. (C) 1999 Published by Elsevier Science B.V. All rights reserved.
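The first method's soft-threshold shrinkage can be sketched in a few lines. A single-level Haar transform stands in here for whatever wavelet basis the paper actually uses, and the threshold value is illustrative:

```python
# Sketch of soft-threshold wavelet shrinkage: transform, shrink the detail
# coefficients toward zero, transform back. A one-level Haar transform
# stands in for the paper's wavelet expansion.
import math

def haar_fwd(x):
    """One-level orthonormal Haar transform of an even-length sequence."""
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / s for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def haar_inv(approx, detail):
    """Inverse of haar_fwd."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def soft(c, t):
    """Soft threshold: shrink magnitude by t, zero anything below t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def denoise(x, t):
    approx, detail = haar_fwd(x)
    return haar_inv(approx, [soft(d, t) for d in detail])
```

Coefficients whose magnitude falls below the threshold become exactly zero, which is also what makes the shrunken expansion compressible.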
Abstract:
The amount of data contained in electroencephalogram (EEG) recordings is quite massive, and this places constraints on bandwidth and storage. The requirement of online transmission of data needs a scheme that allows higher performance with lower computation. Single-channel algorithms, when applied to multichannel EEG data, fail to meet this requirement. While there have been many methods proposed for multichannel ECG compression, not much work appears to have been done in the area of multichannel EEG compression. In this paper, we present an EEG compression algorithm based on a multichannel model, which gives higher performance compared to other algorithms. Simulations have been performed on both normal and pathological EEG data, and it is observed that a high compression ratio with very high SNR is obtained in both cases. The reconstructed signals are found to match the original signals very closely, thus confirming that diagnostic information is being preserved during transmission.
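The two figures of merit quoted above, compression ratio and SNR of the reconstructed signal, are standard and can be computed as follows; the toy signals below are not EEG data from the paper:

```python
# Standard evaluation metrics for a lossy compression scheme:
# compression ratio (original size / compressed size) and the SNR, in dB,
# of the reconstruction against the original. Toy values only.
import math

def compression_ratio(original_bits, compressed_bits):
    """How many times smaller the compressed representation is."""
    return original_bits / compressed_bits

def snr_db(original, reconstructed):
    """SNR in dB: 10*log10(signal energy / reconstruction-error energy)."""
    signal = sum(x * x for x in original)
    noise = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    return 10.0 * math.log10(signal / noise)
```

A claim like "high compression ratio with very high SNR" amounts to both numbers being large at once, which is the trade-off the multichannel model is said to improve.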
Abstract:
Effects of strain rate (10^-4 to 10^-2 s^-1) on the tensile and compressive strength of an Al-Si alloy and an Al-Si/graphite composite are investigated. The strain-hardening exponent of the composite was higher than that of the alloy at all strain rates under both tensile and compressive loading. The yield stress of the composite exceeded the ultimate tensile strength of the alloy at all strain rates. Tensile and compressive properties of the alloy and composite depend on strain rate. Negative strain-rate sensitivity was observed for the composite and the alloy at lower strain rates during compression and tension loading, respectively. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
We propose a scheme for the compression of tree-structured intermediate code consisting of a sequence of trees specified by a regular tree grammar. The scheme is based on arithmetic coding, and the model that works in conjunction with the coder is automatically generated from the syntactical specification of the tree language. Experiments on data sets consisting of intermediate-code trees yield compression ratios ranging from 2.5 to 8, for file sizes ranging from 167 bytes to 1 megabyte.
Abstract:
The purpose of this paper is to present exergy charts for carbon dioxide (CO2) based on the new fundamental equation of state, together with the results of a thermodynamic analysis of conventional and trans-critical vapour-compression refrigeration cycles using these data. The calculation scheme is implemented on the Mathematica platform. Upper and lower bounds exist for the high cycle pressure for a given set of evaporating and pre-throttling temperatures. The maximum possible exergetic efficiency for each case was determined. Empirical correlations for exergetic efficiency and COP, valid over the range of temperatures studied here, are obtained. The exergy losses have been quantified. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
The problem of guessing a random string is revisited. The relationship between guessing without distortion and compression is extended to the case when the source alphabet is countably infinite. Further, a similar relationship is established for the case when distortion is allowed, by establishing a tight relationship between rate-distortion codes and guessing strategies.
Abstract:
We propose the design and implementation of a hardware architecture for a spatial-prediction based image compression scheme, which consists of a prediction phase and a quantization phase. In the prediction phase, the hierarchical tree structure obtained from the test image is used to predict every central pixel of an image from its four neighboring pixels. The prediction scheme generates an error image, to which a wavelet/sub-band coding algorithm can be applied to obtain efficient compression. The software model is tested for its performance in terms of entropy and standard deviation. Memory and silicon-area constraints play a vital role in realizing the hardware for hand-held devices. The hardware architecture constructed for the proposed scheme exploits parallelism in both instructions and data. The processor consists of pipelined functional units to obtain maximum throughput and higher speed of operation. The hardware model is analyzed for performance in terms of throughput, speed and power. The results of the hardware model indicate that the proposed architecture is suitable for power-constrained implementations with higher data rates.
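The prediction phase described above (each central pixel predicted from its four neighbours, leaving an error image) can be sketched as follows. The simple averaging predictor here is an assumption for illustration; the paper derives its predictor from a hierarchical tree structure.

```python
# Sketch of the spatial prediction step: each interior pixel is predicted
# from its four neighbours (here, their mean, which is an assumption) and
# replaced by the prediction error. Border pixels are left unpredicted.

def prediction_error(img):
    """Return the error image: pixel minus its four-neighbour prediction."""
    h, w = len(img), len(img[0])
    err = [row[:] for row in img]          # copy; borders stay as-is
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            pred = (img[r - 1][c] + img[r + 1][c] +
                    img[r][c - 1] + img[r][c + 1]) / 4.0
            err[r][c] = img[r][c] - pred
    return err
```

On smooth image regions the errors cluster near zero, which lowers the entropy and is what makes the subsequent wavelet/sub-band coding of the error image efficient.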
Abstract:
A comparative study of the strain response and mechanical properties of rammed earth prisms has been made using Fiber Bragg Grating (FBG) sensors (optical) and a clip-on extensometer (electro-mechanical). The aim of this study is to address the merits and demerits of the traditional extensometer vis-à-vis the FBG sensor; a uni-axial compression test has been performed on a rammed earth prism to validate its structural properties from the stress-strain curves obtained by the two different methods of measurement. An array of FBG sensors on a single fiber with varying Bragg wavelengths (λ_B) has been used to spatially resolve the strains along the height of the specimen. It is interesting to note from the obtained stress-strain curves that the initial tangent modulus obtained using the FBG sensor is lower than that obtained using the clip-on extensometer. The results also indicate that the strains measured by the FBG sensor and the extensometer follow the same trend, and both sensors register the maximum strain value at the same time.
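The strains an FBG sensor reports come from the shift of each grating's Bragg wavelength; a common first-order relation (not stated in the abstract, and ignoring temperature effects) is ε = (Δλ_B/λ_B)/(1 − p_e), with p_e the effective photo-elastic coefficient, roughly 0.22 for silica fibre. The values below are illustrative, not from the study:

```python
# First-order conversion of a Bragg-wavelength shift to axial strain,
# eps = (dlambda / lambda_B) / (1 - p_e). Temperature compensation is
# ignored; p_e ~ 0.22 is a typical value for silica fibre, assumed here.

def fbg_strain(lambda_b_nm, shifted_nm, p_e=0.22):
    """Axial strain from the measured shift of one grating's Bragg wavelength."""
    return (shifted_nm - lambda_b_nm) / lambda_b_nm / (1.0 - p_e)
```

With several gratings of different λ_B multiplexed on one fiber, as in the array described above, each grating's shift resolves the strain at its own position along the specimen.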