92 results for Tension and compression
Abstract:
The surface tensions of binary mixtures of 1-alkanols (C1-Cd) with benzene, toluene, or xylene were measured. The results were correlated with activity coefficients calculated through a group-contribution method (UNIFAC), with the maximum deviation from the experimental results less than 5%. The coefficients of the correlation are in turn correlated with the alkanol chain length.
Abstract:
Large external memory bandwidth requirements lead to increased system power dissipation and cost in video coding applications. The majority of the external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing power and bandwidth requirements. The low-cost, transform-free compression technique uses a lossy reference for motion estimation to reduce memory traffic and a lossless reference for motion compensation (MC) to avoid drift; it is therefore compatible with all existing video standards. We calculate the quantization error bound and show that, by storing the quantization error separately, the bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to the video encode application. Reductions of 24-39% in peak bandwidth and 23-31% in total average power consumption are observed for IBBP sequences.
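The core idea above (a lossy reference for motion estimation plus a separately stored quantization error for drift-free motion compensation) can be sketched in a few lines. The bit-truncation quantizer below is our illustrative assumption; the paper's actual quantizer and error bound are not given in the abstract:

```python
import numpy as np

def split_reference(frame, drop_bits=2):
    """Split an 8-bit reference frame into a coarse lossy part (fetched
    for motion estimation) and its quantization error (stored separately)."""
    lossy = frame >> drop_bits                 # fewer bits to read per search
    error = frame & ((1 << drop_bits) - 1)     # dropped low-order bits
    return lossy, error

def reconstruct(lossy, error, drop_bits=2):
    """Lossless reconstruction for motion compensation, avoiding drift."""
    return (lossy << drop_bits) | error

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
lossy, error = split_reference(frame)
assert np.array_equal(reconstruct(lossy, error), frame)  # exact recovery
```

Motion estimation touches only `lossy`; motion compensation fetches `error` just for the final predictor block, which is why the MC bandwidth overhead stays small.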
Abstract:
The EEG time series has been subjected to various formalisms of analysis to extract meaningful information regarding the underlying neural events. In this paper the linear prediction (LP) method has been used for analysis and presentation of spectral array data, for better visualisation of background EEG activity. It has also been used for signal generation, efficient data storage, and transmission of EEG. The LP method is compared with the standard Fourier method of compressed spectral array (CSA) for multichannel EEG data. The autocorrelation method of autoregressive (AR) modelling is used to obtain the LP coefficients, with a model order of 15. While the Fourier method reduces the data only by half, the LP method requires storage of only the signal variance and the LP coefficients. The signal generated using white Gaussian noise as input to the LP filter has a high correlation coefficient of 0.97 with the original signal, making LP a useful tool for storage and transmission of EEG. The biological significance of the Fourier and LP methods with respect to the microstructure of neuronal events in the generation of EEG is discussed.
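A minimal sketch of the LP storage-and-regeneration idea described above, using the autocorrelation (Yule-Walker) method with model order 15 and white Gaussian noise driving the all-pole filter; the synthetic stand-in signal and the variance matching step are our assumptions:

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lp_coefficients(x, order=15):
    """LP coefficients via the autocorrelation (Yule-Walker) method."""
    x = x - x.mean()
    r = np.correlate(x, x, mode='full')[x.size - 1:] / x.size
    # Solve the symmetric Toeplitz system R a = r for predictor a_1..a_p
    return solve_toeplitz(r[:order], r[1:order + 1])

rng = np.random.default_rng(0)
fs = 256
t = np.arange(4 * fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # stand-in "EEG"

a = lp_coefficients(eeg, order=15)   # only these 15 values + the variance are stored
noise = rng.standard_normal(eeg.size)
synth = lfilter([1.0], np.concatenate(([1.0], -a)), noise)  # all-pole LP filter
synth *= eeg.std() / synth.std()     # rescale to the stored signal variance
```

The regenerated `synth` reproduces the spectral envelope of `eeg`, which is the sense in which LP-based regeneration preserves the signal for storage and transmission.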
Abstract:
Uniaxial compression tests were conducted on Ti-6Al-4V specimens in the strain-rate range of 0.001 to 1 s^-1 and the temperature range of 298 to 673 K. The stress-strain curves exhibited a peak flow stress followed by flow softening. Up to 523 K, the specimens cracked catastrophically after flow softening started. Adiabatic shear banding was observed in this regime. The fracture surface exhibited both mode I and mode II fracture features. The state of stress existing in a compression test specimen when bulging occurs is responsible for this fracture. The instabilities observed in the present tests are classified as "geometric" in nature and are state-of-stress dependent, unlike the "intrinsic" instabilities, which depend on the dynamic constitutive behavior of the material.
Abstract:
A novel approach for lossless as well as lossy compression of monochrome images using Boolean minimization is proposed. The image is split into bit planes. Each bit plane is divided into windows or blocks of variable size. Each block is transformed into a Boolean switching function in cubical form, treating the pixel values as the output of the function. Compression is performed by minimizing these switching functions using ESPRESSO, a cube-based two-level function minimizer. The minimized cubes are encoded using a code set that satisfies the prefix property. Our lossless compression technique involves linear prediction as a preprocessing step and has a compression ratio comparable to that of the JPEG lossless technique. Our lossy compression technique involves reducing the number of bit planes as a preprocessing step, which incurs minimal loss in image information. The bit planes that remain after preprocessing are compressed using our Boolean-minimization-based lossless technique. Qualitatively, the original and lossy images are visually indistinguishable, and the mean square error is kept low. For mean square error values close to those of the JPEG lossy technique, our method gives a better compression ratio. Compression is relatively slow, while decompression time is comparable to that of JPEG.
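The two preprocessing steps named above (bit-plane splitting, and dropping low-order planes for lossy mode) are easy to sketch; the block partitioning and ESPRESSO minimization themselves are outside the scope of this illustration, and the helper names are ours:

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit grayscale image into 8 binary planes (index 7 = MSB)."""
    return [(img >> b) & 1 for b in range(8)]

def drop_low_planes(img, keep=5):
    """Lossy preprocessing: zero the 8-keep least significant planes."""
    mask = 0xFF & ~((1 << (8 - keep)) - 1)
    return img & mask

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
planes = bit_planes(img)       # each plane's blocks become Boolean functions
lossy = drop_low_planes(img)   # remaining planes go through the lossless path
```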
Abstract:
Surface melting by a stationary, pulsed laser has been modelled by the finite element method. The role of surface-tension-driven convection is investigated in detail. Numerical results are presented for triangular laser pulses of 10, 50, and 200 ms duration. Although the velocity magnitudes due to surface tension forces are high, the present results indicate that a finite time is required for convection to affect the temperature distribution within the melt pool. The effect of convection is very significant for pulse durations longer than 10 ms.
Abstract:
A new postcracking formulation for concrete, along with both implicit and explicit layering procedures, is used in the analysis of reinforced-concrete (RC) flexural and torsional elements. The postcracking formulation accounts for tension stiffening in concrete along the rebar directions, compression softening in cracked concrete based on either stresses or strains, and aggregate interlock based on crack-confining normal stresses. Transverse shear stresses computed using the layering procedures are included in the material model, permitting the development of inclined cracks through the RC cross section. Examples of a beam analyzed by both layering techniques, a torsional element, and a column-slab connection region analyzed by the implicit layering procedure are presented. The study highlights the primary advantages and disadvantages of each layering approach, identifying the class of problems for which each procedure is better suited.
Abstract:
Two methods based on wavelet/wavelet packet expansion for denoising and compressing optical tomography data containing scattered noise are presented. In the first, the wavelet expansion coefficients of the noisy data are shrunk using a soft threshold. In the second, the data are expanded into a wavelet packet tree on which a best-basis search is performed, and the resulting coefficients are truncated on the basis of energy content. The first method efficiently denoises experimental data for scattering particle densities in the medium surrounding the object of up to 12.0 x 10^6 per cm^3, and achieves a compression ratio of approximately 8:1. The wavelet-packet-based method yields compression of up to 11:1 and also exhibits reasonable noise reduction capability. Tomographic reconstructions obtained from the denoised data are presented.
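A minimal sketch of the first method (soft-threshold shrinkage of wavelet coefficients), using PyWavelets; the wavelet, decomposition level, and the universal-threshold noise estimate are our assumptions, since the abstract does not specify them:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet='db4', level=4):
    """Soft-threshold wavelet shrinkage of a 1-D signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale estimate
    thr = sigma * np.sqrt(2 * np.log(signal.size))       # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 1, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
denoised = wavelet_denoise(noisy)
```

Compression then follows from storing only the coefficients that survive thresholding; the second method replaces the fixed wavelet basis with a best-basis wavelet packet search and truncates coefficients by energy.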
Abstract:
Commercial-purity (99.8%) magnesium single crystals were subjected to plane strain compression (PSC) along the c-axis at 200 and 370 degrees C and a constant strain rate of 10^-3 s^-1. Extension was confined to the <1 1 -2 0> direction and the specimens were strained up to a logarithmic true strain of -1. The initial rapid increase in flow stress was followed by significant work softening at different stresses and comparable strains of about -0.05, related to macroscopic twinning events. The microstructure after PSC at 200 degrees C was characterized by a high density of {1 0 -1 1} and {1 0 -1 3} compression twins, some of which were recrystallized. After PSC at 370 degrees C, completely recrystallized twin bands were the major feature of the observed microstructure. All new grains in these bands retained the c-axis orientation of their compression-twin hosts. The basal plane in these grains was randomly rotated around the c-axis, forming a fiber texture component. The results are discussed with respect to the mechanism of recrystallization, the specific character of the boundaries between the new grains and the initial matrix, and the importance of the dynamically recrystallized bands for strain accommodation in these deformed magnesium single crystals.
Abstract:
The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information that the N sources need to transmit while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each source to compress all the information it possesses and transmit it to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is interested only in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then uses linearity to directly recover the needed linear combination. The article is part review and presents, in part, new results: the portion dealing with finite fields is previously known material, while that dealing with rings is mostly new.

Attempting to find the best linear map that enables function computation forces us to consider the linear compression of a source. While in the finite field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
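The common-linear-map idea admits a small finite-field toy example (in the spirit of the Körner-Marton scheme). Everything below is our illustrative construction, not the paper's: two length-7 binary sources that differ in at most one position each transmit only their 3-bit syndrome under the same Hamming parity-check matrix, and the destination recovers the mod-2 sum from the sum of the syndromes:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# expansion of j (LSB in row 0), so a weight-1 error decodes by position.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def compress(x):
    """Each source transmits only the 3-bit syndrome H x (mod 2)."""
    return (H @ x) % 2

def decode_sum(s):
    """Recover z = x1 XOR x2, assuming the sources differ in <= 1 position."""
    z = np.zeros(7, dtype=np.uint8)
    pos = int(s[0]) + 2 * int(s[1]) + 4 * int(s[2])  # syndrome = error position
    if pos:
        z[pos - 1] = 1
    return z

rng = np.random.default_rng(2)
x1 = rng.integers(0, 2, 7).astype(np.uint8)
x2 = x1.copy()
x2[3] ^= 1                                  # correlated: one position differs
s = (compress(x1) + compress(x2)) % 2       # linearity: H x1 + H x2 = H(x1 + x2)
assert np.array_equal(decode_sum(s), x1 ^ x2)
```

Each source sends 3 bits instead of 7, yet the destination never reconstructs x1 or x2 individually, only the linear combination it wanted.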
Abstract:
This article demonstrates how the stiffness, hardness, and cellular response of bioinert high-density polyethylene (HDPE) can be significantly improved by the combined addition of bioinert and bioactive ceramic fillers. For this purpose, different amounts of hydroxyapatite and alumina, limited to a total of 40 wt %, were incorporated into the HDPE matrix. An important step in composite fabrication was to select an appropriate solvent and an optimal addition of coupling agent (CA). In the case of the chemically coupled composites, 2% Titanium IV, 2-propanolato, tris iso-octadecanoato-O was used as the CA. All the hybrid composites, except monolithic HDPE, were fabricated under optimized compression molding conditions (140 degrees C, 0.75 h, 10 MPa pressure). The compression-molded composites were characterized using X-ray diffraction, Fourier transform infrared spectroscopy, and scanning electron microscopy. Importantly, in vitro cell culture and cell viability (MTT) studies using L929 fibroblast and SaOS2 osteoblast-like cells confirmed the good cytocompatibility of the developed hybrid composites.
Abstract:
A scheme for code compression with a fast decompression algorithm that can be implemented using simple hardware is proposed. The effectiveness of the scheme is evaluated on the TMS320C62x architecture, including the overhead of a line address table (LAT), and compression rates ranging from 70% to 80% are obtained. Two schemes for decompression are proposed. The basic idea underlying the scheme is a simple clustering algorithm that partially maps a block of instructions into a set of clusters. The clustering algorithm is a greedy algorithm based on the frequency of occurrence of the various instructions.
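The frequency-driven greedy idea can be caricatured as a dictionary scheme in which frequent instruction words get short indices and the rest are escaped. This is our simplification for illustration; the paper's algorithm additionally maps blocks of instructions into clusters, which the abstract does not detail:

```python
from collections import Counter

def build_dictionary(words, dict_size=256):
    """Greedily keep the most frequent instruction words."""
    return [w for w, _ in Counter(words).most_common(dict_size)]

def compress(words, dictionary):
    """Replace dictionary hits by short indices; escape the rest verbatim."""
    index = {w: i for i, w in enumerate(dictionary)}
    return [('DICT', index[w]) if w in index else ('RAW', w) for w in words]

code = ['ADD', 'MOV', 'ADD', 'NOP', 'ADD', 'MOV', 'SUB']
d = build_dictionary(code, dict_size=2)     # ['ADD', 'MOV']
print(compress(code, d))
```

A line address table (as in the abstract) would then map original block addresses to compressed offsets so that branch targets can still be located after compression.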
Abstract:
This article presents the deformation behavior of a high-strength pearlitic steel deformed by triaxial compression to achieve an ultra-fine ferrite grain size with fragmented cementite. The consequent evolution of microstructure and texture has been studied using scanning electron microscopy, electron backscatter diffraction, and X-ray diffraction. The synergistic effect of diffusion and deformation leads to uniform dissolution of cementite at the higher temperature. At the lower temperature, significant grain refinement of the ferrite phase occurs by deformation and exhibits a characteristic deformation texture. In contrast, the sample deformed at high temperature shows a weaker texture with a cube component for the ferrite phase, indicating the occurrence of recrystallization. The different mechanisms responsible for the refinement of ferrite and the fragmentation of cementite, and their interaction with each other, have been analyzed. Viscoplastic self-consistent simulation was employed to understand the deformation texture of the ferrite phase during triaxial compression.