976 results for Compression Analysis
Abstract:
First, the data to be compressed are regarded as character strings produced by a virtual information source through a mapping M. The model of the virtual information source M is then established using a neural network and an SVM. Finally, a lossless data compression (coding) scheme is constructed from this model, an integer function, and an SVM discriminant. The scheme differs fundamentally from classical entropy coding, and it can further compress some data that classical entropy coders have already compressed.
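The abstract gives few implementation details, so the following is only a minimal sketch of the general idea, modelling the source with an adaptive predictor and then coding its output, using a simple context-frequency model as a stand-in for the paper's neural-network/SVM source model; the rank transform and the demonstration text are illustrative assumptions.

```python
import zlib
from collections import defaultdict

def rank_encode(data: bytes, order: int = 2) -> bytes:
    """Replace each byte by its rank under an adaptive context model.

    When the model predicts well, most ranks are small, so the rank
    stream entropy-codes better than the raw bytes. (The context model
    stands in for the paper's neural-network/SVM source model.)
    """
    counts = defaultdict(lambda: defaultdict(int))  # context -> symbol counts
    out = bytearray()
    for i, b in enumerate(data):
        ctx = data[max(0, i - order):i]
        # Symbols seen in this context, most frequent first, then the rest.
        seen = sorted(counts[ctx], key=lambda s: (-counts[ctx][s], s))
        ranking = seen + [s for s in range(256) if s not in counts[ctx]]
        out.append(ranking.index(b))  # invertible: ranking is a permutation
        counts[ctx][b] += 1
    return bytes(out)

text = b"abracadabra mysterious abracadabra " * 50
print("raw bytes compressed:   ", len(zlib.compress(text, 9)))
print("ranked bytes compressed:", len(zlib.compress(rank_encode(text), 9)))
```

The ranked stream is dominated by small values, which is why a second-stage entropy coder can typically squeeze it further than the raw input.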
Abstract:
A direct numerical simulation of the shock/turbulent boundary layer interaction in a supersonic 24-degree compression ramp is conducted at a free-stream Mach number of 2.9. A blowing-and-suction disturbance on the upstream wall is used to trigger the transition. Both the mean wall pressure and the velocity profiles agree with the experimental data, which validates the simulation. The turbulent kinetic energy budget in the separation region is analyzed. Results show that the turbulent production term increases rapidly in the separation region, while the turbulent dissipation term reaches its peak in the near-wall region; the turbulent transport term contributes to the balance between turbulent convection and dissipation. Based on analysis of the instantaneous pressure downstream of the mean shock and within the separation bubble, the authors suggest that the low-frequency oscillation of the shock is caused not by the upstream turbulent disturbance but by the instability of the separation bubble.
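For orientation, the terms discussed are those of the turbulent-kinetic-energy budget, written here schematically in incompressible notation with k = (1/2) u_i'u_i' (the compressible budget analyzed in the paper carries additional density-weighted and pressure-dilatation terms):

```latex
\underbrace{\bar{u}_j \frac{\partial k}{\partial x_j}}_{\text{convection}}
= \underbrace{-\,\overline{u_i' u_j'}\,\frac{\partial \bar{u}_i}{\partial x_j}}_{\text{production}}
\;-\; \underbrace{\varepsilon}_{\text{dissipation}}
\;+\; \underbrace{\mathcal{T}}_{\text{transport}}
```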
Abstract:
This paper presents an introduction to the wavelet transform and multi-resolution analysis. Three wavelet-transform-based data compression methods for spectral information are described. Using multi-resolution analysis, we compressed spectral data with Daubechies' compactly supported orthogonal wavelets and with orthogonal cubic B-spline wavelets. With the orthogonal cubic B-spline wavelet, the coefficients of the sharpening (detail) signal are set to zero, so only very few large coefficients are stored and favourable data compression is achieved.
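As a rough illustration of the thresholding idea (a sketch, not the paper's exact procedure), the snippet below decomposes a synthetic spectrum with a Daubechies wavelet using the PyWavelets package, zeroes all but the largest coefficients, and reconstructs; the wavelet, decomposition level, and 95% cut are assumptions chosen for the demo:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 1024)
# Synthetic "spectrum": two narrow peaks on a noisy baseline.
spectrum = (np.exp(-(x - 3.0) ** 2 / 0.05)
            + 0.5 * np.exp(-(x - 7.0) ** 2 / 0.10)
            + 0.01 * rng.standard_normal(x.size))

# Multi-level DWT with a compactly supported Daubechies wavelet.
coeffs = pywt.wavedec(spectrum, "db4", level=5)

# Keep only the largest coefficients; zero the rest.
flat = np.concatenate(coeffs)
cut = np.quantile(np.abs(flat), 0.95)          # retain roughly 5%
thresholded = [np.where(np.abs(c) >= cut, c, 0.0) for c in coeffs]

reconstructed = pywt.waverec(thresholded, "db4")[:spectrum.size]
rmse = np.sqrt(np.mean((spectrum - reconstructed) ** 2))
print(f"kept {np.mean(np.abs(flat) >= cut):.1%} of coefficients, RMSE = {rmse:.4f}")
```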
Abstract:
The need to cluster unknown data in order to better understand its relationship to known data is prevalent throughout science. Beyond improving understanding of the data itself or characterizing a new unknown object, cluster analysis can help with data processing, data standardization, and outlier detection. Most clustering algorithms are based on known features or expectations, such as the popular partition-based, hierarchical, density-based, grid-based, and model-based algorithms. The choice of algorithm depends on many factors, including the type of data and the purpose of clustering; nearly all rely on some known properties of the data being analyzed. Recently, Li et al. proposed a new universal similarity metric that requires no prior knowledge about the objects. Their similarity metric is based on the Kolmogorov complexity of objects, i.e., an object's minimal description. While the Kolmogorov complexity of an object is not computable, in "Clustering by Compression" Cilibrasi and Vitanyi use common compression algorithms to approximate the universal similarity metric and cluster objects with high success. Unfortunately, clustering using compression does not trivially extend to higher dimensions. Here we outline a method to adapt their procedure to images, and we test these techniques on images of letters of the alphabet.
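The approximation at work is the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the length of the compressed input. Below is a minimal sketch with zlib standing in for the compressor; the sample strings are illustrative:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size, a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance of Cilibrasi and Vitanyi."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar strings compress well together, so their NCD is small.
a = b"the quick brown fox jumps over the lazy dog " * 20
b_ = b"the quick brown fox leaps over the lazy cat " * 20
z = bytes(range(256)) * 4
print(ncd(a, b_))  # near 0: much shared structure
print(ncd(a, z))   # near 1: little shared structure
```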
Abstract:
Aircraft fuselages are complex assemblies of thousands of components, and as a result simulation models are highly idealised. In the typical design process, a coarse FE model is used to determine loads within the structure. The size of the model and the number of load cases necessitate that only linear static behaviour is considered. This paper reports on the development of a modelling approach to increase the accuracy of the global model, accounting for variations in stiffness due to non-linear structural behaviour. The strategy is based on representing a fuselage sub-section with a single non-linear element. Large portions of fuselage structure are represented by connecting these non-linear elements together to form a framework. The non-linear models are very efficient, reducing computational time significantly.
Abstract:
In this paper, the compression of multispectral images is addressed. Such 3-D data are characterized by a high correlation across the spectral components. The efficiency of the state-of-the-art wavelet-based coder 3-D SPIHT is considered. Although the 3-D SPIHT algorithm provides the obvious way to process a multispectral image as a volumetric block and, consequently, maintain the attractive properties exhibited in 2-D (excellent performance, low complexity, and embeddedness of the bit-stream), its 3-D tree structure is shown to be poorly suited to 3-D wavelet-transformed (DWT) multispectral images. The fact that each parent has eight children in the 3-D structure considerably increases the list of insignificant sets (LIS) and the list of insignificant pixels (LIP), since the partitioning of any set produces eight subsets which are all processed in the same manner during the sorting pass. Thus, a significant portion of the overall bit budget is wasted sorting insignificant information. Through an analysis of the results, we demonstrate that a straightforward 2-D SPIHT technique, suitably adjusted to maintain rate scalability and carried out in the 3-D DWT domain, overcomes this weakness. In addition, a new SPIHT-based scalable multispectral image compression algorithm exploits, in its initial iterations, the redundancies within each group of two consecutive spectral bands. Numerical experiments on a number of multispectral images have shown that the proposed scheme provides significant improvements over related works.
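The combinatorial point behind the inflated lists can be stated in one line: in a d-dimensional zerotree each parent has 2^d children, so every set partition in the sorting pass emits 2^d subsets.

```latex
n_{\text{children}} = 2^{d}
\quad\Longrightarrow\quad
\text{4 subsets per split in 2-D}, \qquad
\text{8 subsets per split in 3-D}
```

Hence the 3-D structure pushes twice as many entries onto the LIS/LIP per split as the 2-D one, which is exactly the overhead the adjusted 2-D SPIHT avoids.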
Abstract:
In this paper we study the classification of spatiotemporal patterns of one-dimensional cellular automata (CA), where the classification comprises CA rules together with their initial conditions. We propose an exploratory analysis method based on the normalized compression distance (NCD) of spatiotemporal patterns, which is used as the dissimilarity measure for hierarchical clustering. Our approach differs in the following respects. First, the classification of spatiotemporal patterns is comparative, because the NCD explicitly evaluates the difference in compressibility between two objects, e.g., strings corresponding to spatiotemporal patterns; this contrasts with all other measures applied so far in a similar context, which are essentially univariate. Second, Kolmogorov complexity, which underlies the NCD, is thereby applied to the classification of CA with respect to their spatiotemporal patterns. Third, our method is semiautomatic, allowing us to investigate hundreds or thousands of CA rules or initial conditions simultaneously to gain insight into their organizational structure. Our numerical results are not only plausible, confirming previous classification attempts, but also shed light on the intricate influence of random initial conditions on the classification results.
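A small sketch of the pipeline under stated assumptions: elementary CA rules, zlib standing in for the compressor, and SciPy average-linkage clustering; the rule selection, pattern size, and cluster count are illustrative choices, not the paper's:

```python
import zlib
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def run_ca(rule: int, width: int = 256, steps: int = 256, seed: int = 0) -> bytes:
    """Spatiotemporal pattern of an elementary CA, packed into bytes."""
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    row = np.random.default_rng(seed).integers(0, 2, width, dtype=np.uint8)
    rows = [row]
    for _ in range(steps - 1):
        # Neighbourhood code 0..7 with periodic boundaries, then rule lookup.
        row = table[4 * np.roll(row, 1) + 2 * row + np.roll(row, -1)]
        rows.append(row)
    return np.packbits(np.array(rows)).tobytes()

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = (len(zlib.compress(s, 9)) for s in (x, y, x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

rules = [0, 32, 108, 90, 30, 110]                # a mix of Wolfram classes
pats = [run_ca(r) for r in rules]
d = np.array([[ncd(a, b) for b in pats] for a in pats])
cond = squareform((d + d.T) / 2, checks=False)   # symmetrized, condensed form
labels = fcluster(linkage(cond, "average"), t=3, criterion="maxclust")
print(dict(zip(rules, labels)))                  # rule -> cluster id
```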
Abstract:
This study reports the use of texture profile analysis (TPA) to mechanically characterize polymeric pharmaceutical semisolids containing at least one bioadhesive polymer and to determine interactions between formulation components. The hardness, adhesiveness, force per unit time required for compression (compressibility), and elasticity of polymeric pharmaceutical semisolids containing polycarbophil (1 or 5% w/w), polyvinylpyrrolidone (3 or 5% w/w), and hydroxyethylcellulose (3, 5, or 10% w/w) in phosphate buffer (pH 6.8) were determined using a texture analyzer in TPA mode (compression depth 15 mm, compression rate 8 mm s⁻¹, 15 s delay period). Increasing concentrations of polycarbophil, polyvinylpyrrolidone, and hydroxyethylcellulose significantly increased product hardness, adhesiveness, and compressibility but decreased product elasticity. Statistically significant interactions between polymeric formulation components were observed within the experimental design and were probably due to relative differences in the physical states of polyvinylpyrrolidone and polycarbophil in the formulations, i.e., dispersed/dissolved and unswollen/swollen, respectively. Increased product hardness and compressibility were possibly due to the effects of hydroxyethylcellulose, polyvinylpyrrolidone, and polycarbophil on the viscosity of the formulations. Increased adhesiveness was related to the concentration and, more importantly, to the physical state of polycarbophil. Decreased product elasticity was due to the increased semisolid nature of the product. TPA is a rapid, straightforward analytical technique that may be applied to the mechanical characterization of polymeric pharmaceutical semisolids, and it provides a convenient means to rapidly identify physicochemical interactions between formulation components.
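As an illustration of how such TPA quantities are commonly derived from a single compression/withdrawal force-time trace (conventions vary between instruments and authors; the definitions and the toy trace below are assumptions, not this study's calibration):

```python
import numpy as np
from scipy.integrate import trapezoid

def tpa_metrics(t: np.ndarray, f: np.ndarray) -> dict:
    """Hardness, compressibility, and adhesiveness from one force-time trace.

    Conventions assumed here: hardness = peak compressive force;
    compressibility = area under the positive (compression) portion;
    adhesiveness = magnitude of the area of the negative (withdrawal) portion.
    """
    return {
        "hardness": float(f.max()),
        "compressibility": float(trapezoid(np.clip(f, 0, None), t)),
        "adhesiveness": float(-trapezoid(np.clip(f, None, 0), t)),
    }

# Toy trace: compression peak followed by an adhesive (negative) withdrawal tail.
t = np.linspace(0.0, 4.0, 400)
f = np.where(t < 2.0,
             1.8 * np.sin(np.pi * t / 2.0),
             -0.4 * np.sin(np.pi * (t - 2.0) / 2.0))
print(tpa_metrics(t, f))
```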
Abstract:
Damage-tolerant hat-stiffened thin-skinned composite panels, with and without a centrally located circular cutout, were investigated experimentally and analytically under uniaxial compression loading. These panels incorporated a highly postbuckling design characterised by two integral stiffeners separated by a large skin bay with a high width-to-skin-thickness ratio. In both configurations, the skin initially buckled into three half-wavelengths and underwent two mode-shape changes: first, a gradual mode change characterised by a central deformation with double curvature, and second, a dynamic snap to five half-wavelengths. Standard path-following non-linear finite element analysis did not consistently capture the dynamic mode change, and an approximate solution for the prediction of mode changes using a Marguerre-type Rayleigh-Ritz energy method is presented. Shortcomings of both methods of analysis are discussed and improvements suggested. The panels failed catastrophically, and their strength was limited by the local buckling strength of the hat stiffeners.
Abstract:
Recent efforts towards the development of the next generation of large civil and military transport aircraft within the European community have provided new impetus for investigating the potential use of composite material in the primary structure. One concern in this development is the vulnerability of co-cured stiffened structures to through-thickness stresses at the skin-stiffener interfaces, particularly in stiffener runout regions. These regions are an inevitable consequence of the requirement to terminate stiffeners at cutouts, rib intersections, or other structural features which interrupt the stiffener load path. In this respect, thicker-skinned components are more vulnerable than thin-skinned ones. This work presents an experimental and numerical study of the failure of thick-sectioned stiffener runout specimens loaded in uniaxial compression. The experiments revealed that failure initiated at the edge of the runout and propagated across the skin-stiffener interface. High frictional forces at the edge of the runout were also deduced from a fractographic analysis, and it is postulated that these forces may enhance the fracture toughness of the specimens. Finite element analysis using an efficient thick-shell element and the Virtual Crack Closure Technique was able to qualitatively predict the crack growth characteristics of each specimen.
Abstract:
The postbuckling behaviour of a panel with blade-stiffeners incorporating tapered flanges was experimentally investigated. A new failure mechanism was identified for this particular type of stiffener. Failure was initiated by mid-plane delamination at the free edge of the postbuckled stiffener web at a node-line. This was consistent with an interlaminar shear stress failure and was calculated from strain gauge measurements using an approximate analysis based on lamination theory and incorporating edge effects. The critical shear stress was found to agree well with the shear strength obtained from a three-point bending test of the web laminate.
Abstract:
A combined experimental and analytical study of a hat-stiffened carbon-fibre composite panel loaded in uniaxial compression was conducted. A buckling mode transition was observed in the panel's skin bay which was not captured by non-linear finite-element analysis. Good correlation between experimental and numerical strain and displacement results was achieved in the prebuckling and initial postbuckling regions of the loading history. A Marguerre-type Rayleigh-Ritz energy method was applied to the skin bay using representative displacement functions of permissible mode shapes to explain the mode-transition phenomenon. The central criterion of this method was the assumption that a change in mode shape occurs such that the total potential energy of the structure is maintained at a minimum. The ultimate strength of the panel was limited by the column buckling strength of the hat stiffeners.
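The criterion can be written compactly. Denoting by Π the total potential energy (strain energy U minus the work W of the applied load P) and by m the number of half-wavelengths, a schematic statement of the mode-transition condition is the following (a sketch of the criterion only, not the paper's full Marguerre-type functional):

```latex
\Pi_m(P) = U_m(P) - W_m(P),
\qquad
m \to m' \ \text{at the load } P^{\ast} \ \text{where } \Pi_{m'}(P^{\ast}) < \Pi_{m}(P^{\ast})
```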
Abstract:
In many applications in applied statistics, researchers reduce the complexity of a data set by combining a group of variables into a single measure using factor analysis or an index number. We argue that such compression loses information if the data actually have high dimensionality. We advocate the use of a non-parametric estimator commonly used in physics (the Takens estimator) to estimate the correlation dimension of the data prior to compression. The advantage of this approach over traditional linear data compression approaches is that the data do not have to be linearized. Applying our ideas to the United Nations Human Development Index, we find that the four variables used in its construction have dimension three and the index loses information.
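A minimal sketch of the Takens estimator in its usual maximum-likelihood form, D ≈ -1 / mean(ln(r / r0)) over pairwise distances r below a cutoff r0; the synthetic data set and the cutoff are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.distance import pdist

def takens_dimension(data: np.ndarray, r0: float) -> float:
    """Takens' maximum-likelihood estimate of the correlation dimension.

    Uses all pairwise distances r with 0 < r < r0:
        D_hat = -1 / mean(log(r / r0))
    """
    r = pdist(data)
    r = r[(r > 0) & (r < r0)]
    return -1.0 / np.mean(np.log(r / r0))

rng = np.random.default_rng(0)
# Four observed variables that carry only three dimensions of information:
# points on a 3-D patch linearly embedded in 4-D space.
u = rng.uniform(size=(2000, 3))
data = np.column_stack([u, u.sum(axis=1)])
print(takens_dimension(data, r0=0.3))  # approximately 3, not 4
```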
Abstract:
Recently, two fast selective encryption methods for context-adaptive variable-length coding and context-adaptive binary arithmetic coding in H.264/AVC were proposed by Shahid et al. In this paper, it is demonstrated that these two methods are not as efficient as encrypting only the sign bits of nonzero coefficients. Experimental results show that without encrypting the sign bits of nonzero coefficients, the two methods cannot provide a perceptual scrambling effect. If a much stronger scrambling effect is required, intra-prediction modes and the sign bits of motion vectors can be encrypted together with the sign bits of nonzero coefficients. For practical applications, the encryption scheme should be customized according to the user's requirements on perceptual scrambling effect and computational cost. Thus, a tunable encryption scheme combining these three methods is proposed for H.264/AVC. To simplify its implementation and reduce the computational cost, a simple control mechanism is proposed to adjust the control factors. Experimental results show that this scheme can provide different scrambling levels by adjusting three control factors, with no or very little impact on compression performance. The proposed scheme can run in real time and its computational cost is minimal. The security of the proposed scheme is also discussed; it is secure against the replacement attack when all three control factors are set to one.
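To make the core operation concrete, here is a toy sketch of keyed sign-bit scrambling on a list of transform coefficients; it is not the paper's bitstream-level implementation, and the hash-derived keystream merely stands in for a proper stream cipher:

```python
import hashlib

def keystream(key: bytes, n: int) -> list[int]:
    """Pseudo-random bit stream derived from a key (illustrative only;
    a real system would use a standard stream cipher such as AES-CTR)."""
    bits: list[int] = []
    counter = 0
    while len(bits) < n:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        bits.extend((byte >> i) & 1 for byte in block for i in range(8))
        counter += 1
    return bits[:n]

def scramble_signs(coeffs: list[int], key: bytes, enabled: bool = True) -> list[int]:
    """Flip the sign of each nonzero coefficient where the keystream bit is 1.

    Sign flips leave magnitudes (and hence code lengths) unchanged, which is
    why this style of selective encryption barely affects compression. The
    'enabled' flag plays the role of one control factor in a tunable scheme.
    """
    if not enabled:
        return list(coeffs)
    ks = keystream(key, len(coeffs))
    return [-c if (c != 0 and k) else c for c, k in zip(coeffs, ks)]

coeffs = [12, -3, 0, 0, 5, -1, 0, 2]
enc = scramble_signs(coeffs, b"secret")
dec = scramble_signs(enc, b"secret")  # the operation is an involution
print(enc, dec == coeffs)
```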