899 results for compression bandages
Abstract:
The paper discusses the application of a similarity metric based on compression to the measurement of the distance among Bulgarian dialects. The similarity metric is defined on the basis of the notion of Kolmogorov complexity of a file (or binary string). The application of Kolmogorov complexity in practice is not possible because its calculation over a file is an undecidable problem. Thus, the actual similarity metric is based on a real-life compressor which only approximates the Kolmogorov complexity. To use the metric for distance measurement of Bulgarian dialects, we first represent the dialectological data in such a way that the metric is applicable. We propose two such representations, which are compared to a baseline distance between dialects. Then we conclude the paper with an outline of our future work.
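As a concrete illustration of how a real-life compressor stands in for Kolmogorov complexity, the following Python sketch computes the standard normalized compression distance (NCD). The choice of bzip2 and this particular formula are assumptions for the example; the abstract does not specify the compressor or the exact metric used in the paper.

import bz2

def c(data: bytes) -> int:
    """Compressed size of `data` in bytes (bz2 used as an illustrative compressor)."""
    return len(bz2.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Dialect records, serialized as byte strings, would be compared pairwise:
print(ncd(b"selo selo selo", b"selo selu selo"))   # smaller value: more similar
print(ncd(b"selo selo selo", b"grad grad grad"))   # larger value: less similar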
Abstract:
This report examines important issues pertaining to the different ways in which compression methods affect the information security of file objects under information attacks. Accordingly, the report analyzes the three-way relationships which may exist among a selected set of attacks, methods, and objects. On this basis, a methodology is proposed for the evaluation of information security, and a coefficient of information security is defined. With respect to this coefficient, using different criteria and methods for the evaluation and selection of alternatives, the lowest-risk methods of compression are selected.
Abstract:
This paper introduces an encoding of knowledge representation statements as regular languages and proposes a two-phase approach to the processing of explicitly declared conceptual information. The idea is presented for simple conceptual graphs, where conceptual pattern search is implemented by the so-called projection operation. Projection calculations are organised into off-line preprocessing and run-time computations. This enables fast run-time treatment of NP-complete problems, given that the intermediate results of the off-line phase are kept in suitable data structures. Experiments with randomly generated, mid-sized knowledge bases support the claim that the suggested approach radically improves run-time conceptual pattern search.
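Purely as an illustration of the two-phase idea, and not of the paper's actual regular-language encoding of projection, the following hypothetical Python sketch precomputes pattern matches off-line and reduces run-time pattern search to a lookup in a prebuilt data structure.

from collections import defaultdict

def offline_index(graph_nodes, patterns, matches):
    """Off-line phase: precompute, for every declared pattern, the nodes it maps onto.
    `matches(pattern, node)` stands in for the (potentially expensive) projection test."""
    index = defaultdict(set)
    for p in patterns:
        for n in graph_nodes:
            if matches(p, n):
                index[p].add(n)
    return index

def runtime_search(index, pattern):
    """Run-time phase: a simple lookup instead of a full projection computation."""
    return index.get(pattern, set())

# Toy usage with a label-prefix test standing in for projection:
nodes = ["Cat:Tom", "Dog:Rex", "Cat:Felix"]
patterns = ["Cat", "Dog"]
idx = offline_index(nodes, patterns, lambda p, n: n.startswith(p))
print(runtime_search(idx, "Cat"))   # {'Cat:Tom', 'Cat:Felix'}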
Abstract:
We demonstrate light pulse combining and pulse compression using a continuous-discrete nonlinear system implemented in a multi-core fiber (MCF). It is shown that pulses initially injected into all of the cores of a ring MCF are combined by nonlinearity into a small number of cores with simultaneous pulse compression. We demonstrate the combining of 77% of the energy into one core with more than 14× pulse compression in a 20-core MCF. We also demonstrate that the suggested scheme is insensitive to phase perturbations. Nonlinear spatio-temporal pulse manipulation in multi-core fibers can be exploited for various applications, including pulse compression, switching, and combining.
Abstract:
A recently introduced inference method based on system replication and an online message passing algorithm is employed to complete a previously suggested compression scheme based on a nonlinear perceptron. The algorithm is shown to approach the information-theoretical bounds for compression as the number of replicated systems increases, offering superior performance compared to basic message passing algorithms. In addition, the suggested method does not require fine-tuning of parameters or other complementary heuristic techniques, such as the introduction of inertia terms, to improve convergence rates to nontrivial results. © 2014 American Physical Society.
Abstract:
A classification of the types of information redundancy in symbolic and graphical representations of information is given. A general classification of compression technologies for graphical information is presented as well. The design principles, tasks, and implementation variants of a semantic compression technology for graphical information are suggested.
Abstract:
This paper presents a novel error-free (infinite-precision) architecture for the fast implementation of the 8x8 2-D Discrete Cosine Transform. The architecture uses a new algebraic integer encoding of a 1-D radix-8 DCT that allows the separable computation of a 2-D 8x8 DCT without any intermediate number representation conversions. This is a considerable improvement on previously introduced algebraic integer encoding techniques for computing both the DCT and the IDCT: it eliminates the need to approximate the transformation matrix elements, since their exact representations are obtained, and hence maps the transcendental functions without any errors. Apart from being multiplication-free, the new mapping scheme fits this algorithm well, eliminating any computational or quantization errors and resulting in a short-word-length, high-speed design.
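For orientation, the following Python sketch is a plain floating-point model of the separable row-column 8x8 2-D DCT that such an architecture computes; the algebraic integer encoding itself, which is what makes the hardware error-free, is deliberately not reproduced here.

import math

def dct_1d_8(x):
    """Naive orthonormal 8-point DCT-II."""
    N = 8
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def dct_2d_8x8(block):
    """Separable 2-D DCT: 1-D transform of the rows, then of the columns."""
    rows = [dct_1d_8(r) for r in block]
    cols = [dct_1d_8([rows[i][j] for i in range(8)]) for j in range(8)]
    return [[cols[j][i] for j in range(8)] for i in range(8)]

block = [[(i + j) % 8 for j in range(8)] for i in range(8)]
coeffs = dct_2d_8x8(block)
print(round(coeffs[0][0], 3))   # DC coefficient of the transformed block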
Abstract:
A multicore fibre (MCF) sensor to measure the radial deformation of a compliant cylinder under compression is presented. The sensor is connectorised and need not be permanently bonded to the test object. A differential measurement technique using FBGs written into the MCF makes the sensor temperature insensitive. FBG measurement of axial strain of a cylinder under compression is also reported.
Anisotropic characterization of crack growth in the tertiary flow of asphalt mixtures in compression
Abstract:
Asphalt mixtures exhibit primary, secondary, and tertiary stages in sequence during rutting deterioration. Many field asphalt pavements are still in service even when the asphalt layer is in the tertiary stage, and rehabilitation is not performed until a significant amount of rutting accompanied by numerous macrocracks is observed. The objective of this study was to provide a mechanistic method to model the anisotropic cracking of asphalt mixtures in compression during the tertiary stage of rutting. Laboratory tests, including nondestructive and destructive tests, were performed to obtain the viscoelastic and viscofracture properties of the asphalt mixtures. Each of the measured axial and radial total strains in the destructive tests was decomposed into elastic, plastic, viscoelastic, viscoplastic, and viscofracture strains using the pseudostrain method in an extended elastic-viscoelastic correspondence principle. The viscofracture strains are caused by crack growth, which is primarily signaled by the increase of the phase angle in the tertiary flow. The viscofracture properties are characterized using anisotropic damage densities (i.e., the ratio of the area lost because of cracks to the original total area in orthogonal directions). Using the decomposed axial and radial viscofracture strains, the axial and radial damage densities were determined by a dissipated pseudostrain energy balance principle and a geometric analysis of the cracks, respectively. Anisotropic pseudo J-integral Paris' laws in terms of damage densities were used to characterize the evolution of the cracks in compression. The material constants in Paris' law are determined and found to be highly correlated. These tests, analyses, and modeling were performed on different asphalt mixtures with two binders, two air void contents, and three aging periods. Consistent results were obtained; for instance, a stiffer asphalt mixture is demonstrated to have a higher modulus, a lower phase angle, a greater flow number, and a larger n1 value (the exponent of Paris' law). The calculation of the orientation of cracks demonstrates that the asphalt mixture with 4% air voids has a brittle fracture and a splitting crack mode, whereas the asphalt mixture with 7% air voids tends to have a ductile fracture and a diagonal sliding crack mode. Cracks of the asphalt mixtures in compression are inclined to propagate along the direction of the external compressive load. © 2014 American Society of Civil Engineers.
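For reference, the quantities named above can be sketched in their standard forms (generic symbols; the exact expressions and coefficients used in the paper are not reproduced here):

% Decomposition of a measured total strain (axial or radial):
\varepsilon^{T} = \varepsilon^{e} + \varepsilon^{p} + \varepsilon^{ve} + \varepsilon^{vp} + \varepsilon^{vf}

% Damage density: area lost to cracks over the original total area in a given direction:
\xi = \frac{A_{\mathrm{lost}}}{A_{\mathrm{total}}}

% Pseudo J-integral Paris' law for damage-density growth, with exponent n_1 as cited above:
\frac{\mathrm{d}\xi}{\mathrm{d}N} = A_{1}\,(\Delta J_{R})^{n_{1}}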
Abstract:
We numerically study the phenomenon of spectral compression taking place in a normally dispersive optical fiber. The conditions leading to a nearly Fourier-transform-limited pulse are determined, and we show that, far from degrading the performance, the presence of normal dispersion allows a significant improvement of the results.
Abstract:
We present comprehensive design rules to optimize the process of spectral compression arising from nonlinear pulse propagation in an optical fiber. Extensive numerical simulations are used to predict the performance characteristics of the process as well as to identify the optimal operational conditions within the space of system parameters. It is shown that the group velocity dispersion of the fiber is not detrimental and, in fact, helps achieve optimum compression. We also demonstrate that near-transform-limited rectangular and parabolic pulses can be generated in the region of optimum compression.
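To give a feel for the kind of simulation involved, here is a minimal symmetric split-step Fourier sketch of nonlinear pulse propagation in a normally dispersive fiber with a negatively pre-chirped Gaussian input, so that self-phase modulation narrows the spectrum. All parameter values are illustrative assumptions and are not taken from the paper; the script only prints an estimated spectral compression factor.

import numpy as np

N = 4096
T0 = 1.0                               # ps, input pulse duration (assumed)
dt = 0.02 * T0
t = (np.arange(N) - N // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(N, dt)  # angular frequency grid

beta2 = 0.001                          # ps^2/m, normal group velocity dispersion (assumed)
gamma = 0.005                          # 1/(W*m), nonlinear coefficient (assumed)
P0, L = 50.0, 20.0                     # W peak power, m fiber length (assumed)
C = -2 * gamma * P0 * L                # negative chirp roughly opposing the SPM-induced chirp

A = np.sqrt(P0) * np.exp(-(1 + 1j * C) * t**2 / (2 * T0**2))

def rms_spectral_width(field):
    S = np.abs(np.fft.fft(field))**2
    mean = np.sum(w * S) / np.sum(S)
    return np.sqrt(np.sum((w - mean)**2 * S) / np.sum(S))

width_in = rms_spectral_width(A)

nz = 2000
h = L / nz
D = np.exp(1j * beta2 * w**2 * h / 4)              # half-step dispersion operator
for _ in range(nz):
    A = np.fft.ifft(D * np.fft.fft(A))             # half dispersion step
    A = A * np.exp(1j * gamma * np.abs(A)**2 * h)  # full nonlinear (SPM) step
    A = np.fft.ifft(D * np.fft.fft(A))             # half dispersion step

print(f"estimated spectral compression factor: {width_in / rms_spectral_width(A):.1f}")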
Abstract:
By exploiting an additional sinusoidal temporal phase modulation, we show that it is possible to significantly improve the performance of spectral compression carried out in a highly nonlinear propagation regime. Numerical simulations indicate an improvement of the compression factors as well as of the Strehl ratio.
Abstract:
The contributions of this dissertation are in the development of two new interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation; (2) a shift-invariant sub-decimation decomposition method to overcome the deficiency of the decimation process in estimating motion, which stems from the shift-variant property of the wavelet transform.

The enormous data volumes generated by digital video call for efficient video compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the interpixel redundancies inside and between the video frames by applying motion estimation and motion compensation (MEMC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function reasonably, hierarchical motion estimation by coarse-to-fine resolution refinements using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalability.

Because most of the energy is concentrated in the low-resolution subbands and decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It identifies possible intrablocks in the subbands for lower-entropy coding while keeping the low computational load of motion estimation of the level-refined method, thus achieving both temporal compression quality and computational simplicity.

Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetrically extended wavelet transform is designed for the finite-length frame signals, allowing more accurate motion estimation without discontinuous boundary distortions.

Although wavelet-transformed coefficients still contain spatial-domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift variance of the decimation process of the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains the motion consistency between the original frame and the decomposed subframes, consequently improving wavelet-domain video compression through shift-invariant motion estimation and compensation.
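As a hypothetical illustration of coarse-to-fine motion estimation in general, and not of the dissertation's level-refined wavelet-subband method, the following Python sketch refines a block-matching motion vector over a simple decimation pyramid.

import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def block_match(ref, cur, y, x, bs, search, init=(0, 0)):
    """Search a (2*search+1)^2 window around an initial guess for the best-matching block."""
    best, best_mv = None, init
    for dy in range(init[0] - search, init[0] + search + 1):
        for dx in range(init[1] - search, init[1] + search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + bs > ref.shape[0] or rx + bs > ref.shape[1]:
                continue
            cost = sad(ref[ry:ry + bs, rx:rx + bs], cur[y:y + bs, x:x + bs])
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv

def hierarchical_me(ref, cur, y, x, bs=16, search=4, levels=3):
    """Estimate at the coarsest level, then refine the vector level by level."""
    mv = (0, 0)
    for lvl in range(levels - 1, -1, -1):
        s = 2 ** lvl
        r, c = ref[::s, ::s], cur[::s, ::s]      # simple decimation pyramid
        mv = block_match(r, c, y // s, x // s, max(bs // s, 2), search, mv)
        if lvl:
            mv = (mv[0] * 2, mv[1] * 2)          # scale the vector up to the next finer level
    return mv

# Toy usage: impose a known global shift and recover it for one block.
ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (4, -4), axis=(0, 1))
print(hierarchical_me(ref, cur, 16, 16))         # should recover (-4, 4)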
Abstract:
The focus of this thesis is placed on text data compression based on the fundamental coding scheme referred to as the American Standard Code for Information Interchange, or ASCII. The research objective is the development of software algorithms that result in significant compression of text data. Past and current compression techniques have been thoroughly reviewed to ensure proper contrast between the compression results of the proposed technique and those of existing ones. The research problem is based on the need to achieve higher compression of text files in order to save valuable memory space and increase the transmission rate of these text files. It was deemed necessary that the compression algorithm to be developed be effective even for small files and be able to handle uncommon words, which are dynamically added to the dictionary once they are encountered. A critical design aspect of this compression technique is its compatibility with existing compression techniques. In other words, the developed algorithm can be used in conjunction with existing techniques to yield even higher compression ratios. This thesis demonstrates such capabilities and outcomes, and the research objective of achieving a higher compression ratio is attained.
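As a toy illustration of the dynamic-dictionary behaviour described above, and not of the thesis's actual algorithm, the following Python sketch replaces repeated words with dictionary indices and adds previously unseen words to the dictionary the first time they are encountered.

def compress(text):
    dictionary, out = {}, []
    for word in text.split():
        if word in dictionary:
            out.append(("ref", dictionary[word]))    # emit a dictionary reference
        else:
            dictionary[word] = len(dictionary)       # dynamically add the new word
            out.append(("lit", word))                # emit the literal only once
    return out

def decompress(tokens):
    words, out = [], []
    for kind, value in tokens:
        if kind == "lit":
            words.append(value)
            out.append(value)
        else:
            out.append(words[value])
    return " ".join(out)

text = "the cat sat on the mat and the cat slept"
tokens = compress(text)
assert decompress(tokens) == text                    # round-trip check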