981 results for Set functions.
Abstract:
Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block cipher is ideal. We address the problem of building indifferentiable compression functions from the PGV compression functions. We consider a general form of 64 PGV compression functions and replace the linear feed-forward operation in this generic PGV compression function with an ideal block cipher independent of the one used in the generic PGV construction. This modified construction is called a generic modified PGV (MPGV). We analyse indifferentiability of the generic MPGV construction in the ideal cipher model and show that 12 out of 64 MPGV compression functions in this framework are indifferentiable from a FIL-RO. To our knowledge, this is the first result showing that two independent block ciphers are sufficient to design indifferentiable single-block-length compression functions.
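As a concrete illustration of the single-block-length designs analysed above, here is a minimal sketch of the Davies-Meyer mode, one of the 12 secure PGV schemes, with the linear feed-forward XOR made explicit (the operation that the MPGV construction replaces with a second, independent ideal cipher). The toy cipher is a hashlib-based stand-in for an ideal block cipher, not a real or invertible cipher.

```python
import hashlib

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for an ideal block cipher E_key(block); illustration only
    # (not invertible and not a secure cipher design).
    return hashlib.sha256(b"E" + key + block).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def davies_meyer(h: bytes, m: bytes) -> bytes:
    # PGV scheme f(h, m) = E_m(h) XOR h: the message block keys the
    # cipher and the chaining value h is fed forward linearly. The MPGV
    # idea is to replace this XOR feed-forward with a second block
    # cipher independent of E.
    return xor(toy_cipher(m, h), h)
```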
Abstract:
Measurements of half-field beam penumbra were taken using EBT2 film for a variety of blocking techniques. It was shown that minimizing the SSD reduces the penumbra, as the effects of beam divergence are diminished. The addition of a lead block directly on the surface provides optimal results, with a 10-90% penumbra of 0.53 ± 0.02 cm. To resolve the uncertainties encountered in film measurements, future Monte Carlo simulations of half-field penumbras are to be conducted.
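The 10-90% penumbra quoted above is the lateral distance over which the dose falls from 90% to 10% of the open-field value. A minimal sketch of how it can be computed from a measured profile (the function name and the linear-interpolation scheme are illustrative):

```python
def penumbra_10_90(positions, doses):
    """Distance (same units as positions) between the 90% and 10% dose
    points on the falling edge of a normalized beam profile.

    Assumes `doses` decreases monotonically across the field edge and
    is normalized so the plateau dose is 1.0.
    """
    def crossing(level):
        # Linear interpolation to find where the profile crosses `level`.
        for i in range(len(doses) - 1):
            d0, d1 = doses[i], doses[i + 1]
            if d0 >= level >= d1:
                frac = (d0 - level) / (d0 - d1)
                return positions[i] + frac * (positions[i + 1] - positions[i])
        raise ValueError(f"profile never crosses {level}")
    return abs(crossing(0.1) - crossing(0.9))

# Example: an idealized linear edge falling from 1.0 to 0.0 over 1 cm;
# the 10%-90% span of such a ramp is 0.8 cm.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ds = [1.0, 0.75, 0.5, 0.25, 0.0]
print(round(penumbra_10_90(xs, ds), 3))  # prints 0.8
```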
Abstract:
With the overwhelming increase in the amount of data on the web and in databases, many text mining techniques have been proposed for mining useful patterns in text documents. Extracting closed sequential patterns using the Pattern Taxonomy Model (PTM) is one of the pruning methods used to remove noisy, inconsistent, and redundant patterns. However, the PTM treats each extracted pattern as a whole without considering its constituent terms, which can affect the quality of the extracted patterns. This paper proposes an innovative and effective method that extends random set theory to accurately weight patterns based on their distribution in the documents and the distribution of terms within patterns. The proposed approach then finds the specific closed sequential patterns (SCSP) based on the newly calculated weights. Experimental results on the Reuters Corpus Volume 1 (RCV1) data collection and TREC topics show that the proposed method significantly outperforms other state-of-the-art methods on several popular measures.
Abstract:
Structural damage detection using measured dynamic data for pattern recognition is a promising approach. These pattern recognition techniques utilize artificial neural networks and genetic algorithms to match pattern features. In this study, an artificial neural network–based damage detection method using frequency response functions is presented, which can effectively detect nonlinear damage for a given level of excitation. The main objective of this article is to present a feasible method for structural vibration–based health monitoring that reduces the dimension of the initial frequency response function data, transforms it into new damage indices, and employs an artificial neural network to detect different levels of nonlinearity using damage patterns recognized by the proposed algorithm. Experimental data from the three-story bookshelf structure at Los Alamos National Laboratory are used to validate the proposed method. Results showed that the levels of nonlinear damage can be identified precisely by the developed artificial neural networks. Moreover, artificial neural networks trained with summation frequency response functions give more precise damage detection results than those trained with individual frequency response functions. The proposed method is therefore a promising tool for structural assessment of real structures, as it shows reliable results with experimental data for nonlinear damage detection and renders the frequency response function–based method convenient for structural health monitoring.
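The dimension-reduction step described above, collapsing many per-sensor FRFs into a single feature curve before the neural network, can be sketched as follows. The summation FRF is as named in the abstract; the scalar damage index below is a hypothetical illustration of turning that curve into a feature, not the paper's actual index construction.

```python
def summation_frf(frfs):
    # frfs: list of per-sensor FRFs, each a list of (possibly complex)
    # values over the same frequency grid. Summing magnitudes over all
    # sensors reduces the multi-sensor FRF matrix to a single curve.
    n = len(frfs[0])
    return [sum(abs(h[k]) for h in frfs) for k in range(n)]

def damage_index(baseline_frfs, current_frfs):
    # Hypothetical scalar index: relative deviation of the current
    # summation FRF from the undamaged baseline.
    b = summation_frf(baseline_frfs)
    c = summation_frf(current_frfs)
    num = sum((ci - bi) ** 2 for bi, ci in zip(b, c)) ** 0.5
    den = sum(bi ** 2 for bi in b) ** 0.5
    return num / den
```

A vector of such indices (e.g., one per frequency band) could then serve as the input layer of the neural network classifier.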
Abstract:
Cryptographic hash functions are an important tool of cryptography and play a fundamental role in efficient and secure information processing. A hash function processes an arbitrary finite-length input message to a fixed-length output referred to as the hash value. As a security requirement, a hash value should not serve as an image for two distinct input messages and it should be difficult to find the input message from a given hash value. Secure hash functions serve data integrity, non-repudiation and authenticity of the source in conjunction with digital signature schemes. Keyed hash functions, also called message authentication codes (MACs), serve data integrity and data origin authentication in the secret-key setting. The building blocks of hash functions can be designed using block ciphers, modular arithmetic, or from scratch. The design principles of the popular Merkle–Damgård construction are followed in almost all widely used standard hash functions such as MD5 and SHA-1.
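The Merkle–Damgård construction mentioned at the end iterates a compression function over length-padded message blocks. A minimal sketch, using a hashlib-based stand-in for the compression function and MD5/SHA-1 style length padding (the 64-byte block size and 16-byte state are illustrative):

```python
import hashlib

def md_pad(msg: bytes, block_size: int = 64) -> bytes:
    # Merkle-Damgard strengthening: append a single 1 bit (0x80 byte),
    # zero padding, then the original message length in bits.
    length_bits = len(msg) * 8
    msg += b"\x80"
    msg += b"\x00" * ((block_size - 8 - len(msg) % block_size) % block_size)
    return msg + length_bits.to_bytes(8, "big")

def toy_compress(state: bytes, block: bytes) -> bytes:
    # Stand-in compression function (illustration only, not a secure design).
    return hashlib.sha256(state + block).digest()[:16]

def md_hash(msg: bytes, iv: bytes = b"\x00" * 16) -> bytes:
    # Iterate the compression function over the padded blocks, chaining
    # the 16-byte state from block to block; the final state is the hash.
    state = iv
    padded = md_pad(msg)
    for i in range(0, len(padded), 64):
        state = toy_compress(state, padded[i:i + 64])
    return state
```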
Abstract:
We analyse the security of iterated hash functions that compute an input-dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimage attack of Kelsey and Schneier, the herding attack of Kelsey and Kohno and the multicollision attack of Joux. Our attacks also apply to a large class of cascaded hash functions. Our second preimage attacks on the cascaded hash functions improve the results of Joux presented at Crypto’04. We also apply our attacks to the MD2 and GOST hash functions. Our second preimage attacks on the MD2 and GOST hash functions improve the previous best known short-cut second preimage attacks on these hash functions by factors of at least 2^26 and 2^54, respectively. Our herding and multicollision attacks on the hash functions based on generic checksum functions (e.g., one-way) are a special case of the attacks on the cascaded iterated hash functions previously analysed by Dunkelman and Preneel and are not better than their attacks. On hash functions with easily invertible checksums, our multicollision and herding attacks (if the hash value is short as in MD2) are more efficient than those of Dunkelman and Preneel.
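The class of constructions under attack can be sketched generically. The XOR checksum below is just one instance (the abstract notes the attacks apply even to non-linear or one-way checksum functions), and the compression function is a hashlib stand-in; fixed 16-byte blocks and state are assumptions for brevity.

```python
import hashlib

def compress(state: bytes, block: bytes) -> bytes:
    # Stand-in compression function (illustration only).
    return hashlib.sha256(state + block).digest()[:16]

def checksum_hash(blocks, iv=b"\x00" * 16):
    # Checksum-augmented iterated hash (sketch): alongside the normal
    # Merkle-Damgard chain, a running checksum over the message blocks
    # is accumulated and processed as an extra final block.
    state = iv
    checksum = b"\x00" * 16
    for b in blocks:
        state = compress(state, b)
        checksum = bytes(x ^ y for x, y in zip(checksum, b))
    return compress(state, checksum)
```

The point of the abstract is that this extra checksum block does not, by itself, defeat the generic second-preimage, herding, and multicollision attacks.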
Abstract:
In this paper we present concrete collision and preimage attacks on a large class of compression function constructions making two calls to the underlying ideal primitives. The complexity of the collision attack is above the theoretical lower bound for constructions of this type, but below the birthday bound; the complexity of the preimage attack, however, is equal to the theoretical lower bound. We also present undesirable properties of some of Stam’s compression functions proposed at CRYPTO ’08. We show that when one of the n-bit to n-bit components of the proposed 2n-bit to n-bit compression function is replaced by a fixed-key cipher in the Davies-Meyer mode, the complexity of finding a preimage is 2^(n/3). We also show that the complexity of finding a collision in a variant of the 3n-bit to 2n-bit scheme with its output truncated to 3n/2 bits is 2^(n/2). The complexity of our preimage attack on this hash function is about 2^n. Finally, we present a collision attack on a variant of the proposed (m+s)-bit to s-bit scheme, truncated to s−1 bits, with a complexity of O(1). However, none of our results compromise Stam’s security claims.
Abstract:
Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to hash-then-sign digital signature schemes such as DSS and RSA, in order to free them of their reliance on the collision resistance of the hash function. They showed that to forge an RMX-hash-then-sign signature, one has to solve a cryptanalytical task related to finding second preimages for the hash function. In this article, we show how to use Dean’s method of finding expandable messages for finding a second preimage in the Merkle-Damgård hash function to existentially forge a signature scheme based on a t-bit RMX-hash function which uses Davies-Meyer compression functions (e.g., MD4, MD5, the SHA family) in 2^(t/2) chosen messages plus 2^(t/2+1) off-line operations of the compression function and a similar amount of memory. This forgery attack also works on signature schemes that use Davies-Meyer schemes and a variant of RMX published by NIST in its Draft Special Publication (SP) 800-106. We discuss some important applications of our attack.
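A simplified sketch of the RMX randomization idea referenced above: fresh randomness r is prepended to the message and XORed into each message block before hashing, so that the signer hashes a randomized message and forging requires a second-preimage-style task rather than a collision. The block size is illustrative and the real RMX padding and formatting details are omitted.

```python
import os

def rmx(message_blocks, r=None):
    # RMX-style front-end (simplified sketch): choose fresh randomness r,
    # XOR it into every 16-byte message block, and prepend r itself.
    # The hash-then-sign scheme then hashes and signs this output.
    if r is None:
        r = os.urandom(16)
    randomized = [bytes(x ^ y for x, y in zip(b, r)) for b in message_blocks]
    return r, [r] + randomized
```

Note that XOR is self-inverse, so the verifier, given r with the signature, can recompute the randomized message from the original one.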
Abstract:
In the modern era of information and communication technology, cryptographic hash functions play an important role in ensuring the authenticity, integrity, and nonrepudiation goals of information security as well as efficient information processing. This entry provides an overview of the role of hash functions in information security, popular hash function designs, some important analytical results, and recent advances in this field.
Abstract:
Mode indicator functions (MIFs) are used in modal testing and analysis as a means of identifying modes of vibration, often as a precursor to modal parameter estimation. Various methods have been developed since the MIF was introduced four decades ago. These methods are quite useful in assisting the analyst to identify genuine modes and, in the case of the complex mode indicator function, have even been developed into modal parameter estimation techniques. Although the various MIFs are able to indicate the existence of a mode, they do not provide the analyst with any descriptive information about the mode. This paper uses the simple summation type of MIF to develop five averaged and normalised MIFs that will provide the analyst with enough information to identify whether a mode is longitudinal, vertical, lateral or torsional. The first three functions, termed directional MIFs, have been noted in the literature in one form or another; however, this paper introduces a new twist on the MIF by introducing two MIFs, termed torsional MIFs, that can be used by the analyst to identify torsional modes and, moreover, can assist in determining whether the mode is of a pure torsion or sway type (i.e., having a rigid cross-section) or a distorted twisting type. The directional and torsional MIFs are tested on a finite element model based simulation of an experimental modal test using an impact hammer. Results indicate that the directional and torsional MIFs are indeed useful in assisting the analyst to identify whether a mode is longitudinal, vertical, lateral, sway, or torsion.
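The simple summation MIF that the paper builds on can be sketched directly. The directional variant below is a hypothetical illustration of the averaging-and-normalising idea (the paper's exact normalisation may differ); FRFs are keyed by (point, direction) tuples, which is an assumed data layout.

```python
def summation_mif(frfs):
    # Summation mode indicator function: at each spectral line, sum the
    # FRF magnitudes over all measured DOFs. Peaks indicate modes.
    n = len(next(iter(frfs.values())))
    return [sum(abs(h[k]) for h in frfs.values()) for k in range(n)]

def directional_mif(frfs, direction):
    # Hypothetical directional MIF: restrict the sum to DOFs measured in
    # one direction ('x', 'y' or 'z'), rescale by the number of DOFs, and
    # normalise by the overall summation MIF. A value near 1 at a peak
    # suggests the mode is dominated by motion in that direction.
    total = summation_mif(frfs)
    subset = {dof: h for dof, h in frfs.items() if dof[1] == direction}
    part = [sum(abs(h[k]) for h in subset.values()) for k in range(len(total))]
    scale = len(frfs) / max(len(subset), 1)
    return [scale * p / t if t else 0.0 for p, t in zip(part, total)]
```

A torsional MIF could be built the same way from differenced responses of paired points across the cross-section, which is how twisting motion would show up.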
Abstract:
There has been a paucity of published research on the temporal aspect of destination image change over time. Given increasing investments in destination branding, research is needed to enhance understanding of how to monitor destination brand performance over time, of which destination image is the core construct. This article reports the results of four studies tracking the brand performance of a competitive set of five destinations between 2003 and 2012. Results indicate minimal changes in perceptions held of the five destinations of interest over the 10 years, supporting the assertion of Gartner (1986) and Gartner and Hunt (1987) that destination image changes only slowly over time. While undertaken in Australia, the research approach provides DMOs in other parts of the world with a practical tool for evaluating brand performance over time, both as a measure of the effectiveness of past marketing communications and as an indicator of future performance.
Abstract:
The Commission has been asked to identify appropriate options for reducing entry and exit barriers including advice on the potential impacts of the personal/corporate insolvency regimes on business exits...
Abstract:
The Commission has released a Draft Report on Business Set-Up, Transfer and Closure for public consultation and input. It is pleasing to note that three chapters of the Draft Report address aspects of personal and corporate insolvency. Nevertheless, we continue to make the submission to national policy inquiries and discussions that a comprehensive review should be undertaken of the regulation of insolvency and restructuring in Australia. The last comprehensive review of the insolvency system was by the Australian Law Reform Commission (the Harmer Report) and was handed down in 1988. Whilst aspects of our insolvency laws have been reviewed since that time, none of those reviews has provided the clear and comprehensive analysis that can come from a more considered review. Such a review ought to be conducted by the Australian Law Reform Commission or a similar independent panel set up for the task. We also suggest that there is a lack of data available to assist with addressing questions raised by the Draft Report. There is a need to invest in finding out, in a rigorous and informed way, how the current law operates. Until there is a willingness to make a public investment in such research, with less reliance upon anecdotal evidence (often from well-meaning but ultimately inadequately informed participants and others), the government cannot be sure that the insolvency regime we have provides the most effective regime to underpin Australia’s commercial and financial dealings, nor that any change is justified. We also make the submission that there are benefits in a serious investigation into a merged regulatory architecture of personal and corporate insolvency and a combined personal and corporate insolvency regulator.
Abstract:
We propose a new information-theoretic metric, the symmetric Kullback-Leibler divergence (sKL-divergence), to measure the difference between two water diffusivity profiles in high angular resolution diffusion imaging (HARDI). Water diffusivity profiles are modeled as probability density functions on the unit sphere, and the sKL-divergence is computed from a spherical harmonic series, which greatly reduces computational complexity. Adjustment of the orientation of diffusivity functions is essential when the image is being warped, so we propose a fast algorithm to determine the principal direction of diffusivity functions using principal component analysis (PCA). We compare sKL-divergence with other inner-product based cost functions using synthetic samples and real HARDI data, and show that the sKL-divergence is highly sensitive in detecting small differences between two diffusivity profiles and therefore shows promise for applications in the nonlinear registration and multisubject statistical analysis of HARDI data.
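For intuition, the symmetric KL divergence itself reduces to a simple formula, sKL(p, q) = KL(p||q) + KL(q||p) = Σ (p_i − q_i) ln(p_i / q_i). The sketch below evaluates it for discrete distributions; the paper instead works with continuous diffusivity profiles on the unit sphere via a spherical harmonic series.

```python
import math

def skl_divergence(p, q):
    # Symmetric Kullback-Leibler divergence between two discrete
    # distributions (all entries assumed strictly positive):
    #   KL(p||q) + KL(q||p) = sum_i (p_i - q_i) * ln(p_i / q_i).
    # Each term is non-negative, so sKL >= 0 with equality iff p == q.
    assert len(p) == len(q)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

The symmetry sKL(p, q) = sKL(q, p) is what makes it usable as a (pre-)metric for registration cost functions, unlike plain KL divergence.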