902 results for GENERIC SIMPLICITY
The suffix-free-prefix-free hash function construction and its indifferentiability security analysis
Abstract:
In this paper, we observe that in the seminal work on indifferentiability analysis of iterated hash functions by Coron et al. and in subsequent works, the initial value (IV) of hash functions is fixed. In addition, these indifferentiability results do not depend on the Merkle–Damgård (MD) strengthening in the padding functionality of the hash functions. We propose a generic n-bit-iterated hash function framework based on an n-bit compression function, called suffix-free-prefix-free (SFPF), that works for arbitrary IVs and does not employ MD strengthening. We formally prove that SFPF is indifferentiable from a random oracle (RO) when the compression function is viewed as a fixed input-length random oracle (FIL-RO). We show that some hash function constructions proposed in the literature fit in the SFPF framework while others that do not fit in this framework are not indifferentiable from an RO. We also show that the SFPF hash function framework with the provision of MD strengthening generalizes any n-bit-iterated hash function based on an n-bit compression function with an n-bit chaining value that is proven indifferentiable from an RO.
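For readers unfamiliar with the generic framework being generalized here, the sketch below shows a plain n-bit iterated hash built from an n-bit compression function with an n-bit chaining value. It is a minimal illustration only, not the SFPF construction itself: the compression function, block size, padding rule and IV handling are placeholder assumptions, precisely the design choices the paper analyses.

```python
# Minimal sketch of a generic n-bit iterated hash (NOT the SFPF construction):
# an n-bit chaining value is updated by an n-bit compression function f over
# fixed-size message blocks. The compression function, padding rule and IV
# handling below are placeholders for illustration only.
import hashlib

N_BYTES = 32          # n = 256 bits for this toy example
BLOCK_BYTES = 64      # assumed message block size

def f(chaining: bytes, block: bytes) -> bytes:
    """Placeholder n-bit compression function (modelled as a FIL-RO in the paper)."""
    return hashlib.sha256(chaining + block).digest()

def pad(msg: bytes) -> bytes:
    """Simple 10*-style padding to a multiple of the block size (no MD strengthening)."""
    msg += b"\x80"
    return msg + b"\x00" * (-len(msg) % BLOCK_BYTES)

def iterated_hash(msg: bytes, iv: bytes = b"\x00" * N_BYTES) -> bytes:
    """Iterate f over the padded message, starting from an arbitrary IV."""
    h = iv
    padded = pad(msg)
    for i in range(0, len(padded), BLOCK_BYTES):
        h = f(h, padded[i:i + BLOCK_BYTES])
    return h

print(iterated_hash(b"hello world").hex())
```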
Abstract:
At CRYPTO 2006, Halevi and Krawczyk proposed two randomized hash function modes and analyzed the security of digital signature algorithms based on these constructions. They showed that the security of signature schemes based on the two randomized hash function modes relies on properties similar to second preimage resistance rather than on the collision resistance of the hash functions. One of the randomized hash function modes was named the RMX hash function mode and was recommended for practical purposes. The National Institute of Standards and Technology (NIST), USA, standardized a variant of the RMX hash function mode and published this standard in Special Publication (SP) 800-106. In this article, we first discuss a generic online birthday existential forgery attack of Dang and Perlner on the RMX-hash-then-sign schemes. We show that a variant of this attack can be applied to forge signatures in the other randomize-hash-then-sign schemes. We point out practical limitations of the generic forgery attack on the RMX-hash-then-sign schemes. We then show that these limitations can be overcome for the RMX-hash-then-sign schemes if it is easy to find fixed points for the underlying compression functions, as is the case for the Davies-Meyer construction used in popular hash functions such as MD5, designed by Rivest, and the SHA family of hash functions, designed by the National Security Agency (NSA), USA, and published by NIST in the Federal Information Processing Standards (FIPS). We show an online birthday forgery attack on this class of signatures by using a variant of Dean’s method of finding fixed-point expandable messages for hash functions based on the Davies-Meyer construction. This forgery attack is also applicable to signature schemes based on the variant of RMX standardized by NIST in SP 800-106. We discuss some important applications of our attacks and their applicability to signature schemes based on hash functions with ‘built-in’ randomization. Finally, we compare our attacks on randomize-hash-then-sign schemes with the generic forgery attacks on the standard hash-based message authentication code (HMAC).
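The fixed-point property being exploited is easy to see on a toy example. A Davies-Meyer compression function has the form f(h, m) = E_m(h) xor h, so choosing h = D_m(0) (the decryption of zero under the message block) gives f(h, m) = h. The sketch below demonstrates this with a small toy Feistel cipher; the cipher, block sizes and constants are illustrative assumptions, not MD5 or SHA internals, and the sketch shows only the fixed-point property, not the full forgery attack.

```python
# Davies-Meyer fixed points: f(h, m) = E_m(h) XOR h, so h = D_m(0) satisfies
# f(h, m) = 0 XOR h = h. Demonstrated with a toy 32-bit Feistel cipher
# (purely illustrative; real attacks target MD5/SHA compression functions).
MASK16 = 0xFFFF

def round_fn(x: int, k: int) -> int:
    return ((x * 0x9E37 + k) ^ (x >> 3)) & MASK16

def round_key(key: int, r: int) -> int:
    return (key >> (4 * (r % 8))) & MASK16

def encrypt(block: int, key: int, rounds: int = 8) -> int:
    left, right = (block >> 16) & MASK16, block & MASK16
    for r in range(rounds):
        left, right = right, left ^ round_fn(right, round_key(key, r))
    return (left << 16) | right

def decrypt(block: int, key: int, rounds: int = 8) -> int:
    left, right = (block >> 16) & MASK16, block & MASK16
    for r in reversed(range(rounds)):
        left, right = right ^ round_fn(left, round_key(key, r)), left
    return (left << 16) | right

def davies_meyer(h: int, m: int) -> int:
    """Davies-Meyer: chaining value encrypted under the message block, XORed back."""
    return encrypt(h, m) ^ h

m = 0xDEADBEEFCAFEBABE          # arbitrary message block (used as the cipher key)
h_fixed = decrypt(0, m)          # h = D_m(0) is a fixed point of f(., m)
assert davies_meyer(h_fixed, m) == h_fixed
print(f"fixed point for block {m:#x}: h = {h_fixed:#010x}")
```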
Abstract:
We analyse the security of iterated hash functions that compute an input-dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimage attack of Kelsey and Schneier, the herding attack of Kelsey and Kohno and the multicollision attack of Joux. Our attacks also apply to a large class of cascaded hash functions. Our second preimage attacks on the cascaded hash functions improve the results of Joux presented at Crypto ’04. We also apply our attacks to the MD2 and GOST hash functions. Our second preimage attacks on the MD2 and GOST hash functions improve the previous best known short-cut second preimage attacks on these hash functions by factors of at least 2^26 and 2^54, respectively. Our herding and multicollision attacks on the hash functions based on generic checksum functions (e.g., one-way) are a special case of the attacks on the cascaded iterated hash functions previously analysed by Dunkelman and Preneel and are not better than their attacks. On hash functions with easily invertible checksums, our multicollision and herding attacks (if the hash value is short, as in MD2) are more efficient than those of Dunkelman and Preneel.
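The class of schemes under attack processes an input-dependent checksum of the message blocks as part of the final computation, roughly as in the hedged sketch below. The compression function, block size and XOR checksum are placeholder assumptions, not the actual MD2 or GOST internals; the paper's attacks also cover non-linear and one-way checksum functions.

```python
# Sketch of an iterated hash with an input-dependent checksum (the class of
# schemes analysed above): the checksum of all message blocks is processed as
# a final block. The compression and checksum functions are placeholders.
import hashlib

BLOCK = 32

def compress(h: bytes, block: bytes) -> bytes:
    return hashlib.sha256(h + block).digest()

def checksum_update(c: bytes, block: bytes) -> bytes:
    # XOR checksum for simplicity; the attacks also cover non-linear/one-way checksums
    return bytes(a ^ b for a, b in zip(c, block))

def checksum_hash(msg: bytes) -> bytes:
    msg += b"\x00" * (-len(msg) % BLOCK)          # naive zero padding
    h = b"\x00" * 32
    c = b"\x00" * BLOCK
    for i in range(0, len(msg), BLOCK):
        block = msg[i:i + BLOCK]
        h = compress(h, block)                     # iterated chaining value
        c = checksum_update(c, block)              # input-dependent checksum
    return compress(h, c)                          # checksum processed at the end

print(checksum_hash(b"example message").hex())
```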
Abstract:
Grøstl is a SHA-3 candidate proposal. Grøstl is an iterated hash function with a compression function built from two fixed, large, distinct permutations. The design of Grøstl is transparent and based on principles very different from those used in the SHA family. The two permutations are constructed using the wide trail design strategy, which makes it possible to give strong statements about the resistance of Grøstl against large classes of cryptanalytic attacks. Moreover, if these permutations are assumed to be ideal, there is a proof of the security of the hash function. Grøstl is a byte-oriented SP-network which borrows components from the AES. The S-box used is identical to the one used in the block cipher AES, and the diffusion layers are constructed in a manner similar to those of the AES. As a consequence, Grøstl has very strong confusion and diffusion. Grøstl is a so-called wide-pipe construction, in which the size of the internal state is significantly larger than the size of the output. This makes all known generic attacks on the hash function much more difficult. Grøstl has good performance on a wide range of platforms, and counter-measures against side-channel attacks are well understood from similar work on the AES.
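To give a sense of the wide-pipe, two-permutation structure, the sketch below iterates a compression function of the documented Grøstl shape f(h, m) = P(h xor m) xor Q(m) xor h over a 512-bit internal state and truncates to a 256-bit output. The permutation-like functions P and Q here are stand-ins, not Grøstl's actual AES-derived permutations, and the padding is simplified.

```python
# Sketch of a wide-pipe, two-permutation compression in the style described
# above: the internal state (here 512 bits) is wider than the output (256 bits).
# P and Q are stand-in functions, NOT Grostl's actual permutations.
import hashlib

STATE = 64   # 512-bit internal state
OUT = 32     # 256-bit output

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def perm(tag: bytes, x: bytes) -> bytes:
    """Placeholder for a fixed permutation, distinguished by a constant tag."""
    return hashlib.sha512(tag + x).digest()[:STATE]

def compress(h: bytes, m: bytes) -> bytes:
    # f(h, m) = P(h xor m) xor Q(m) xor h, as in the Grostl design document
    return xor(xor(perm(b"P", xor(h, m)), perm(b"Q", m)), h)

def wide_pipe_hash(msg: bytes) -> bytes:
    msg += b"\x80" + b"\x00" * (-(len(msg) + 1) % STATE)   # simplified padding
    h = b"\x00" * STATE
    for i in range(0, len(msg), STATE):
        h = compress(h, msg[i:i + STATE])
    # output transformation: permute the wide state once more, then truncate
    return xor(perm(b"P", h), h)[-OUT:]

print(wide_pipe_hash(b"wide pipe example").hex())
```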
Abstract:
This article discusses the design of interactive online activities that introduce problem-solving skills to first-year law students. They are structured around the narrative framework of ‘Ruby’s Music Festival’, in which a young business entrepreneur encounters various issues when organising a music festival and students use a generic problem-solving method to provide legal solutions. These online activities offer students the opportunity to obtain early formative feedback on their legal problem-solving abilities prior to undertaking a later summative assessment task. The design of the activities around the Ruby narrative framework and the benefits of providing students with early formative feedback are discussed.
Abstract:
Multidimensional data have been receiving increasing attention from researchers building recommender systems in recent years. Additional metadata provides algorithms with more detail for better understanding the interaction between users and items. While neighbourhood-based Collaborative Filtering (CF) approaches and latent factor models tackle this task effectively in various ways, they each utilize only part of the structure of the data. In this paper, we seek to delve into the different types of relations in the data and to understand the interaction between users and items more holistically. We propose a generic multidimensional CF fusion approach for top-N item recommendations. The proposed approach is capable of incorporating not only localized user-user and item-item relations but also latent interactions between all dimensions of the data. Experimental results show significant improvements by the proposed approach in terms of recommendation accuracy.
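As a simplified illustration of fusing a neighbourhood signal with a latent-factor signal for top-N recommendation (a hedged sketch only, not the fusion approach proposed in the paper), one might blend an item-item similarity score with a low-rank factorisation score; the toy data, blending weight and model choices below are assumptions.

```python
# Simplified fusion of a neighbourhood (item-item) signal with a latent-factor
# signal for top-N recommendation. Illustrative sketch only; data, weights and
# models are placeholders, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
R = (rng.random((50, 40)) > 0.8).astype(float)    # toy user-item interaction matrix

# Item-item cosine similarity (neighbourhood signal)
norms = np.linalg.norm(R, axis=0, keepdims=True) + 1e-9
S = (R / norms).T @ (R / norms)
neigh_scores = R @ S                               # score items via co-consumed neighbours

# Low-rank latent factors via truncated SVD (latent signal)
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 10
latent_scores = (U[:, :k] * s[:k]) @ Vt[:k, :]

def top_n(user: int, n: int = 5, alpha: float = 0.5):
    """Blend the two signals and exclude items the user has already interacted with."""
    scores = alpha * neigh_scores[user] + (1 - alpha) * latent_scores[user]
    scores[R[user] > 0] = -np.inf
    return np.argsort(scores)[::-1][:n]

print(top_n(user=0))
```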
Abstract:
This article is concerned with the many connections between creative work and workers, and education work and industries. Employment in the education sector has long been recognised as a significant element in creative workers’ portfolio careers. Much has been written, for example, about the positive contribution of ‘artists in schools’ initiatives. Australian census analyses reveal that education is the most common industry sector into which creative workers are ‘embedded’, outside of the core creative industries. However, beyond case studies and some survey research into arts instruction and instructors, we know remarkably little about which education roles and sectors creative workers are embedded in, and the types of value that they add in those roles and sectors. This article reviews the extant literature on creative work and workers in education, and presents the findings of a survey of 916 graduates from creative undergraduate degrees in Australia. The findings suggest that education work is indeed very common among creative graduates, that there is a range of motivating factors for education work among creative graduates, that on average they are satisfied with their careers, and that creative graduates add significant creative-cultural and creative-generic value through their work.
Abstract:
Final report for the Australian Government Office for Learning and Teaching. "This seed project ‘Design thinking frameworks as transformative cross-disciplinary pedagogy’ aimed to examine the way design thinking strategies are used across disciplines to scaffold the development of student attributes in the domain of problem solving and creativity in order to enhance the nation’s capacity for innovation. Generic graduate attributes associated with innovation, creativity and problem solving are considered to be amongst the most important of all targeted attributes (Bradley Review of Higher Education, 2009). The project also aimed to gather data on how academics across disciplines conceptualised design thinking methodologies and strategies. Insights into how design thinking strategies could be embedded at the subject level to improve student outcomes were of particular interest in this regard. A related aim was the investigation of how design thinking strategies could be used by academics when designing new and innovative subjects and courses."
Case Study 3: QUT Community Engaged Learning Lab Design Thinking/Design Led Innovation Workshop, by Natalie Wright.
Context: "The author, from the discipline area of Interior Design in the QUT School of Design, Faculty of Creative Industries, is a contributing academic and tutor for The Community Engaged Learning Lab, which was initiated at Queensland University of Technology in 2012. The Lab facilitates university-wide service-learning experiences and engages students, academics, and key community organisations in interdisciplinary action research projects to support student learning and to explore complex and ongoing problems nominated by the community partners. In Week 3, Semester One 2013, with the assistance of co-lead Dr Cara Wrigley, Senior Lecturer in Design led Innovation, a Masters of Architecture research student and nine participating industry-embedded Masters of Research (Design led Innovation) facilitators, a Design Thinking/Design led Innovation workshop was conducted for the Community Engaged Learning Lab students, and action research outcomes published at 2013 Tsinghua International Design Management Symposium, December 2013 in Shenzhen, China (Morehen, Wright, & Wrigley, 2013)."
Abstract:
We present an overview of the QUT plant classification system submitted to LifeCLEF 2014. This system uses generic features extracted from a convolutional neural network previously used to perform general object classification. We examine the effectiveness of these features to perform plant classification when used in combination with an extremely randomised forest. Using this system, with minimal tuning, we obtained relatively good results with a score of 0.249 on the test set of LifeCLEF 2014.
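The pipeline amounts to treating a pretrained CNN as a fixed feature extractor and classifying those features with an extremely randomised forest. The sketch below abstracts the feature extraction behind stand-in vectors; the feature dimensionality, class count and hyperparameters are illustrative assumptions, not those used in the submission.

```python
# Sketch of the generic-CNN-features idea: features from a pretrained network
# (abstracted here as precomputed vectors) are classified with an extremely
# randomised forest. All parameters are illustrative only.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

# Stand-in for CNN features of plant images (e.g. activations of a late layer)
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4096)).astype(np.float32)
y = rng.integers(0, 10, size=500)                  # 10 stand-in species labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = ExtraTreesClassifier(n_estimators=300, n_jobs=-1, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy on held-out stand-in data:", clf.score(X_te, y_te))
```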
Abstract:
Dried plant food products are increasing in demand in the consumer market, leading to continuing research to develop better products and processing techniques. Plant materials are porous structures which undergo large deformations during drying. For any given food material, porosity and other cellular parameters have a direct influence on the level of shrinkage and the deformation characteristics during drying, which involve complex mechanisms. In order to better understand such mechanisms and their interrelationships, numerical modelling can be used as a tool. In contrast to conventional grid-based modelling techniques, meshfree methods are considered to have a higher potential for modelling large deformations of multiphase problem domains. This work uses a meshfree-based microscale plant tissue drying model, which was recently developed by the authors. Here, the effects of porosity are newly accounted for in the model, with the objective of studying porosity development during drying and its influence on shrinkage at the cellular level. For simplicity, only open pores are modelled, and in order to investigate the influence of different cellular parameters, both apple and grape tissues were used in the study. The simulation results indicate that porosity negatively influences shrinkage during drying and that porosity decreases as the moisture content decreases (when open pores are considered). Also, there is a clear difference in the deformations of cells, tissues and pores, which is mainly influenced by cell wall contraction effects during drying.
Abstract:
So far, low-probability differentials for the key schedule of block ciphers have been used as a straightforward proof of security against related-key differential analysis. To achieve resistance, it is believed that for a cipher with a k-bit key it suffices for the upper bound on the probability to be 2^-k. Surprisingly, we show that this reasonable assumption is incorrect, and that the probability should be (much) lower than 2^-k. Our counterexample is a related-key differential analysis of the well-established block cipher CLEFIA-128. We show that although the key schedule of CLEFIA-128 prevents differentials with a probability higher than 2^-128, the linear part of the key schedule that produces the round keys, together with the Feistel structure of the cipher, allows particularly chosen differentials with a probability as low as 2^-128 to be exploited. CLEFIA-128 has 2^14 such differentials, which translate to 2^14 pairs of weak keys. The probability of each differential is too low on its own, but the weak keys have a special structure which allows a divide-and-conquer approach to gain an advantage of 2^7 over generic analysis. We exploit this advantage to give a membership test for the weak-key class and to provide an analysis of the hashing modes. The proposed analysis has been tested with computer experiments on small-scale variants of CLEFIA-128. Our results do not threaten the practical use of CLEFIA.
Abstract:
Railways are an important mode of transportation. They are, however, large and complex, and their construction, management and operation are time-consuming and costly. Evidently, planning current and future activities is vital. Part of that planning process is an analysis of capacity. To determine what volume of traffic can be achieved over time, a variety of railway capacity analysis techniques have been created. A generic analytical approach that incorporates more complex train paths, however, has yet to be provided. This article provides such an approach. This article extends a mathematical model for determining the theoretical capacity of a railway network. The main contribution of this paper is the modelling of more complex train paths, whereby each section can be visited many times in the course of a train’s journey. Three variant models are formulated and then demonstrated in a case study. This article’s numerical investigations have successfully shown the applicability of the proposed models and how they may be used to gain insights into system performance.
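A much-simplified analytical capacity calculation of this flavour can be phrased as a linear program: maximise the number of trains of each path type subject to each section's available time, where a path may occupy a section more than once. The sketch below is a hedged illustration under assumed occupancy data, not the formulation or the variant models developed in the article.

```python
# Much-simplified analytical capacity model (NOT the article's formulation):
# maximise trains per path type subject to each section's available time.
# occupancy[p][s] = minutes path p occupies section s per train; a path may
# visit a section more than once, so an entry can cover several traversals.
from scipy.optimize import linprog

sections = ["A", "B", "C"]
available = [1440, 1440, 1440]                 # minutes per day per section

occupancy = [
    [10, 12, 8],                               # path type 0
    [0, 2 * 12, 15],                           # path type 1 (visits section B twice)
]

# linprog minimises, so negate the objective to maximise total trains x0 + x1
c = [-1, -1]
A_ub = [[occupancy[p][s] for p in range(2)] for s in range(len(sections))]
b_ub = available

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("theoretical capacity (trains/day per path type):", res.x)
```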
Abstract:
There is often a gap between teaching beliefs and actual practice, between ‘what is valued and what is taught’ (Jones, 2009, p. 175). This may be particularly true when it comes to teaching creatively and teaching for creativity in higher education. This lack of congruence is not necessarily due to a lack of awareness about what is possible, or the desire to enact change in this domain. It may, however, be due to a mix of less easily manipulated contextual factors (environmental, socio-cultural, political and economic), and a lack of discourse (Jackson, 2006) around the problem...
Abstract:
This paper presents a technique for the automated removal of noise from process execution logs. Noise is the result of data quality issues such as logging errors and manifests itself in the form of infrequent process behavior. The proposed technique generates an abstract representation of an event log as an automaton capturing the directly-follows relations between event labels. This automaton is then pruned by removing arcs with low relative frequency and used to remove from the log those events not fitting the automaton, which are identified as outliers. The technique has been extensively evaluated on top of various automated process discovery algorithms, using both artificial logs with different levels of noise and a variety of real-life logs. The results show that the technique significantly improves the quality of the discovered process model along the dimensions of fitness, appropriateness and simplicity, without negative effects on generalization. Further, the technique scales well to large and complex logs.
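A hedged, simplified version of the idea (not the authors' exact algorithm or thresholds): count directly-follows relations, drop arcs whose frequency relative to their source activity is below a threshold, then remove events whose transition from the previously kept event is no longer in the pruned automaton.

```python
# Simplified sketch of frequency-based log filtering (not the exact technique
# evaluated in the paper): build a directly-follows automaton, prune rare arcs,
# and drop events that no longer fit the pruned automaton.
from collections import Counter

def filter_log(log, threshold=0.05):
    # 1. Count directly-follows relations, with artificial start/end markers
    df = Counter()
    for trace in log:
        path = ["<start>"] + list(trace) + ["<end>"]
        for a, b in zip(path, path[1:]):
            df[(a, b)] += 1

    # 2. Prune arcs whose frequency is low relative to their source's outgoing total
    out_totals = Counter()
    for (a, _), n in df.items():
        out_totals[a] += n
    kept = {arc for arc, n in df.items() if n / out_totals[arc[0]] >= threshold}

    # 3. Drop events whose incoming transition is not in the pruned automaton
    filtered = []
    for trace in log:
        new_trace, prev = [], "<start>"
        for event in trace:
            if (prev, event) in kept:
                new_trace.append(event)
                prev = event
        filtered.append(new_trace)
    return filtered

log = [list("abcd")] * 20 + [list("abcxd")]      # 'x' is infrequent noise
print(filter_log(log)[-1])                       # -> ['a', 'b', 'c', 'd']
```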
Abstract:
This paper reviews a variety of advanced signal processing algorithms that have been developed at the University of Southampton as part of the Prometheus (PROgraMme for European Traffic flow with Highest Efficiency and Unprecedented Safety) research programme to achieve an intelligent driver warning system (IDWS). The IDWS includes visual detection of both generic obstacles and other vehicles, together with their tracking and identification; estimates of time to collision; and behavioural modelling of drivers for a variety of scenarios. These application areas are used to show the applicability of neurofuzzy techniques to the wide range of problems required to support an IDWS, and for future fully autonomous vehicles.
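One of the simpler quantities mentioned above, time to collision, is commonly estimated from a tracked target's range and range rate. The sketch below uses the textbook formulation TTC = range / closing speed as an illustration only; it is not the Southampton neurofuzzy estimators described in the review.

```python
# Minimal time-to-collision (TTC) estimate from tracked range and range rate.
# Textbook formulation for illustration, not the neurofuzzy estimators
# developed in the programme described above.
def time_to_collision(range_m: float, range_rate_mps: float) -> float:
    """TTC = range / closing speed; infinite if the gap is not closing."""
    closing_speed = -range_rate_mps          # range rate is negative when closing
    if closing_speed <= 0:
        return float("inf")
    return range_m / closing_speed

print(time_to_collision(range_m=40.0, range_rate_mps=-8.0))   # 5.0 seconds
```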