56 results for distributive lattices

in Queensland University of Technology - ePrints Archive


Relevance:

20.00%

Publisher:

Abstract:

We introduce a broad lattice manipulation technique for expressive cryptography, and use it to realize functional encryption for access structures from post-quantum hardness assumptions. Specifically, we build an efficient key-policy attribute-based encryption scheme, and prove its security in the selective sense from learning-with-errors intractability in the standard model.
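To make the notion of a key-policy access structure concrete, here is a minimal, non-cryptographic Python sketch of how a policy formula over attributes might be evaluated at decryption time; the Policy type and attribute names are illustrative assumptions, not taken from the paper.

```python
# Illustrative only: in key-policy ABE, each secret key embeds an access
# structure, and decryption succeeds only when the ciphertext's attribute
# set satisfies it. No cryptography is performed here.

from dataclasses import dataclass
from typing import FrozenSet, Union

Policy = Union["Attr", "And", "Or"]

@dataclass(frozen=True)
class Attr:
    name: str

@dataclass(frozen=True)
class And:
    left: Policy
    right: Policy

@dataclass(frozen=True)
class Or:
    left: Policy
    right: Policy

def satisfies(policy: Policy, attrs: FrozenSet[str]) -> bool:
    """Return True iff the attribute set satisfies the key policy."""
    if isinstance(policy, Attr):
        return policy.name in attrs
    if isinstance(policy, And):
        return satisfies(policy.left, attrs) and satisfies(policy.right, attrs)
    return satisfies(policy.left, attrs) or satisfies(policy.right, attrs)

# Hypothetical key policy: ("doctor" AND "cardiology") OR "admin"
policy = Or(And(Attr("doctor"), Attr("cardiology")), Attr("admin"))
print(satisfies(policy, frozenset({"doctor", "cardiology"})))  # True
print(satisfies(policy, frozenset({"doctor"})))                # False
```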

Relevance:

20.00%

Publisher:

Abstract:

This thesis challenged the assumption that the Australian housing industry will voluntarily and independently transform its practices to build inclusive communities. Through its focus on perceptions of responsibility and the development of a theoretical framework for voluntary initiatives, the thesis offers key stakeholders and advocates a way to work towards the provision of inclusive housing as an instrument of distributive justice.

Relevance:

20.00%

Publisher:

Abstract:

Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct “Fuzzy” Identity Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within subexponential factors for adversaries running in subexponential time. We give CPA and CCA secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles towards realizing lattice-based attribute-based encryption (ABE).
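Fuzzy IBE schemes of this kind typically enforce a k-out-of-n attribute overlap with a threshold secret sharing of a master secret. The sketch below shows plain Shamir sharing over a prime field; the prime and parameters are toy values chosen purely for illustration, not the scheme's actual instantiation.

```python
# Shamir k-out-of-n secret sharing: any k shares reconstruct the secret,
# fewer reveal nothing. Fuzzy IBE decryption succeeds when at least k of
# the n identity attributes overlap, mirroring this threshold behaviour.

import random

P = 2**61 - 1  # a Mersenne prime; a toy field size for illustration

def share(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # 123456789, from any 3 of the 5 shares
```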

Relevance:

20.00%

Publisher:

Abstract:

In this survey, we review a number of the many “expressive” encryption systems that have recently appeared from lattices, and explore the innovative techniques that underpin them.

Relevance:

20.00%

Publisher:

Abstract:

Teaching is a core function of higher education and must be effective if it is to provide students with learning experiences that are stimulating, challenging and rewarding. Obtaining feedback on teaching is indispensable to enhancing the quality of learning design, facilitating personal and/or professional development and maximising student learning outcomes. Peer review of teaching has the potential to improve the quality of teaching at tertiary level, by encouraging critical reflection on teaching, innovation in teaching practice and scholarship of teaching at all academic levels. However, embedding peer review within the culture of teaching and learning is a significant challenge that requires sustained commitment from senior leadership as well as those in leadership roles within local contexts.

Relevance:

10.00%

Publisher:

Abstract:

Despite the best intentions of service providers and organisations, service delivery is rarely error-free. While numerous studies have investigated specific cognitive, emotional or behavioural responses to service failure and recovery, these studies do not fully capture the complexity of the service encounter. Consequently, this research develops a more holistic understanding of how specific service recovery strategies affect the responses of customers by combining two existing models—Smith & Bolton’s (2002) model of emotional responses to service performance and Fullerton and Punj’s (1993) structural model of aberrant consumer behaviour—into a conceptual framework. Specific service recovery strategies are proposed to influence consumer cognition, emotion and behaviour. This research was conducted using a 2x2 between-subjects quasi-experimental design administered via written survey. The experimental design manipulated two levels of two specific service recovery strategies: compensation and apology. The effects of the four recovery strategies were investigated by collecting data from 18-25 year olds, analysed using multivariate analysis of covariance and multiple regression analysis. The results suggest that different service recovery strategies are associated with varying scores of satisfaction, perceived distributive justice, positive emotions, negative emotions and negative functional behaviour, but not dysfunctional behaviour. These findings have significant implications for the theory and practice of managing service recovery.
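As a hedged illustration of the kind of analysis described, the sketch below fits a multivariate model with a covariate (i.e., a MANCOVA) to synthetic data from a 2x2 compensation-by-apology design using statsmodels; all variable names and data are invented for illustration and are not the study's dataset.

```python
# A 2x2 between-subjects design (compensation x apology) analysed with
# MANOVA over several outcome measures; adding the covariate makes it a
# MANCOVA. Data are synthetic and loosely shaped by the manipulations.

import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "compensation": rng.integers(0, 2, n),  # 0 = none, 1 = offered
    "apology": rng.integers(0, 2, n),       # 0 = none, 1 = offered
    "age": rng.uniform(18, 25, n),          # covariate (hypothetical)
})
df["satisfaction"] = 3 + 0.8 * df.compensation + 0.5 * df.apology + rng.normal(0, 1, n)
df["neg_emotion"] = 4 - 0.6 * df.compensation - 0.4 * df.apology + rng.normal(0, 1, n)

mv = MANOVA.from_formula(
    "satisfaction + neg_emotion ~ compensation * apology + age", data=df
)
print(mv.mv_test())  # Wilks' lambda and related statistics per term
```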

Relevance:

10.00%

Publisher:

Abstract:

Resolving insurance disputes often focuses only on quantum. Where insurers adopt integrative solutions they can enjoy cost savings and higher customer satisfaction, and an integratively managed process can expand the negotiation options. The potential for plaintiffs’ emotions to resolve matters on an emotional basis, rather than an economic one, is explored. Drawing on research, the author demonstrates how mediations are more likely to obtain integrative outcomes than unmediated conferences. Using a combination of governmental reports, published studies and academic publications, the paper demonstrates how mediation is more likely to foster an environment where the parties communicate and cooperate. Research is employed to demonstrate where mediators can reduce hostilities in circumstances where negotiating parties alone would likely fail. Overall, the paper constructs an argument to support the proposition that mediation can offer insurers an effective mechanism to reduce costs and increase customer satisfaction.

INTRODUCTION

Mediation can offer insurers an effective mechanism to reduce costs and increase customer satisfaction. This paper will first demonstrate the differences between distributive and integrative outcomes. It is argued that insurers’ interests can be far better served through obtaining an integrative solution. The paper explains how the mediator can assist both parties to obtain an integrative outcome, and simultaneously explores the extreme difficulties conference participants face in obtaining an integrative outcome without a mediator in an adversarial climate. The mediator’s ability to facilitate integrative information exchange, defuse hostilities and reality-check expectations is discussed, and is compared to the inability of conference participants to achieve similar results. The paper concludes that the potential financial benefit offered by integrative solutions, combined with the ability of mediation to deliver such outcomes where unmediated conferences cannot, leads to the recommendation that insurers opt for mediation to best serve their commercial interests.

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this paper is to explore the changing nature of employee voice through trade union representation in the retail industry. The retail industry is a major employer in the UK and is one of the few private sector service industries with union representation (Griffin et al 2003). The relevant union, the Union of Shop, Distributive and Allied Workers (USDAW), is one of the biggest unions in the country. However, the characteristics of the industry provide unique challenges for employee voice and representation, including: high labour turnover; high use of casual, female and student labour; and variable levels of union recognition (Reynolds et al 2005). Irrespective of these challenges, any extension of representation and organisation by unions in the retail sector is inherently valuable, socially and politically, given that retail workers are often categorised as vulnerable, due to the fact that they are among the lowest paid in the economy, sourced from disadvantaged labour markets and increasingly subject to atypical employment arrangements (Broadbridge 2002; Henley 2006; Lynch 2005; Roan 2003).

Relevance:

10.00%

Publisher:

Abstract:

This paper is a comparative exploratory study of the changing nature of employee voice through trade union representation in the retail industry in the UK and Australia. In both countries, the retail industry is a major employer and is one of the few private sector service industries with significant union membership (Griffin et al 2003). The relevant unions, the Union of Shop, Distributive and Allied Workers (USDAW) and the Shop, Distributive and Allied Employees’ Association (SDA), are the fourth largest and largest unions in the UK and Australia respectively. However, despite this seeming numerical strength in membership, the characteristics of the industry provide unique challenges for employee voice and representation. The significance of the study is that any extension of representation and organisation by unions in the retail sector is valuable socially and politically, given that retail workers are often categorised as vulnerable, due to their low pay, the predominance of disadvantaged labour market groups such as women and young people, workers’ atypical employment arrangements and, in the case of the UK, variable levels of union recognition which inhibit representation (Broadbridge 2002; Henley 2006; Lynch 2005; Roan & Diamond 2003; Reynolds et al 2005). In addition, specifically comparative projects have value in that they allow some variables relating to the ‘industry’ to be held constant, thus reducing the range of potential explanations of differences in union strategy. They also have value in that the research partners may be more likely to notice and problematise taken-for-granted aspects of practices in another country, thus bringing to the fore key features and potentially leading to theoretical innovation. Finally, such projects may assist in the transnational diffusion of union strategy.

Relevance:

10.00%

Publisher:

Abstract:

Spoken term detection (STD) typically involves performing word or sub-word level speech recognition and indexing the result. This work challenges the assumption that improved speech recognition accuracy implies better indexing for STD. Using an index derived from phone lattices, this paper examines the effect of language model selection on the relationship between phone recognition accuracy and STD accuracy. Results suggest that language models usually improve phone recognition accuracy, but their inclusion does not always translate to improved STD accuracy. The findings suggest that using phone recognition accuracy to measure the quality of an STD index can be problematic, and highlight the need for an alternative that is more closely aligned with the goals of the specific detection task.
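As a rough illustration of the indexing idea, the sketch below flattens per-utterance phone-lattice hypotheses into an inverted index of phone trigrams and looks up a query term's pronunciation; the phone strings, scores, and utterance names are invented for illustration, not from the paper's system.

```python
# Hypothetical sketch: phone-lattice hypotheses become an inverted index
# of phone n-grams; a term is "detected" where its phone sequence matches.

from collections import defaultdict

def build_index(lattice_hyps, n=3):
    """Map each phone n-gram to (utterance, position, score) entries."""
    index = defaultdict(list)
    for utt, (phones, score) in lattice_hyps.items():
        for i in range(len(phones) - n + 1):
            index[tuple(phones[i:i + n])].append((utt, i, score))
    return index

# One (phone sequence, lattice posterior) hypothesis per utterance.
lattice_hyps = {
    "utt1": (["s", "p", "ok", "ax", "n", "t", "er", "m"], 0.82),
    "utt2": (["t", "er", "m", "ih", "n", "ax", "l"], 0.64),
}
index = build_index(lattice_hyps, n=3)

query = ("t", "er", "m")  # a phone trigram from the search term
for utt, pos, score in index.get(query, []):
    print(f"hit in {utt} at phone {pos}, lattice score {score}")
```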

Relevance:

10.00%

Publisher:

Abstract:

The existence of any film genre depends on the effective operation of distribution networks. Contingencies of distribution play an important role in determining the content of individual texts and the characteristics of film genres; they enable new genres to emerge at the same time as they impose limits on generic change. This article sets out an alternative way of doing genre studies, based on an analysis of distributive circuits rather than film texts or generic categories. Our objective is to provide a conceptual framework that can account for the multiple ways in which distribution networks leave their traces on film texts and audience expectations, with specific reference to international horror networks, and to offer some preliminary suggestions as to how distribution analysis can be integrated into existing genre studies methodologies.

Relevance:

10.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project these onto the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78].

For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented.

The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
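The lattice vector quantization stage turns on finding the nearest lattice point to each coefficient vector. As one concrete, standard instance (not the thesis's optimised design, which also adapts the scaling factor and truncation level), the sketch below implements the classic Conway-Sloane nearest-point rule for the D_n lattice.

```python
# Nearest-point quantization for the D_n lattice: the integer vectors
# whose coordinates sum to an even number. Scaling and truncation, which
# the thesis optimises per subimage, are omitted for brevity.

import numpy as np

def nearest_Dn(x: np.ndarray) -> np.ndarray:
    """Return the nearest point of the D_n lattice to x."""
    f = np.rint(x)                 # round every coordinate to an integer
    if int(f.sum()) % 2 == 0:
        return f.astype(int)       # even coordinate sum: already in D_n
    # Odd sum: re-round the coordinate with the largest rounding error
    # in the other direction, which restores even parity.
    err = x - f
    i = int(np.argmax(np.abs(err)))
    f[i] += 1 if err[i] > 0 else -1
    return f.astype(int)

x = np.array([0.6, 1.3, -0.4, 1.2])
print(nearest_Dn(x))  # [0 1 0 1]; coordinates now sum to an even number
```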

Relevance:

10.00%

Publisher:

Abstract:

The stylized facts that motivate this thesis include the diversity in growth patterns observed across countries during the process of economic development, and the divergence over time in income distributions both within and across countries. This thesis constructs a dynamic general equilibrium model in which technology adoption is costly and agents are heterogeneous in their initial holdings of resources. Given a household's resource level, this study examines how adoption costs influence the evolution of household income over time and the timing of the transition to more productive technologies. The analytical results of the model characterize three growth outcomes associated with the technology adoption process, depending on productivity differences between the technologies. These are appropriately labeled 'poverty trap', 'dual economy' and 'balanced growth'. The model is then capable of explaining the observed diversity in growth patterns across countries, as well as the divergence of incomes over time. Numerical simulations of the model furthermore illustrate features of this transition. They suggest that differences in adoption costs account for the timing of households' decisions to switch technology, which leads to a disparity in incomes across households during the technology adoption process. Since this determines the timing of complete adoption of the technology within a country, the implications for cross-country income differences are obvious. Moreover, the timing of technology adoption appears to shape households' growth patterns, which differ across income groups. The findings also show that, in the presence of costs associated with the adoption of more productive technologies, inequalities of income and wealth may increase over time, tending to delay the convergence in income levels. Initial levels of inequality in resources also have an impact on the date of complete adoption of more productive technologies. The issue of increasing income inequality in the process of technology adoption opens up another direction for research: specifically, increasing inequality implies that distributive conflicts may emerge during the transitional process, with political-economy consequences. The model is therefore extended to include such issues. Without any political considerations, taxes would lead to a reduction in inequality and convergence of incomes across agents. However, this process is delayed if politico-economic influences are taken into account. Moreover, the political outcome is suboptimal. This is essentially because there is resistance to complete adoption of the advanced technology.
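The mechanism described (staggered adoption driven by a fixed cost, with early adopters compounding a higher return) can be illustrated with a toy simulation; all functional forms and parameter values below are invented for illustration and are not the thesis's calibration.

```python
# Heterogeneous households accumulate wealth under an old technology and
# pay a fixed cost to adopt a more productive one once they can afford
# it, producing staggered adoption dates and widening income gaps.

import numpy as np

rng = np.random.default_rng(1)
n_households, T = 1000, 60
A_old, A_new = 1.00, 1.08      # productivity of each technology
cost, save = 5.0, 0.2          # adoption cost and saving rate

wealth = rng.lognormal(mean=0.0, sigma=0.8, size=n_households)
adopted = np.zeros(n_households, dtype=bool)
adoption_date = np.full(n_households, -1)

for t in range(T):
    can_pay = ~adopted & (wealth >= cost)
    wealth[can_pay] -= cost              # sunk adoption cost
    adoption_date[can_pay] = t
    adopted |= can_pay
    income = np.where(adopted, A_new, A_old) * wealth
    wealth += save * income              # accumulate out of income

print(f"adopters: {adopted.mean():.0%}")
print(f"median adoption date: {np.median(adoption_date[adopted]):.0f}")
# Dispersion widens as early adopters compound the higher return.
print(f"90/10 wealth ratio: {np.quantile(wealth, .9)/np.quantile(wealth, .1):.1f}")
```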

Relevance:

10.00%

Publisher:

Abstract:

The contributions of this thesis fall into three areas of certificateless cryptography. The first area is encryption, where we propose new constructions for both identity-based and certificateless cryptography. We construct an n-out-of-n group encryption scheme for identity-based cryptography that does not require any special means to generate the keys of the participating trusted authorities. We also introduce a new security definition for chosen ciphertext secure multi-key encryption. We prove that our construction is secure as long as at least one authority is uncompromised, and show that the existing constructions for chosen ciphertext security from identity-based encryption also hold in the group encryption case. We then consider certificateless encryption as the special case of 2-out-of-2 group encryption and give constructions for highly efficient certificateless schemes in the standard model. Among these is the first construction of a lattice-based certificateless encryption scheme. Our next contribution is a highly efficient certificateless key encapsulation mechanism (KEM), which we prove secure in the standard model. We introduce a new way of proving the security of certificateless schemes that are based on identity-based schemes: we leave the identity-based part of the proof intact and extend it only to cover the part introduced by the certificateless scheme. We show that our construction is more efficient than any instantiation of generic constructions for certificateless key encapsulation in the standard model. The third area where the thesis contributes to the advancement of certificateless cryptography is key agreement. Swanson showed that many certificateless key agreement schemes are insecure if considered in a reasonable security model. We propose the first provably secure certificateless key agreement schemes in the strongest model for certificateless key agreement. We extend Swanson's definition for certificateless key agreement and give more power to the adversary. Our new schemes are secure as long as each party has at least one uncompromised secret. Our first construction is in the random oracle model and gives the adversary slightly more capabilities than our second construction in the standard model. Interestingly, our standard model construction is as efficient as the random oracle model construction.
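The n-out-of-n trust model (the master secret stays hidden as long as any one authority is uncompromised) can be illustrated with additive XOR secret sharing; this is a toy sketch of the trust assumption only, not the thesis's group encryption construction.

```python
# n-out-of-n additive sharing: every authority's share is required to
# reconstruct, so any single honest authority keeps the secret hidden.

import secrets

def xor_all(parts: list[bytes]) -> bytes:
    """XOR a list of equal-length byte strings together."""
    acc = bytes(len(parts[0])) if parts else b""
    for p in parts:
        acc = bytes(a ^ b for a, b in zip(acc, p))
    return acc

def split(master: bytes, n: int) -> list[bytes]:
    """n-out-of-n split (n >= 2): all shares needed to reconstruct."""
    shares = [secrets.token_bytes(len(master)) for _ in range(n - 1)]
    last = bytes(b ^ acc for b, acc in zip(master, xor_all(shares)))
    return shares + [last]

master = b"master-secret-key"
shares = split(master, n=3)
assert xor_all(shares) == master        # all three shares recover it
assert xor_all(shares[:2]) != master    # any n-1 shares reveal nothing
print("reconstructed:", xor_all(shares))
```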