868 results for Debugging in computer science.
Abstract:
For the past several decades, cryptographers have consistently provided us with stronger and more capable primitives and protocols that have found many applications in security systems in everyday life. One of the central tenets of cryptographic design is that, whereas a system's architecture ought to be public and open to scrutiny, the keys on which it depends (long, utterly random, unique strings of bits) will be perfectly preserved by their owner, yet remain entirely inaccessible to foes.
Abstract:
Multiple-time signatures are digital signature schemes in which the signer can sign a predetermined number of messages. They are interesting cryptographic primitives because they make it possible to solve many important cryptographic problems while offering a substantial efficiency advantage over ordinary digital signature schemes such as RSA. Multiple-time signature schemes have found numerous applications in ordinary, on-line/off-line, and forward-secure signatures, as well as in multicast/stream authentication. We propose a multiple-time signature scheme with very efficient signing and verification. Our construction is based on a combination of one-way functions and cover-free families, and it is secure against adaptive chosen-message attacks.
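As an aside for readers new to this family of schemes, the sketch below illustrates the one-way-function-plus-subset idea in the style of HORS, a related few-time signature; the parameters, the use of SHA-256 as the one-way function, and the index-derivation rule are illustrative assumptions, not the paper's cover-free-family construction.

```python
import hashlib
import os

T, K = 256, 16   # T secret values per key pair; each signature reveals K of them

def f(x: bytes) -> bytes:
    """The one-way function, instantiated here with SHA-256."""
    return hashlib.sha256(x).digest()

def keygen():
    sk = [os.urandom(32) for _ in range(T)]
    pk = [f(s) for s in sk]
    return sk, pk

def subset(msg: bytes):
    """Map a message to K indices in [0, T); across the few signed messages
    these subsets must not cover each other (the cover-free-family role)."""
    d = hashlib.sha256(msg).digest()
    return [d[i] % T for i in range(K)]

def sign(sk, msg: bytes):
    return [sk[i] for i in subset(msg)]

def verify(pk, msg: bytes, sig):
    return all(f(s) == pk[i] for s, i in zip(sig, subset(msg)))

sk, pk = keygen()
sig = sign(sk, b"attack at dawn")
assert verify(pk, b"attack at dawn", sig)
assert not verify(pk, b"attack at dusk", sig)
```

Each signature reveals K of the T secrets, which is why such schemes can only sign a predetermined number of messages before forgeries become feasible.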
Abstract:
A known limitation of the Probability Ranking Principle (PRP) is that it does not cater for dependence between documents. Recently, the Quantum Probability Ranking Principle (QPRP) has been proposed, which implicitly captures dependencies between documents through “quantum interference”. This paper explores whether this new ranking principle leads to improved performance for subtopic retrieval, where novelty and diversity are required. In a thorough empirical investigation, models based on the PRP, as well as other recently proposed ranking strategies for subtopic retrieval (i.e. Maximal Marginal Relevance (MMR) and Portfolio Theory (PT)), are compared against the QPRP. On the given task, the QPRP is shown to outperform these other ranking strategies. Moreover, unlike MMR and PT, the QPRP requires no parameter estimation or tuning, making it both simple and effective. This research demonstrates that the application of quantum theory to problems within information retrieval can lead to significant improvements.
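For context, the interference term at the heart of the QPRP is usually presented as follows (a sketch of the standard formulation; the estimation of the phase θ_ij from inter-document similarity follows the QPRP literature and is not spelled out in this abstract):

```latex
% Quantum law of total probability: ranking documents d_i and d_j together
% adds an interference term to the classical sum of relevance probabilities.
P(d_i \vee d_j) = P(d_i) + P(d_j)
    + \underbrace{2\sqrt{P(d_i)}\,\sqrt{P(d_j)}\,\cos\theta_{ij}}_{\text{quantum interference}}
```

At each rank, the QPRP selects the document maximising its relevance probability plus the summed interference with the documents already ranked; because cos θ_ij is derived from existing document representations, no free parameter is introduced, which is the advantage noted above.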
Abstract:
Recently, mean-variance analysis has been proposed as a novel paradigm for modelling document ranking in Information Retrieval. The main merit of this approach is that it diversifies the ranking of retrieved documents. In its original formulation, the strategy considers both the mean of the relevance estimates of retrieved documents and their variance. However, when this strategy has been empirically instantiated, the concepts of mean and variance were discarded in favour of a point-wise estimation of relevance (to replace the mean) and of a parameter to be tuned or, alternatively, a quantity dependent upon the document length (to replace the variance). In this paper we revisit this ranking strategy by going back to its roots: mean and variance. For each retrieved document, we infer a relevance distribution from a series of point-wise relevance estimations provided by a number of different systems. This is used to compute the mean and the variance of the document relevance estimates. On the TREC ClueWeb collection, we show that this approach improves retrieval performance. This development could lead to new strategies for fusing the relevance estimates provided by different systems.
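A minimal sketch of the mean-variance idea, assuming normalised and comparable scores from several systems (the numbers, the risk parameter b, and the exact objective are illustrative; the paper's instantiation may differ):

```python
import numpy as np

# Point-wise relevance estimates for four documents from three systems
# (rows = systems, columns = documents); values are illustrative.
scores = np.array([
    [0.90, 0.40, 0.70, 0.20],
    [0.80, 0.60, 0.65, 0.30],
    [0.85, 0.50, 0.40, 0.25],
])

mean = scores.mean(axis=0)          # inferred mean relevance per document
var = scores.var(axis=0, ddof=1)    # inferred variance per document

b = 2.0                             # risk-aversion weight (illustrative)
objective = mean - b * var          # penalise documents the systems disagree on

ranking = np.argsort(-objective)    # best first
print("ranking:", ranking, "objective:", objective.round(3))
```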
Abstract:
The aim of this paper is to investigate the role of emotion features in diversifying document rankings to improve the effectiveness of Information Retrieval (IR) systems. For this purpose, two approaches are proposed to consider emotion features for diversification, and they are empirically tested on the TREC 678 Interactive Track collection. The results show that emotion features are capable of enhancing retrieval effectiveness.
Abstract:
Complex numbers are a fundamental aspect of the mathematical formalism of quantum physics. Quantum-like models developed outside physics have often overlooked the role of complex numbers; in particular, previous models in Information Retrieval (IR) ignored them. We argue that to advance the use of quantum models in IR, one has to lift the constraint of real-valued representations of the information space and package more information within the representation by means of complex numbers. As a first attempt, we propose a complex-valued representation for IR, which explicitly uses complex-valued Hilbert spaces, so that terms, documents and queries are represented as complex-valued vectors. The proposal consists of integrating distributional semantics evidence within the real component of a term vector, whereas ontological information is encoded in the imaginary component. Our proposal has the merit of lifting the role of complex numbers from a computational byproduct of the model to the very mathematical texture that unifies different levels of semantic information. An empirical instantiation of our proposal is tested on the TREC Medical Records task of retrieving cohorts for clinical studies.
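A toy sketch of such a representation, with invented distributional and ontological weights (this illustrates the idea of complex-valued Hilbert spaces, not the paper's empirical instantiation on the TREC Medical Records task):

```python
import numpy as np

# Real part: distributional (co-occurrence) evidence for a term.
# Imaginary part: ontological evidence (e.g., concept-link strength).
# All weights below are invented for illustration.
def term_vector(distributional, ontological):
    return np.asarray(distributional, dtype=float) + 1j * np.asarray(ontological, dtype=float)

t1 = term_vector([0.8, 0.1, 0.0, 0.3], [0.5, 0.0, 0.2, 0.0])
t2 = term_vector([0.7, 0.2, 0.1, 0.2], [0.4, 0.1, 0.0, 0.0])

# Documents and queries live in the same complex-valued Hilbert space,
# here as normalised superpositions of their term vectors.
doc = t1 + t2
doc /= np.linalg.norm(doc)
query = t1 / np.linalg.norm(t1)

# Hermitian inner product; its squared modulus serves as the matching score,
# so both levels of semantic evidence jointly influence ranking.
score = abs(np.vdot(query, doc)) ** 2
print("score =", round(score, 4))
```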
Abstract:
In this paper we present a truncated differential analysis of reduced-round LBlock by computing the differential distribution of every nibble of the state. The LLR statistical test is used as a tool to mount the distinguishing and key-recovery attacks. To build the distinguisher, all possible differences are traced through the cipher and the truncated differential probability distribution is determined for every output nibble. We concatenate additional rounds to the beginning and end of the truncated differential to mount the key-recovery attack. By exploiting properties of the key schedule, we obtain a large overlap of key bits used in the beginning and final rounds. This allows us to significantly increase the differential probabilities and hence reduce the attack complexity. We validate the analysis by implementing the attack on LBlock reduced to 12 rounds. Finally, we apply single-key and related-key attacks on 18- and 21-round LBlock, respectively.
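A minimal sketch of an LLR-based distinguisher over a single output nibble, using an invented truncated-differential distribution (the probabilities below are placeholders, not LBlock's actual distribution):

```python
import math
from collections import Counter

def llr(samples, p_cipher, p_random):
    """Log-likelihood ratio of the 'cipher' hypothesis against the 'random'
    hypothesis for observed output-nibble differences; positive favours cipher."""
    counts = Counter(samples)
    return sum(n * math.log(p_cipher[x] / p_random[x]) for x, n in counts.items())

# Placeholder truncated-differential distribution over one 4-bit nibble:
# difference 0 is assumed more likely than the rest (NOT LBlock's real numbers).
p_cipher = {x: 0.10 if x == 0 else 0.90 / 15 for x in range(16)}
p_random = {x: 1 / 16 for x in range(16)}

observed = [0, 0, 3, 7, 0, 12, 0, 5, 0, 9]    # hypothetical observations
print("LLR =", round(llr(observed, p_cipher, p_random), 3))
```

In practice the statistic is compared against a threshold chosen for the target error probabilities, and the number of required samples is governed by how far the truncated differential distribution deviates from uniform.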
Abstract:
We show that the LASH-x hash function is vulnerable to attacks that trade time for memory, including collision attacks as fast as 2^{4x/11} and preimage attacks as fast as 2^{4x/7}. Moreover, we briefly mention heuristic lattice-based collision attacks that use small memory but require very long messages, and that are expected to find collisions much faster than 2^{x/2}. All of these attacks exploit the designers' choice of an all-zero IV. We then consider whether LASH can be patched simply by changing the IV. In this case, we show that LASH is vulnerable to a 2^{7x/8} preimage attack. We also show that LASH is trivially not a PRF when any subset of input bytes is used as a secret key. None of our attacks depend upon the particular contents of the LASH matrix; we only assume that the distribution of elements is more or less uniform.
Abstract:
User evaluations using paper prototypes commonly lack social context. The group simulation technique described in this paper offers a solution to this problem. The study introduces an early-phase participatory design technique targeted at small groups. The proposed technique is used to evaluate an interface that enables group work on the creation of photo collections. Three groups of four users, 12 in total, took part in a simulation session where they tested a low-fidelity design concept that included their own personal photo content from an event their group had attended together. The users' own content was used to evoke natural experiences. Our results indicate that the technique helped users engage naturally with the prototype during the session. We suggest that the technique is also suitable for evaluating other early-phase concepts and for guiding design solutions, especially for concepts that involve users' personal content and enable content sharing.
Abstract:
Sharing photos through mobile devices has great potential for creating shared experiences of social events between co-located as well as remote participants. In order to design novel event-sharing tools, we need to develop an in-depth understanding of current practices surrounding these so-called ‘event photos’: photos about, and taken during, social events such as weddings, picnics, and music concerts, among others. We studied people's practices related to event photos through in-depth interviews, guided home visits and naturalistic observations. Our results reveal four major themes describing practices surrounding event photos: 1) representing events, 2) significant moments, 3) situated activities through photos, and 4) collectivism and roles of participants.
Abstract:
User-generated content plays a pivotal role in current social media. The main focus, however, has been on explicitly generated user content such as photos, videos and status updates on different social networking sites. In this paper, we explore the potential of implicitly generated user content based on users' online consumption behaviors. It is technically feasible to record users' consumption behaviors on mobile devices and share them with relevant people. Mobile devices with such capabilities could enrich social interactions around the consumed content, but they may also threaten users' privacy. To understand the potential of this design direction, we created and evaluated a low-fidelity prototype intended for photo sharing within private groups. Our prototype incorporates two design concepts, namely FingerPrint and MoodPhotos, which leverage users' consumption history and emotional responses. In this paper, we report user values and user acceptance of this prototype from three participatory design workshops.
Abstract:
Although recommender systems and reputation systems have quite different theoretical and technical bases, both types of systems have the purpose of providing advice for decision making in e-commerce and online service environments. This similarity in purpose makes it natural to integrate the two types of systems in order to produce better online advice, but their differences in theory and implementation make the integration challenging. In this paper, we propose mappings to subjective opinions from the values produced by recommender systems and from the scores produced by reputation systems, and we combine the resulting opinions within the framework of subjective logic.
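A minimal sketch, assuming the standard evidence-to-opinion mapping and Jøsang's cumulative fusion operator (the paper's specific mappings from recommender values and reputation scores may differ):

```python
from dataclasses import dataclass

W = 2.0  # non-informative prior weight in subjective logic

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty (b + d + u = 1)
    a: float  # base rate

    def expectation(self):
        return self.b + self.a * self.u

def from_evidence(r, s, a=0.5):
    """Map r positive / s negative observations (e.g., ratings aggregated by a
    recommender, or reputation feedback) to a subjective opinion."""
    total = r + s + W
    return Opinion(r / total, s / total, W / total, a)

def cumulative_fusion(o1, o2):
    """Jøsang's cumulative fusion of two independent opinions
    (assumes o1 and o2 are not both dogmatic, i.e. u1 + u2 > 0)."""
    k = o1.u + o2.u - o1.u * o2.u
    b = (o1.b * o2.u + o2.b * o1.u) / k
    d = (o1.d * o2.u + o2.d * o1.u) / k
    u = (o1.u * o2.u) / k
    return Opinion(b, d, u, o1.a)

rec = from_evidence(8, 2)     # opinion derived from recommender output
rep = from_evidence(45, 5)    # opinion derived from reputation scores
fused = cumulative_fusion(rec, rep)
print(fused, "E =", round(fused.expectation(), 3))
```

The fused opinion's uncertainty shrinks as evidence from either source accumulates, which is what lets the two kinds of advice reinforce each other.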
Abstract:
Rakaposhi is a synchronous stream cipher that uses three main components: a non-linear feedback shift register (NLFSR), a dynamic linear feedback shift register (DLFSR) and a non-linear filtering function (NLF). The NLFSR consists of 128 bits and is initialised by the secret key K. The DLFSR holds 192 bits and is initialised by an initial vector (IV). The NLF takes 8-bit inputs and returns a single output bit. This work identifies weaknesses and properties of the cipher. The main observation is that the initialisation procedure has the so-called sliding property, which can be used to launch distinguishing and key-recovery attacks. The distinguisher needs four observations of related (K, IV) pairs. The key-recovery algorithm discovers the secret key K after observing 2^9 pairs of (K, IV). Based on the proposed related-key attack, the number of related (K, IV) pairs is 2^{(128+192)/4}. Further, the cipher is studied when the registers enter short cycles. When the NLFSR is set to all ones, the cipher degenerates to a linear feedback shift register with a non-linear filter; consequently, the initial state (and hence the secret key and IV) can be recovered with complexity 2^{63.87}. If the DLFSR is set to all zeros, the NLF reduces to a filter function of low non-linearity. As a result, the cipher is insecure, allowing an adversary to distinguish it from a random cipher after 2^{17} observations of keystream bits. There is also a key-recovery algorithm that finds the secret key with complexity 2^{54}.
Abstract:
In this paper we investigate the differential properties of block ciphers in hash function modes of operation. First we show the impact of differential trails for block ciphers on collision attacks against various hash function constructions based on block ciphers. Further, we prove a lower bound for finding a pair that follows some truncated differential in the case of a random permutation. Then we present open-key differential distinguishers for several well-known round-reduced block ciphers.
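To see why a block-cipher differential translates into a hash collision, consider the Davies-Meyer mode h = E_k(x) XOR x, one common block-cipher-based construction: any pair following a differential delta -> delta through E_k collides. A toy 8-bit demonstration (the 'cipher' below is a deliberately weak placeholder, not one of the ciphers analysed in the paper):

```python
def toy_cipher(k, x):
    """A deliberately weak 8-bit 'block cipher': add the key, rotate left.
    Each round is a permutation of the 8-bit state, so E_k is invertible."""
    for r in range(4):
        x = (x + k + r) & 0xFF               # key/round-constant addition
        x = ((x << 1) | (x >> 7)) & 0xFF     # rotate left by one bit
    return x

def davies_meyer(k, x):
    """Davies-Meyer compression: feed-forward of the input into the output."""
    return toy_cipher(k, x) ^ x

# Search for a pair (x, x') with E_k(x) ^ E_k(x') == x ^ x', i.e. a pair
# following the differential delta -> delta; such a pair collides under DM.
for k in range(256):
    seen = {}
    hit = None
    for x in range(256):
        h = davies_meyer(k, x)
        if h in seen:
            hit = (seen[h], x)
            break
        seen[h] = x
    if hit:
        x0, x1 = hit
        delta = x0 ^ x1
        assert toy_cipher(k, x0) ^ toy_cipher(k, x1) == delta
        print(f"k={k:#04x}: DM({x0:#04x}) == DM({x1:#04x}), delta={delta:#04x}")
        break
```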
Abstract:
At Eurocrypt’04, Freedman, Nissim and Pinkas introduced the fuzzy private matching problem, defined as follows. Given two parties, each holding a set of vectors where each vector has T integer components, fuzzy private matching is to securely test whether each vector of one set matches any vector of the other set in at least t components, where t < T. In the conclusion of their paper, they asked whether it is possible to design a fuzzy private matching protocol without incurring a communication complexity containing the factor (T choose t). We answer their question in the affirmative by presenting a protocol based on homomorphic encryption combined with the novel notion of a share-hiding error-correcting secret sharing scheme, which we show how to implement with efficient decoding using interleaved Reed-Solomon codes. This scheme may be of independent interest. Our protocol is provably secure against passive adversaries and has better efficiency than previous protocols for certain parameter values.
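A plaintext sketch of the threshold idea behind the protocol (no encryption): the predicate "at least t of T components agree" is tied to a secret sharing in which agreeing positions yield correct shares. The field, the parameters, and the share-filtering shortcut are our simplifications; the actual protocol runs under homomorphic encryption and needs interleaved Reed-Solomon decoding because the positions of correct shares are not known in advance.

```python
import random

# Parameters: vectors have T components; a fuzzy match needs >= t agreements.
T, t = 8, 5
P = 2**61 - 1   # prime field for Shamir sharing (our choice, illustrative)

def share(secret, t, T):
    """Shamir-share `secret` into T shares with threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P)
            for i in range(1, T + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from exactly t shares."""
    acc = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        acc = (acc + yi * num * pow(den, -1, P)) % P
    return acc

alice = [3, 1, 4, 1, 5, 9, 2, 6]
bob   = [3, 1, 4, 1, 5, 8, 7, 6]   # agrees on 6 >= t components

secret = 123456789
shares = share(secret, t, T)
# Shares at agreeing positions are 'correct'; with >= t of them the secret
# (hence the match) is recoverable. Here we cheat by filtering on the
# plaintext vectors, which the real protocol of course cannot do.
good = [sh for sh, (a, b) in zip(shares, zip(alice, bob)) if a == b]
assert len(good) >= t and reconstruct(good[:t]) == secret
print("fuzzy match confirmed")
```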