868 results for Debugging in computer science.
Abstract:
Most previous work on unconditionally secure multiparty computation has focused on computing over a finite field (or ring). Multiparty computation over other algebraic structures has not received much attention, but it is an interesting topic whose study may provide new and improved tools for certain applications. At CRYPTO 2007, Desmedt et al. introduced a passive-secure multiparty multiplication protocol for black-box groups, reducing it to a certain graph coloring problem and leaving security against active attacks as an open problem. We present the first n-party protocol for unconditionally secure multiparty computation over a black-box group that is secure in the active attack model, tolerating any adversary structure Δ satisfying the Q^3 property (no union of three subsets from Δ covers the whole player set), which is known to be necessary for security in the active setting. Our protocol uses Maurer's Verifiable Secret Sharing (VSS) but preserves the essential simplicity of the graph-based approach of Desmedt et al., which avoids each shareholder having to rerun the full VSS protocol after each local computation. A corollary of our result is a new active-secure protocol for general multiparty computation of an arbitrary Boolean circuit.
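To make the Q^3 condition concrete: for a threshold adversary corrupting at most t of n players, Q^3 is exactly t < n/3. The brute-force check below is a toy sketch, not part of the paper's protocol; the player identifiers, the explicit-set representation of Δ, and the exhaustive enumeration are illustrative assumptions.

```python
from itertools import combinations_with_replacement

def is_q3(players, adversary_structure):
    """Return True iff no union of three subsets from the structure covers all players.

    Toy check: exponential in the size of the structure, intended only to
    illustrate the Q^3 property, not for use inside a protocol.
    """
    players = frozenset(players)
    delta = [frozenset(s) for s in adversary_structure]
    return all(a | b | c != players
               for a, b, c in combinations_with_replacement(delta, 3))

# 4 players, adversary may corrupt any single one: Q^3 holds.
print(is_q3({1, 2, 3, 4}, [{1}, {2}, {3}, {4}]))    # True
# 3 players, any single corruption: three singletons cover everyone, Q^3 fails.
print(is_q3({1, 2, 3}, [{1}, {2}, {3}]))            # False
```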
Abstract:
NTRUEncrypt is a fast and practical lattice-based public-key encryption scheme that has been standardized by IEEE, but until recently its security analysis relied only on heuristic arguments. Stehlé and Steinfeld recently showed that a slight variant (which we call pNE) can be proven secure under chosen-plaintext attack (IND-CPA), assuming the hardness of worst-case problems in ideal lattices. We present a variant of pNE, called NTRUCCA, that is IND-CCA2 secure in the standard model under the same worst-case ideal-lattice assumptions and incurs only a constant-factor overhead in ciphertext and key length over pNE. To our knowledge, this gives the first IND-CCA2 secure variant of NTRUEncrypt in the standard model based on standard cryptographic assumptions. As an intermediate step, we present a construction of an All-But-One (ABO) lossy trapdoor function from pNE, which may be of independent interest. Our scheme uses the lossy trapdoor function framework of Peikert and Waters, which we generalize to the case of (k − 1)-of-k-correlated input distributions.
Abstract:
In this article, we study the security of the IDEA block cipher when it is used in various single-length or double-length hashing modes. Even though this cipher is still considered secure, we show that it should be avoided as the internal primitive of block-cipher-based hashing. In particular, we can instantaneously generate free-start collisions for most modes, and even semi-free-start collisions, pseudo-preimages, or hash collisions, in practical complexity. This work gives a practical example of the gap between secret-key security and known- or chosen-key security for block ciphers. Moreover, we settle the 20-year-old open question concerning the security of the Abreast-DM and Tandem-DM double-length compression functions, originally designed to be instantiated with IDEA. Our attacks have been verified experimentally and work even for strengthened versions of IDEA with any number of rounds.
Abstract:
This paper presents ongoing work toward constructing an efficient, completely non-malleable public-key encryption scheme based on lattices in the standard (common reference string) model. An encryption scheme is completely non-malleable if attackers gain only negligible advantage even when they are allowed to transform the public key under which the related message is encrypted. Ventre and Visconti proposed two inefficient constructions of completely non-malleable schemes: one in the common reference string model using non-interactive zero-knowledge proofs, and another using interactive encryption schemes. Recently, two efficient public-key encryption schemes have been proposed, both based on pairing-based identity-based encryption.
Abstract:
In this paper, we propose a steganalysis method that can identify the locations of stego-bearing pixels in a binary image. To do so, the method first computes the residual between a given stego image and its estimated cover image, and then the local entropy difference between these two images. Finally, it computes the mean residual and the mean local entropy difference across multiple stego images; from these two means, the locations of the stego-bearing pixels can be identified. The empirical results show that the method identifies the stego-bearing locations with near-perfect accuracy when sufficiently many stego images are supplied. Hence, it can be used to reveal which pixels in a binary image carry the secret message.
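A minimal sketch of the residual-plus-local-entropy pipeline described in this abstract, assuming NumPy/SciPy. The median (majority) filter used to estimate the cover, the 3x3 entropy window, and the quantile threshold are illustrative assumptions, not necessarily the authors' estimator or decision rule.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def locate_stego_pixels(stego_images, win=3, top_fraction=0.01):
    """Flag pixels whose mean residual plus mean local-entropy difference stands out.

    stego_images: list of equally sized binary (0/1) numpy arrays.
    """
    def local_entropy(img):
        # Fraction of ones in each win x win window, then its binary entropy.
        p = uniform_filter(img.astype(float), size=win)
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    residuals, entropy_diffs = [], []
    for stego in stego_images:
        cover_est = median_filter(stego, size=win)        # estimated cover (assumption)
        residuals.append(np.abs(stego.astype(int) - cover_est.astype(int)))
        entropy_diffs.append(np.abs(local_entropy(stego) - local_entropy(cover_est)))

    # Average both statistics over all supplied stego images, then flag the
    # pixels whose combined score exceeds a high quantile of the score map.
    score = np.mean(residuals, axis=0) + np.mean(entropy_diffs, axis=0)
    threshold = np.quantile(score, 1 - top_fraction)
    return np.argwhere(score > threshold)                 # (row, col) candidates

# Toy usage: flip one pixel of a simple structured cover in every stego image.
stegos = []
for _ in range(20):
    cover = np.zeros((32, 32), dtype=np.uint8)
    cover[:, 16:] = 1                                      # half-white cover
    stego = cover.copy()
    stego[10, 20] ^= 1                                     # hidden payload bit
    stegos.append(stego)
print(locate_stego_pixels(stegos))                         # includes [10 20]
```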
Abstract:
Process models define allowed process execution scenarios. The models are usually depicted as directed graphs, with gateway nodes regulating the control-flow routing logic and edges specifying the execution-order constraints between tasks. While arbitrarily structured control-flow patterns in process models complicate model analysis, they also permit creativity and full expressiveness when capturing non-trivial process scenarios. This paper gives a classification of arbitrarily structured process models based on the hierarchical process model decomposition technique. We identify a structural class of models consisting of block-structured patterns which, when combined, define complex execution scenarios spanning the individual patterns. We show that complex behavior can be localized by examining structural relations of loops in hidden unstructured regions of control flow. The behavioral correctness of process models within these regions can be validated in linear time. These observations allow us to suggest techniques for transforming hidden unstructured regions into block-structured ones.
Abstract:
Institutional graduate capabilities and discipline threshold learning outcomes require science students to demonstrate ethical conduct and social responsibility. However, neither the teaching nor the assessment of these concepts is straightforward. Australian chemistry academics participated in a 2013 workshop to discuss and develop teaching and assessment in these areas, and this paper reports on the outcomes of that workshop. Controversial issues discussed included: how broad is the teacher's mandate, how should the boundaries between personal values and ethics be drawn, and how can ethics be assessed without moral judgement? In this position paper, I argue for deep engagement with ethics and social justice, achieved through case studies and assessed against criteria that require discussion and debate. Strategies to effectively assess science students' understanding of ethics and social responsibility are detailed.
Abstract:
This paper reports a number of findings from the Interests and Recruitment in Science (IRIS) study carried out in Australia in 2011. The findings concern the perceptions of first year university students in science, technology and engineering courses about the influence of museums/science centres and outreach activities on their choice of course. The study found that STE students in general tended to rate museums/science centres as more important in their decisions than outreach activities. However, a closer examination showed that females in engineering courses were significantly more inclined to rate outreach activities as important than were males in engineering courses or females in other courses. The implications of this finding for strategies to encourage more young women into engineering are discussed.
Abstract:
In this paper we describe the approaches adopted to generate the runs submitted to ImageCLEFPhoto 2009, with the aim of promoting document diversity in the rankings. Four of our runs are text-based approaches that employ textual statistics extracted from the captions of images: MMR [1] as a state-of-the-art method for result diversification, two approaches that combine relevance information and clustering techniques, and an instantiation of the Quantum Probability Ranking Principle. The fifth run exploits visual features of the provided images to re-rank the initial results by means of Factor Analysis. The results reveal that our methods based only on text captions consistently improve on their respective baselines, while the approach that combines visual features with textual statistics shows smaller improvements.
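For reference, MMR [1] greedily selects the next document by balancing relevance to the query against similarity to the documents already chosen. The sketch below uses cosine similarity over generic caption embeddings and λ = 0.7; these are illustrative assumptions, not the configuration of the submitted runs.

```python
import numpy as np

def mmr_rerank(query_vec, doc_vecs, lam=0.7, k=10):
    """Greedy Maximal Marginal Relevance re-ranking.

    query_vec: (d,) query embedding; doc_vecs: (n, d) document embeddings.
    lam trades off relevance to the query against redundancy with the
    documents already selected. Returns indices of the k selected documents.
    """
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    relevance = np.array([cos(query_vec, d) for d in doc_vecs])
    selected, remaining = [], list(range(len(doc_vecs)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            redundancy = max((cos(doc_vecs[i], doc_vecs[j]) for j in selected),
                             default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage with random embeddings standing in for caption term vectors.
rng = np.random.default_rng(0)
docs = rng.normal(size=(20, 8))
print(mmr_rerank(rng.normal(size=8), docs, lam=0.7, k=5))
```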
Abstract:
The present study investigated whether observers are equally prone to overlook all kinds of visual events in change blindness. Capitalizing on the finding from visual search studies that the abrupt appearance of an object effectively captures observers' attention, the onset of a new object and the offset of an existing object were contrasted with regard to their detectability when they occurred in a naturalistic scene. In an experiment, participants viewed a series of photograph pairs depicting layouts of seven or eight objects. One object either appeared in or disappeared from the layout, and participants tried to detect this change. Results showed that onsets were detected more quickly than offsets, while both were detected with equivalent accuracy. This suggests that the primacy of onset over offset is a robust phenomenon that likely makes onsets more resistant to change blindness under natural viewing conditions.
Abstract:
Cheating detection in linear secret sharing is considered. The cheating model extends the Tompa-Woll attack and includes cheating during multiple (unsuccessful) recoveries of the secret. It is shown that shares in most linear schemes can be split into subshares, which participants can use to trade the perfectness of the scheme for cheating prevention. Cheating prevention is evaluated in the context of different strategies applied by cheaters.
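For intuition, here is a toy sketch of Tompa-Woll style cheating against Shamir's (linear) scheme: because reconstruction is a linear combination of the shares, a cheater who shifts their share by a known offset learns the true secret from the incorrect value the honest participants accept. The prime field, the (3, 5) parameters, and the single-cheater scenario are illustrative assumptions, not the model analyzed in the paper.

```python
import random

P = 2**31 - 1          # prime field for the toy example (assumption)

def make_shares(secret, t, n):
    """Shamir (t, n) sharing: random degree-(t-1) polynomial f with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for xj, yj in points:
        lam = 1
        for xm, _ in points:
            if xm != xj:
                lam = lam * (-xm) % P * pow(xj - xm, P - 2, P) % P
        total = (total + yj * lam) % P
    return total

secret, t, n = 42, 3, 5
shares = make_shares(secret, t, n)
subset = shares[:t]
assert reconstruct(subset) == secret                 # honest reconstruction works

# Tompa-Woll style cheating: the first participant shifts their share by delta.
delta = 1234
x0, y0 = subset[0]
forged = [(x0, (y0 + delta) % P)] + subset[1:]
wrong = reconstruct(forged)                          # honest parties accept this value

# Reconstruction is linear in the shares, so wrong = secret + lam0 * delta;
# the cheater knows lam0 and delta, hence recovers the true secret.
lam0 = 1
for xm, _ in subset[1:]:
    lam0 = lam0 * (-xm) % P * pow(x0 - xm, P - 2, P) % P
assert (wrong - lam0 * delta) % P == secret
print("true secret:", secret, " value honest parties reconstruct:", wrong)
```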
Abstract:
Over roughly the last decade, people involved in game development have noted the need for more formal models and tools to support the design phase of games. It is argued that the lack of such formal tools currently hinders knowledge transfer among designers. Formal visual languages, on the other hand, can help to express, abstract, and communicate game design concepts more effectively. Moreover, formal tools can assist in the prototyping phase, allowing designers to reason about and simulate game mechanics at an abstract level. In this paper we present an initial investigation into whether workflow patterns, which have already proven effective for modeling business processes, are a suitable way to model task succession in games. Our preliminary results suggest that workflow patterns show promise in this regard, but some limitations, especially with regard to time constraints, currently restrict their potential.