381 results for secure protocol
Abstract:
Most previous work on unconditionally secure multiparty computation has focused on computing over a finite field (or ring). Multiparty computation over other algebraic structures has not received much attention, but is an interesting topic whose study may provide new and improved tools for certain applications. At CRYPTO 2007, Desmedt et al. introduced a construction for a passive-secure multiparty multiplication protocol for black-box groups, reducing it to a certain graph coloring problem and leaving security against active attacks as an open problem. We present the first n-party protocol for unconditionally secure multiparty computation over a black-box group that is secure under an active attack model, tolerating any adversary structure Δ satisfying the Q3 property (in which no union of three subsets from Δ covers the whole player set), which is known to be necessary for achieving security in the active setting. Our protocol uses Maurer’s Verifiable Secret Sharing (VSS) but preserves the essential simplicity of the graph-based approach of Desmedt et al., which avoids each shareholder having to rerun the full VSS protocol after each local computation. A corollary of our result is a new active-secure protocol for general multiparty computation of an arbitrary Boolean circuit.
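For intuition, the basic passive-secure ingredient in this line of work is sharing a group element as a product of random group elements, so that any proper subset of the shares is uniformly distributed. Below is a minimal Python sketch of such n-out-of-n sharing, using the multiplicative group Z_p^* purely as a stand-in black-box group; the graph-colouring multiplication protocol and the VSS layer of the actual construction are not shown.

```python
# Minimal n-out-of-n sharing of a secret in a (multiplicative) group.
# Z_p^* is only a stand-in for a black-box group: the code touches it solely
# through multiplication, inversion and random sampling.
import secrets

P = 2**127 - 1  # Mersenne prime; Z_p^* serves as the example group

def g_mul(a, b): return (a * b) % P
def g_inv(a):    return pow(a, P - 2, P)
def g_random():  return secrets.randbelow(P - 1) + 1

def share(secret, n):
    """Split `secret` into n shares whose ordered product equals the secret."""
    shares = [g_random() for _ in range(n - 1)]
    prefix = 1
    for s in shares:
        prefix = g_mul(prefix, s)
    shares.append(g_mul(g_inv(prefix), secret))  # last share fixes the product
    return shares

def reconstruct(shares):
    out = 1
    for s in shares:
        out = g_mul(out, s)
    return out

secret = g_random()
assert reconstruct(share(secret, 5)) == secret
# Any n-1 shares are jointly uniform, so they reveal nothing about the secret.
```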
Abstract:
NTRUEncrypt is a fast and practical lattice-based public-key encryption scheme, which has been standardized by IEEE, but until recently, its security analysis relied only on heuristic arguments. Recently, Stehlé and Steinfeld showed that a slight variant (that we call pNE) could be proven to be secure under chosen-plaintext attack (IND-CPA), assuming the hardness of worst-case problems in ideal lattices. We present a variant of pNE called NTRUCCA, that is IND-CCA2 secure in the standard model assuming the hardness of worst-case problems in ideal lattices, and only incurs a constant factor overhead in ciphertext and key length over the pNE scheme. To our knowledge, our result gives the first IND-CCA2 secure variant of NTRUEncrypt in the standard model, based on standard cryptographic assumptions. As an intermediate step, we present a construction for an All-But-One (ABO) lossy trapdoor function from pNE, which may be of independent interest. Our scheme uses the lossy trapdoor function framework of Peikert and Waters, which we generalize to the case of (k − 1)-of-k-correlated input distributions.
Abstract:
At Eurocrypt’04, Freedman, Nissim and Pinkas introduced a fuzzy private matching problem. The problem is defined as follows. Given two parties, each of them having a set of vectors where each vector has T integer components, the fuzzy private matching is to securely test if each vector of one set matches any vector of the other set in at least t components, where t < T. In the conclusion of their paper, they asked whether it was possible to design a fuzzy private matching protocol without incurring communication complexity proportional to the binomial factor (T choose t). We answer their question in the affirmative by presenting a protocol based on homomorphic encryption, combined with the novel notion of a share-hiding error-correcting secret sharing scheme, which we show how to implement with efficient decoding using interleaved Reed-Solomon codes. This scheme may be of independent interest. Our protocol is provably secure against passive adversaries, and has better efficiency than previous protocols for certain parameter values.
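For reference, the underlying (non-private) matching predicate is straightforward; the protocol's difficulty lies in evaluating it obliviously. A minimal sketch follows, with the (T choose t) factor that earlier approaches incur shown alongside for comparison (vector values and parameters are illustrative):

```python
from math import comb

def fuzzy_match(u, v, t):
    """True if two T-component vectors agree in at least t positions."""
    assert len(u) == len(v)
    return sum(1 for a, b in zip(u, v) if a == b) >= t

u = [3, 1, 4, 1, 5, 9]
v = [3, 1, 4, 0, 0, 9]
print(fuzzy_match(u, v, 4))   # True: the vectors agree in 4 of T = 6 positions
print(comb(6, 4))             # 15: the (T choose t) communication factor the paper avoids
```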
Abstract:
Global cereal production will need to increase by 50% to 70% to feed a world population of about 9 billion by 2050. This intensification is forecast to occur mostly in subtropical regions, where warm and humid conditions can promote high N2O losses from cropped soils. To secure high crop production without exacerbating N2O emissions, new nitrogen (N) fertiliser management strategies are necessary. This one-year study evaluated the efficacy of a nitrification inhibitor (3,4-dimethylpyrazole phosphate—DMPP) and different N fertiliser rates to reduce N2O emissions in a wheat–maize rotation in subtropical Australia. Annual N2O emissions were monitored using a fully automated greenhouse gas measuring system. Four treatments were fertilised with different rates of urea, including a control (40 kg-N ha−1 year−1), a conventional N fertiliser rate adjusted for estimated residual soil N (120 kg-N ha−1 year−1), a conventional N fertiliser rate (240 kg-N ha−1 year−1) and a conventional N fertiliser rate (240 kg-N ha−1 year−1) with nitrification inhibitor (DMPP) applied at top dressing. The maize season was by far the main contributor to annual N2O emissions due to the high soil moisture and temperature conditions, as well as the elevated N rates applied. Annual N2O emissions in the four treatments amounted to 0.49, 0.84, 2.02 and 0.74 kg N2O–N ha−1 year−1, respectively, and corresponded to emission factors of 0.29%, 0.39%, 0.69% and 0.16% of total N applied. Halving the annual conventional N fertiliser rate in the adjusted N treatment led to N2O emissions comparable to those of the DMPP treatment but severely penalised maize yield. The application of DMPP produced a significant reduction in N2O emissions only in the maize season. The use of DMPP with urea at the conventional N rate reduced annual N2O emissions by more than 60% but did not affect crop yields. The results of this study indicate that: (i) future strategies aimed at securing subtropical cereal production without increasing N2O emissions should focus on the fertilisation of the summer crop; (ii) adjusting conventional N fertiliser rates for estimated residual soil N is an effective practice to reduce N2O emissions but can lead to substantial yield losses if the residual soil N is not assessed correctly; (iii) the application of DMPP is a feasible strategy to reduce annual N2O emissions from subtropical wheat–maize rotations. However, at the N rates tested in this study DMPP urea did not increase crop yields, making it impossible to recoup the extra costs associated with this fertiliser. The findings of this study will help farmers and policy makers define effective fertilisation strategies to reduce N2O emissions from subtropical cereal cropping systems while maintaining high crop productivity. More research is needed to assess whether DMPP urea allows conventional N fertiliser rates to be reduced, which would lower fertilisation costs and further abate fertiliser-induced N2O emissions.
Abstract:
An increasing number of countries are faced with an aging population increasingly in need of healthcare services. For any e-health information system, earning the trust of clients who may have little knowledge of the security schemes involved is paramount. In addition, scalability has become a critical aspect of system design, development and ongoing management. Cryptographic systems provide the security provisions needed for confidentiality, authentication, integrity and non-repudiation, but cryptographic key management must be secure, efficient and effective in developing an attitude of trust in system users. Digital certificate-based Public Key Infrastructure has long been the technology of choice, or at least of availability, for information security/assurance; however, there appears to be a notable lack of successful implementations and deployments globally. Moreover, recent issues with Certificate Authority security have damaged trust in these schemes. This paper proposes the adoption of a centralised public key registry structure, a non-certificate-based scheme, for large-scale e-health information systems. The proposed structure removes complex certificate management, revocation and certificate validation structures while maintaining overall system security. Moreover, the registry concept may be easier for both healthcare professionals and patients to understand and trust.
Abstract:
This paper presents a formal security analysis of the current Australian e-passport implementation using the model checking tools CASPER/CSP/FDR. We highlight security issues in the current implementation and identify new threats when an e-passport system is integrated with an automated processing system such as SmartGate. The paper also provides a security analysis of the European Union (EU) proposal for Extended Access Control (EAC), which is intended to provide improved security in protecting the biometric information of the e-passport bearer. The current e-passport specification fails to provide a list of adequate security goals that could be used for security evaluation. We fill this gap by presenting a collection of security goals for the evaluation of e-passport protocols. Our analysis confirms existing security weaknesses that were previously identified and shows that both the Australian e-passport implementation and the EU proposal fail to address many security and privacy aspects that are paramount in implementing a secure border control mechanism. ACM Classification: C.2.2 (Communication/Networking and Information Technology – Network Protocols – Model Checking), D.2.4 (Software Engineering – Software/Program Verification – Formal Methods), D.4.6 (Operating Systems – Security and Privacy Protection – Authentication)
Abstract:
In this article, we study the security of the IDEA block cipher when it is used in various single-length or double-length hashing modes. Even though this cipher is still considered secure, we show that one should avoid its use as an internal primitive for block cipher based hashing. In particular, we are able to instantaneously generate free-start collisions for most modes, and even semi-free-start collisions, pseudo-preimages or hash collisions in practical complexity. This work gives a practical example of the gap that exists between secret-key and known- or chosen-key security for block ciphers. Moreover, we also settle the 20-year-old open question concerning the security of the Abreast-DM and Tandem-DM double-length compression functions, originally invented to be instantiated with IDEA. Our attacks have been verified experimentally and work even for strengthened versions of IDEA with any number of rounds.
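For context, block-cipher-based hashing turns a cipher E into a compression function; the classic single-length Davies-Meyer mode, H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}, is sketched below. A toy 64-bit keyed permutation stands in for IDEA purely to keep the sketch runnable; it has none of IDEA's structure or security, and the attacks described above are not reproduced here.

```python
# Davies-Meyer single-length hashing mode: H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}.
MASK = (1 << 64) - 1

def toy_cipher(key, block):
    """Placeholder keyed permutation on 64-bit blocks (NOT IDEA, not secure)."""
    x = block
    for i in range(4):
        x = ((x ^ key) * 0x9E3779B97F4A7C15 + i) & MASK   # invertible mixing step
        x = ((x << 23) | (x >> 41)) & MASK                # 64-bit rotation
    return x

def davies_meyer(message_blocks, iv=0x0123456789ABCDEF):
    h = iv
    for m in message_blocks:
        h = toy_cipher(m, h) ^ h   # feed-forward: the message block acts as the cipher key
    return h

print(hex(davies_meyer([0x1111, 0x2222, 0x3333])))
```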
Abstract:
This special issue of Networking Science focuses on the Next Generation Network (NGN), which enables the deployment of access-independent services over converged fixed and mobile networks. NGN is a packet-based network and uses the Internet Protocol (IP) to transport the various types of traffic (voice, video, data and signalling). NGN facilitates easy adoption of distributed computing applications by providing high-speed connectivity in a converged networked environment. It also makes end-user devices and applications highly intelligent and efficient by empowering them with programmability and remote configuration options. However, there are a number of important challenges in provisioning next generation network technologies in a converged communication environment. Key challenges include QoS, switching and routing, management and control, and security, all of which must be addressed urgently. The consideration of architectural issues in the design and provision of secure services for NGN deserves special attention and is therefore the main theme of this special issue.
Abstract:
We propose a new protocol providing cryptographically secure authentication to unaided humans against passive adversaries. We also propose a new generic passive attack on human identification protocols. The attack is an application of Coppersmith’s baby-step giant-step algorithm to human identification protocols. Under this attack, the achievable security of some of the best candidate human identification protocols in the literature is further reduced. We show that our protocol preserves similar usability while achieving better security than these protocols. A comprehensive security analysis is provided, suggesting parameters that guarantee the desired levels of security.
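The attack adapts a baby-step giant-step style meet-in-the-middle search to human identification protocols; for illustration only, the classic baby-step giant-step algorithm for discrete logarithms is sketched below, as it conveys the square-root time/memory trade-off involved (this is not the paper's attack itself).

```python
# Classic baby-step giant-step: find x with g^x = h (mod p) in about sqrt(p) time and memory.
from math import isqrt

def bsgs(g, h, p):
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: store g^j for j < m
    step = pow(g, (p - 2) * m, p)                # g^(-m) mod p via Fermat inversion
    gamma = h
    for i in range(m):                           # giant steps: check h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * step) % p
    return None

p, g = 1_000_003, 2                              # small prime modulus for the demo
h = pow(g, 123_456, p)
x = bsgs(g, h, p)
assert x is not None and pow(g, x, p) == h
```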
Abstract:
Secure multi-party computation (MPC) protocols enable a set of n mutually distrusting participants P_1, ..., P_n, each with their own private input x_i, to compute a function Y = F(x_1, ..., x_n), such that at the end of the protocol all participants learn the correct value of Y while the secrecy of the private inputs is maintained. Classical results in unconditionally secure MPC indicate that in the presence of an active adversary, every function can be computed if and only if the number of corrupted participants, t_a, is smaller than n/3. Relaxing the requirement of perfect secrecy and utilizing broadcast channels, one can improve this bound to t_a < n/2. All existing MPC protocols assume that uncorrupted participants are truly honest, i.e., they are not even curious about learning other participants' secret inputs. Based on this assumption, some MPC protocols are designed in such a way that after the elimination of all misbehaving participants, the remaining ones learn all information in the system. This is not consistent with maintaining the privacy of the participants' inputs. Furthermore, an improvement of the classical results given by Fitzi, Hirt, and Maurer indicates that in addition to t_a actively corrupted participants, the adversary may simultaneously corrupt some participants passively. This is in contrast to the assumption that participants who are not corrupted by an active adversary are truly honest. This paper examines the privacy of MPC protocols and introduces the notion of an omnipresent adversary, which cannot be eliminated from the protocol. The omnipresent adversary can be passive, active or mixed. We assume that up to a minority of the participants who are not corrupted by an active adversary can be corrupted passively, with the restriction that at any time the number of corrupted participants does not exceed a predetermined threshold. We also show that the existence of a t-resilient protocol for a group of n participants implies the existence of a t'-private protocol for a group of n' participants. That is, the elimination of misbehaving participants from a t-resilient protocol leads to the decomposition of the protocol. Our adversary model stipulates that an MPC protocol never operates with a set of truly honest participants (which is a more realistic scenario). Therefore, the privacy of all participants who properly follow the protocol will be maintained. We present a novel disqualification protocol to avoid a loss of privacy for participants who properly follow the protocol.
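Threshold secret sharing of the inputs is the standard ingredient behind such corruption bounds; as a point of reference, a minimal Shamir sharing over a prime field is sketched below, in which any t shares reveal nothing about the secret and any t+1 reconstruct it. This is only the sharing layer that MPC protocols build on, not an MPC protocol or the disqualification protocol described above.

```python
# Minimal Shamir secret sharing over a prime field: a degree-t polynomial is sampled
# with the secret as constant term; any t shares are uniformly distributed, and any
# t+1 shares reconstruct the secret via Lagrange interpolation at x = 0.
import secrets

P = 2**61 - 1  # prime modulus

def share(secret, t, n):
    coeffs = [secret % P] + [secrets.randbelow(P) for _ in range(t)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):        # Horner evaluation of the degree-t polynomial
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P          # Lagrange basis evaluated at x = 0
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

t, n = 2, 5
sh = share(123456789, t, n)
assert reconstruct(sh[:t + 1]) == 123456789   # any t+1 shares suffice; t shares leak nothing
```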
Abstract:
Implementation of an electronic tendering (e-tendering) system requires careful attention to the needs of the system and its various participants. Fairness in e-tendering is of utmost importance. Current proposals and implementations do not provide fairness and are thus vulnerable to collusion and favouritism. Dishonest participants, either the principal or a tenderer, may collude to alter or view competing tenders, which would give the favoured tenderer a greater chance of winning the contract. This paper proposes an e-tendering system that is secure and fair to all participants. We employ an anonymous token system along with a signed commitment approach to achieve a publicly verifiable fair e-tendering protocol. We also provide an analysis of the protocol that confirms the security of our proposal against the security goals for an e-tendering system.
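A signed commitment in this setting can be realised, for example, as an ordinary digital signature over a hash commitment to the tender; the sketch below shows only the hash-commitment part (commit before the deadline, open afterwards), with the anonymous token and signature layers omitted. Names and values are illustrative.

```python
# Minimal hash-based commitment: a tenderer commits to a bid before the closing time
# and opens it afterwards; the commitment hides the bid and binds the tenderer to it.
import hashlib, secrets

def commit(tender_bytes):
    nonce = secrets.token_bytes(32)                           # hides the tender until opening
    digest = hashlib.sha256(nonce + tender_bytes).hexdigest()
    return digest, nonce                                      # publish digest; keep nonce secret

def verify_opening(digest, tender_bytes, nonce):
    return hashlib.sha256(nonce + tender_bytes).hexdigest() == digest

bid = b"supply 100 units at $42 each"
digest, nonce = commit(bid)                   # submitted (and signed) before the deadline
assert verify_opening(digest, bid, nonce)     # opened and publicly checked after the deadline
```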
Abstract:
Given the increased importance of adaptation debates in global climate negotiations, pressure to achieve biodiversity, food and water security through managed landscape-scale adaptation will likely increase across the globe over the coming decade. In parallel, emerging market-based, terrestrial greenhouse gas abatement (GGA) programs present a real opportunity to secure such adaptation to climate change through enhanced landscape resilience. Australia has an opportunity to take advantage of such programs through the regional planning aspects of its governance arrangements for natural resource management (NRM). This paper explores the reforms needed in Australia's regional NRM planning systems to ensure that they are better able to direct the nation's emerging GGA programs towards enhanced landscape adaptation. © 2013 Planning Institute Australia.
Abstract:
In the current market, extensive software development is taking place and the software industry is thriving. Major software giants have identified source code theft as a major threat to revenues. By inserting an identity-establishing watermark in the source code, a company can prove its ownership of the source code. In this paper, we propose a watermarking scheme for C/C++ source code that exploits the language's restrictions. If a function calls another function, the latter needs to be defined in the code before the former, unless one uses function pre-declarations. We embed the watermark in the code by imposing an ordering on otherwise mutually independent functions through the introduction of bogus dependencies. Removing these dependencies to erase the watermark requires extensive manual intervention, making the attack infeasible. The scheme is also secure against subtractive and additive attacks. Using our watermarking scheme, an n-bit watermark can be embedded in a program having n independent functions. The scheme has been implemented on several sample codes and the resulting performance changes are analyzed.
Abstract:
Database watermarking has received significant research attention in the current decade. However, almost all watermarking models have been either irreversible (the original relation cannot be restored from the watermarked relation) and/or non-blind (requiring the original relation to detect the watermark in the watermarked relation). Such models have several disadvantages compared with reversible and blind watermarking (requiring only the watermarked relation and a secret key, from which the watermark is detected and the original relation is restored), including the inability to identify the rightful owner in case of successful secondary watermarking, the inability to revert the relation to the original data set (required in high-precision industries) and the requirement to store the unmarked relation in secure secondary storage. To overcome these problems, we propose a watermarking scheme that is both reversible and blind. We utilize difference expansion on integers to achieve reversibility. The major advantages provided by our scheme are reversibility to the high-quality original data set, rightful owner identification, resistance against secondary watermarking attacks, and no need to store the original database in secure secondary storage.
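As a point of reference, difference expansion embeds one watermark bit into a pair of integers and allows exact recovery of both original values; a minimal Tian-style sketch is below. The full scheme's tuple and attribute selection, secret-key-driven bit placement and overflow handling are not shown.

```python
# Tian-style difference expansion on a pair of integer attribute values:
# one watermark bit is embedded and both original values are recovered exactly.

def de_embed(x, y, bit):
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + bit                       # expand the difference to carry one bit
    return l + (h2 + 1) // 2, l - h2 // 2  # watermarked pair (x', y')

def de_extract(xw, yw):
    l, h2 = (xw + yw) // 2, xw - yw
    bit, h = h2 & 1, h2 >> 1               # recover the bit, then the original difference
    return (l + (h + 1) // 2, l - h // 2), bit

x, y = 118, 112                            # e.g. two correlated numeric attributes of a tuple
xw, yw = de_embed(x, y, 1)
(orig_x, orig_y), bit = de_extract(xw, yw)
assert (orig_x, orig_y, bit) == (x, y, 1)  # blind and exact (reversible) recovery
```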
Abstract:
We present efficient protocols for private set disjointness tests. We start from the intuition behind our protocols, which applies Sylvester matrices. Unfortunately, this simple construction is insecure, as it reveals information about the cardinality of the intersection; more specifically, it discloses a lower bound on it. By using Lagrange interpolation we provide a protocol for the honest-but-curious case that does not reveal any additional information. Finally, we describe a protocol that is secure against malicious adversaries. The protocol applies a verification test to detect misbehaving participants. Both protocols require O(1) rounds of communication. Our protocols are more efficient than previous protocols in terms of communication and computation overhead. Unlike previous protocols, whose security relies on computational assumptions, our protocols provide information-theoretic security. To our knowledge, our protocols are the first ones designed without generic secure function evaluation. More importantly, they are the most efficient protocols for private disjointness tests in the malicious adversary case.
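The Sylvester-matrix intuition can be seen non-privately: encode each set as a polynomial whose roots are its elements; the two sets intersect exactly when the polynomials share a root, i.e. when their resultant (the determinant of the Sylvester matrix) is zero. A minimal sketch of that plain, non-private test over a prime field follows; the protocols evaluate such quantities obliviously, which this sketch does not attempt.

```python
# Non-private illustration: sets are disjoint iff the resultant (determinant of the
# Sylvester matrix) of their root polynomials is non-zero over GF(P).
P = 2**31 - 1  # prime field

def poly_from_set(s):
    """Monic polynomial prod(x - a) for a in s; coefficients listed highest degree first."""
    coeffs = [1]
    for a in s:
        coeffs = [(hi - a * lo) % P for hi, lo in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

def sylvester(f, g):
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = [[0] * i + f + [0] * (size - m - 1 - i) for i in range(n)]   # shifted copies of f
    rows += [[0] * i + g + [0] * (size - n - 1 - i) for i in range(m)]  # shifted copies of g
    return rows

def det_mod_p(matrix):
    """Determinant modulo P via Gaussian elimination."""
    a = [row[:] for row in matrix]
    n, det = len(a), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det
        det = det * a[col][col] % P
        inv = pow(a[col][col], P - 2, P)
        for r in range(col + 1, n):
            factor = a[r][col] * inv % P
            a[r] = [(x - factor * y) % P for x, y in zip(a[r], a[col])]
    return det % P

def disjoint(set_a, set_b):
    return det_mod_p(sylvester(poly_from_set(set_a), poly_from_set(set_b))) != 0

print(disjoint({1, 5, 9}, {2, 4, 8}))  # True: no common element, resultant is non-zero
print(disjoint({1, 5, 9}, {5, 7}))     # False: 5 is shared, so the resultant vanishes
```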