990 results for distributed computation
Abstract:
We study the natural problem of secure n-party computation (in the computationally unbounded attack model) of circuits over an arbitrary finite non-Abelian group (G,⋅), which we call G-circuits. Besides its intrinsic interest, this problem is also motivated by a completeness result of Barrington, which states that such protocols can be applied for general secure computation of arbitrary functions. For flexibility, we are interested in protocols which only require black-box access to the group G (i.e., the only computations performed by players in the protocol are a group operation, a group inverse, or sampling a uniformly random group element). Our investigations focus on the passive adversarial model, where up to t of the n participating parties are corrupted.
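The black-box requirement pins down a very small interface. Below is a minimal sketch of that interface together with the basic blinding trick such protocols rely on; the class and method names are illustrative, not from the paper, and the test instantiation is Abelian purely for convenience.

```python
# Sketch of black-box group access: a protocol may only apply the group
# operation, take inverses, and sample uniformly random elements.
# Names are illustrative, not from the paper.
import random
from abc import ABC, abstractmethod

class BlackBoxGroup(ABC):
    @abstractmethod
    def op(self, a, b): ...      # group operation a * b
    @abstractmethod
    def inv(self, a): ...        # group inverse a^(-1)
    @abstractmethod
    def sample(self): ...        # uniformly random group element

class ZnAdditive(BlackBoxGroup):
    """Test instantiation: the additive group Z_n (Abelian, for convenience)."""
    def __init__(self, n):
        self.n = n
    def op(self, a, b):
        return (a + b) % self.n
    def inv(self, a):
        return (-a) % self.n
    def sample(self):
        return random.randrange(self.n)

# One-time-pad-style blinding, the basic primitive behind such protocols:
# r is uniform, so r * x alone reveals nothing about x.
G = ZnAdditive(101)
x = 42
r = G.sample()
blinded = G.op(r, x)
assert G.op(G.inv(r), blinded) == x
```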
Abstract:
Classical results in unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants such that no set of t < n/2 participants learns any additional information beyond what they could derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup (‘semi-ideal’) model, in which the participants are supplied with some auxiliary information (which is random and independent of the participants' inputs) ahead of the protocol execution (such information can be purchased as a “commodity” well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field to a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ · n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
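For context, here is a minimal sketch of the two sharing representations that such a conversion subprotocol bridges, over a prime field. This shows the representations only; it is not the paper's conversion subprotocol.

```python
# Additive shares sum to the secret; multiplicative shares multiply to it
# (the secret must be non-zero for the multiplicative form).
import random

P = 2**61 - 1  # a prime; all arithmetic is in the field Z_P

def share_additive(secret, n):
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares  # sum(shares) % P == secret

def share_multiplicative(secret, n):
    assert secret % P != 0, "multiplicative sharing needs a non-zero secret"
    shares = [random.randrange(1, P) for _ in range(n - 1)]
    prod = 1
    for s in shares:
        prod = prod * s % P
    shares.append(secret * pow(prod, -1, P) % P)
    return shares  # product of shares mod P == secret

x = 1234
add = share_additive(x, 5)
mul = share_multiplicative(x, 5)
assert sum(add) % P == x
prod = 1
for s in mul:
    prod = prod * s % P
assert prod == x
```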
Abstract:
Bundle adjustment is one of the essential components of the computer vision toolbox. This paper revisits the resection-intersection approach, which has previously been shown to have inferior convergence properties. Modifications are proposed that greatly improve the performance of this method, resulting in a fast and accurate approach. Firstly, a linear triangulation step is added to the intersection stage, yielding higher accuracy and improved convergence rate. Secondly, the effect of parameter updates is tracked in order to reduce wasteful computation; only variables coupled to significantly changing variables are updated. This leads to significant improvements in computation time, at the cost of a small, controllable increase in error. Loop closures are handled effectively without the need for additional network modelling. The proposed approach is shown experimentally to yield comparable accuracy to a full sparse bundle adjustment (20% error increase) while computation time scales much better with the number of variables. Experiments on a progressive reconstruction system show the proposed method to be more efficient by a factor of 65 to 177, and 4.5 times more accurate (increasing over time) than a localised sparse bundle adjustment approach.
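The linear triangulation step mentioned above is, in its textbook form, a homogeneous least-squares problem. Here is a generic two-view DLT sketch (the standard formulation, not necessarily the exact variant used in the paper):

```python
# Generic linear (DLT) triangulation of one 3D point from two views.
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least squares: X is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Tiny self-check: two synthetic cameras observing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
assert np.allclose(triangulate_linear(P1, P2, x1, x2), X_true[:3])
```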
Abstract:
Most previous work on unconditionally secure multiparty computation has focused on computing over a finite field (or ring). Multiparty computation over other algebraic structures has not received much attention, but is an interesting topic whose study may provide new and improved tools for certain applications. At CRYPTO 2007, Desmedt et al. introduced a construction for a passive-secure multiparty multiplication protocol for black-box groups, reducing it to a certain graph coloring problem and leaving as an open problem to achieve security against active attacks. We present the first n-party protocol for unconditionally secure multiparty computation over a black-box group which is secure under an active attack model, tolerating any adversary structure Δ satisfying the Q³ property (in which no union of three subsets from Δ covers the whole player set), which is known to be necessary for achieving security in the active setting. Our protocol uses Maurer’s Verifiable Secret Sharing (VSS) but preserves the essential simplicity of the graph-based approach of Desmedt et al., which avoids each shareholder having to rerun the full VSS protocol after each local computation. A corollary of our result is a new active-secure protocol for general multiparty computation of an arbitrary Boolean circuit.
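The Q³ condition is easy to state operationally. A brute-force sketch of the check, purely illustrative (threshold structures with t < n/3 are Q³, those with t ≥ n/3 are not):

```python
# Q3: no union of three sets in Delta covers the full player set.
from itertools import combinations, combinations_with_replacement

def is_q3(players, delta):
    """players: set of party ids; delta: iterable of corruptible subsets."""
    delta = [frozenset(s) for s in delta]
    return all(set().union(a, b, c) != set(players)
               for a, b, c in combinations_with_replacement(delta, 3))

players = set(range(7))
# t = 2 < 7/3: three corrupt sets cover at most 6 of 7 players, so Q3 holds.
delta_ok = [set(c) for c in combinations(players, 2)]
assert is_q3(players, delta_ok)
# t = 3 >= 7/3: three 3-sets can cover all 7 players, so Q3 fails.
delta_bad = [set(c) for c in combinations(players, 3)]
assert not is_q3(players, delta_bad)
```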
Abstract:
Motivation: Awareness is an integral part of remote collaborative work and has been an important theme within CSCW research. Our project aims at understanding and mediating non-verbal cues between remote participants involved in a design project. Research approach: Within the AMIDA project we focus on distributed 'cooperative design' teams. We especially focus on 'material' signals: signals that people communicate through material artefacts, locations and their embodied actions. We apply an ethnographic approach to understand the role of physical artefacts in a co-located, naturalistic design setting. Based on the results, we will generate implications for supporting remote design work. We plan to develop a mixed-reality interface supported by a shared awareness display. This awareness display will provide information about the activities happening in the design room to remotely located participants. Findings/Design: Our preliminary investigation with real-world design teams suggests that both the materiality of designers' work settings and their social practices play an important role in understanding the material signals that are at play. Originality/Value: Most research on supporting computer-mediated communication has focused on either face-to-face or linguistically oriented communication paradigms. Our research focuses on mediating the non-verbal, material cues to support collaborative activities without impoverishing what designers do in their day-to-day working lives. Take-away message: An ethnographic approach allows us to understand the naturalistic practices of design teams, which can lead to designing effective technologies to support group work. In that respect, the findings of our research will have generic value beyond the chosen application domain (design teams).
Abstract:
In this chapter we continue the exposition of crypto topics that was begun in the previous chapter. This chapter covers secret sharing, threshold cryptography, signature schemes, and finally quantum key distribution and quantum cryptography. As in the previous chapter, we have focused only on the essentials of each topic. We have selected in the bibliography a list of representative items, which can be consulted for further details. First we give a synopsis of the topics that are discussed in this chapter. Secret sharing is concerned with the problem of how to distribute a secret among a group of participating individuals, or entities, so that only predesignated collections of individuals are able to recreate the secret by collectively combining the parts of the secret that were allocated to them. There are numerous applications of secret-sharing schemes in practice. One example of secret sharing occurs in banking. For instance, the combination to a vault may be distributed in such a way that only specified collections of employees can open the vault by pooling their portions of the combination. In this way the authority to initiate an action, e.g., the opening of a bank vault, is divided for the purposes of providing security and for added functionality, such as auditing, if required. Threshold cryptography is a relatively recently studied area of cryptography. It deals with situations where the authority to initiate or perform cryptographic operations is distributed among a group of individuals. Many of the standard operations of single-user cryptography have counterparts in threshold cryptography. Signature schemes deal with the problem of generating and verifying electronic signatures for documents. A subclass of signature schemes is concerned with the shared generation and shared verification of signatures, where a collaborating group of individuals is required to perform these actions. A new paradigm of security has recently been introduced into cryptography with the emergence of the ideas of quantum key distribution and quantum cryptography. While classical cryptography employs various mathematical techniques to restrict eavesdroppers from learning the contents of encrypted messages, in quantum cryptography the information is protected by the laws of physics.
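As a concrete instance of the vault example, Shamir's classic threshold scheme lets any k of n shares recreate the secret while fewer reveal nothing. A minimal sketch over a prime field (illustrative; the chapter covers the topic in far more depth):

```python
# Shamir threshold secret sharing: secret is the constant term of a random
# degree-(k-1) polynomial; shares are evaluations at distinct points.
import random

P = 2**61 - 1  # prime modulus; all arithmetic is in the field Z_P

def share(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation of the polynomial
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(12345, k=3, n=5)
assert reconstruct(shares[:3]) == 12345
assert reconstruct(shares[1:4]) == 12345
```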
Abstract:
Secure multi-party computation (MPC) protocols enable a set of n mutually distrusting participants P_1, ..., P_n, each with their own private input x_i, to compute a function Y = F(x_1, ..., x_n), such that at the end of the protocol all participants learn the correct value of Y while secrecy of the private inputs is maintained. Classical results in unconditionally secure MPC indicate that in the presence of an active adversary, every function can be computed if and only if the number of corrupted participants, t_a, is smaller than n/3. Relaxing the requirement of perfect secrecy and utilizing broadcast channels, one can improve this bound to t_a < n/2. All existing MPC protocols assume that uncorrupted participants are truly honest, i.e., they are not even curious about learning other participants' secret inputs. Based on this assumption, some MPC protocols are designed in such a way that after elimination of all misbehaving participants, the remaining ones learn all information in the system. This is not consistent with maintaining privacy of the participants' inputs. Furthermore, an improvement of the classical results given by Fitzi, Hirt, and Maurer indicates that in addition to t_a actively corrupted participants, the adversary may simultaneously corrupt some participants passively. This is in contrast to the assumption that participants who are not corrupted by an active adversary are truly honest. This paper examines the privacy of MPC protocols and introduces the notion of an omnipresent adversary, which cannot be eliminated from the protocol. The omnipresent adversary can be passive, active, or mixed. We assume that up to a minority of participants who are not corrupted by an active adversary can be corrupted passively, with the restriction that at any time the number of corrupted participants does not exceed a predetermined threshold. We also show that the existence of a t-resilient protocol for a group of n participants implies the existence of a t'-private protocol for a group of n' participants; that is, the elimination of misbehaving participants from a t-resilient protocol leads to the decomposition of the protocol. Our adversary model stipulates that an MPC protocol never operates with a set of truly honest participants (which is a more realistic scenario). Therefore, privacy of all participants who properly follow the protocol will be maintained. We present a novel disqualification protocol to avoid a loss of privacy of participants who properly follow the protocol.
Abstract:
The placement of the mappers and reducers on the machines directly affects the performance and cost of a MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it by comparing it with several other heuristics on solution quality and computation time, solving a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics. We also verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional mapper/reducer placement. The comparison shows that the computation using our mapper/reducer placement is much cheaper while still satisfying the computation deadline.
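Since the placement problem generalizes bin packing, the flavour of such heuristics can be shown on the bin-packing core. Below is a first-fit-decreasing sketch (a classic heuristic with a well-known constant-factor guarantee for plain bin packing; it is not the paper's algorithm, which also models cost and deadlines):

```python
# First-fit decreasing on the bin-packing core of the placement problem:
# tasks (mappers/reducers) with resource demands packed onto machines of
# fixed capacity.
def first_fit_decreasing(demands, capacity):
    """Return a list of machines, each a list of task demands."""
    machines = []
    for d in sorted(demands, reverse=True):
        for m in machines:
            if sum(m) + d <= capacity:
                m.append(d)   # fits on an already-open machine
                break
        else:
            machines.append([d])   # open a new machine
    return machines

tasks = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]
placement = first_fit_decreasing(tasks, capacity=1.0)
print(len(placement), placement)   # machines used, and the packing itself
```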
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new grouping genetic algorithm for the mappers/reducers placement problem in cloud computing. Compared with the original grouping genetic algorithm, ours uses an innovative coding scheme and also eliminates the inversion operator, which is an essential operator in the original algorithm. The new grouping genetic algorithm is evaluated by experiments, and the experimental results show that it is much more efficient than four popular algorithms for the problem, including the original grouping genetic algorithm.
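For intuition, the defining feature of a grouping genetic algorithm is that a chromosome encodes whole groups (machines) rather than one gene per task, so genetic operators act on groups. A minimal sketch of that representation and a group-injection step with the standard repair; this illustrates the general GGA idea, not the paper's new coding scheme:

```python
# Grouping representation: the chromosome is the partition itself.
from dataclasses import dataclass, field

@dataclass
class GroupingChromosome:
    groups: list = field(default_factory=list)   # each group: list of task ids

# Classic per-task encoding of a placement: position i holds task i's machine.
per_task = [0, 1, 0, 2, 1]
# The same placement in grouping form, machine by machine.
grouping = GroupingChromosome(groups=[[0, 2], [1, 4], [3]])

def inject_and_repair(parent, donor_group):
    """Insert a whole group from another parent, then remove the duplicated
    tasks from the existing groups (the standard GGA repair step)."""
    moved = set(donor_group)
    groups = [[t for t in g if t not in moved] for g in parent.groups]
    groups = [g for g in groups if g]      # drop groups emptied by the repair
    groups.append(list(donor_group))
    return GroupingChromosome(groups)

child = inject_and_repair(grouping, donor_group=[1, 3])
print(child.groups)   # [[0, 2], [4], [1, 3]]
```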
Abstract:
A Software-as-a-Service, or SaaS, can be delivered in a composite form consisting of a set of application and data components that work together to deliver higher-level functional software. Components in a composite SaaS may need to be scaled (replicated or deleted) to accommodate the users' load. It may not be necessary to replicate all components of the SaaS, as some components can be shared by other instances. On the other hand, when the load is low, some of the instances may need to be deleted to avoid resource underutilisation. Thus, it is important to determine which components are to be scaled such that the performance of the SaaS is still maintained. Extensive research on SaaS resource management in the Cloud has not yet addressed the challenges of the scaling process for composite SaaS. Therefore, a hybrid genetic algorithm is proposed that utilises knowledge of the problem and explores the best combination of scaling plans for the components. Experimental results demonstrate that the proposed algorithm outperforms existing heuristic-based solutions.
Abstract:
Suppose two parties, holding vectors A = (a_1, a_2, ..., a_n) and B = (b_1, b_2, ..., b_n) respectively, wish to know whether a_i > b_i for all i, without disclosing any private input. This problem is called the vector dominance problem, and is closely related to the well-studied problem of securely comparing two numbers (Yao's millionaires problem). In this paper, we propose several protocols for this problem, which improve upon existing protocols in round complexity or communication/computation complexity.
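The functionality itself is tiny; only its secure evaluation is hard. As a plaintext reference for what the protocols compute (in the secure setting each party sees only the one-bit output, never the other party's vector):

```python
# Ideal functionality of the vector dominance problem: a_i > b_i for all i.
def vector_dominance(A, B):
    assert len(A) == len(B)
    return all(a > b for a, b in zip(A, B))

assert vector_dominance([3, 5, 9], [1, 4, 2]) is True
assert vector_dominance([3, 5, 9], [1, 6, 2]) is False
```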
Abstract:
A comparison of relay power minimisation subject to a received signal-to-noise ratio (SNR) constraint at the receiver, and SNR maximisation subject to a constraint on the total transmitted power of the relays, is presented for a typical wireless network with distributed beamforming. It is desirable to maximise receiver quality-of-service (QoS) and also to minimise the cost of transmission in terms of power, so these two optimisation problems are very common and have been addressed separately in the literature. It is shown that SNR maximisation subject to a power constraint and power minimisation subject to an SNR constraint yield the same results for a typical wireless network, which proves that either one of the optimisation approaches is sufficient.
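In generic notation (w: relay beamforming weights, SNR(w): received SNR, P(w): total relay transmit power, with γ and P_max the respective constraint levels; the symbols are ours, not the paper's), the two problems compared are:

```latex
\begin{align}
  \text{(power minimisation)} \quad
    & \min_{w} \; P(w) \quad \text{s.t.} \quad \mathrm{SNR}(w) \ge \gamma, \\
  \text{(SNR maximisation)} \quad
    & \max_{w} \; \mathrm{SNR}(w) \quad \text{s.t.} \quad P(w) \le P_{\max}.
\end{align}
```

The equivalence asserted above means that solving either problem, with its constraint level set to the other's optimal value, yields the other's solution as well.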
Abstract:
Murine models with modified gene function as a result of N-ethyl-N-nitrosourea (ENU) mutagenesis have been used to study phenotypes resulting from genetic change. This study investigated genetic factors associated with red blood cell (RBC) physiology and structural integrity that may impact on blood component storage and transfusion outcome. Forward and reverse genetic approaches were employed with pedigrees of ENU-treated mice using a homozygous recessive breeding strategy. In a “forward genetic” approach, pedigree selection was based upon identification of an altered phenotype, followed by exome sequencing to identify a causative mutation. In a second strategy, a “reverse genetic” approach based on selection of pedigrees with mutations in genes of interest was utilised and, following breeding to homozygosity, phenotype was assessed. Thirty-three pedigrees were screened by the forward genetic approach. One pedigree demonstrated reticulocytosis, microcytic anaemia and thrombocytosis. Exome sequencing revealed a novel single nucleotide variation (SNV) in Ank1, encoding the RBC structural protein ankyrin-1, and the pedigree was designated Ank1EX34. The reticulocytosis and microcytic anaemia observed in the Ank1EX34 pedigree were similar to clinical features of hereditary spherocytosis in humans. For the reverse genetic approach, three pedigrees with different point mutations in Spnb1, encoding the RBC protein spectrin-1β, and one pedigree with a mutation in Epb4.1, encoding band 4.1, were selected for study. When bred to homozygosity, two of the spectrin-1β pedigrees (a, b) demonstrated increased RBC count, haemoglobin (Hb) and haematocrit (HCT). The third Spnb1 mutation (spectrin-1β c) and the mutation in Epb4.1 (band 4.1) did not significantly affect the haematological phenotype, despite these two mutations having PolyPhen scores predicting that they may be damaging. Exome sequencing allows rapid identification of causative mutations and the development of databases of mutations predicted to be disruptive. These tools require further refinement but provide new approaches to the study of genetically defined changes that may impact on blood component storage and transfusion outcome.
Abstract:
Voltage rise is the main issue limiting the capacity of Low Voltage (LV) networks to accommodate more Renewable Energy (RE) sources. In addition, voltage drop at peak load periods is a significant power quality concern. This paper proposes a new robust voltage support strategy based on distributed coordination of multiple distribution static synchronous compensators (DSTATCOMs). The study focuses on LV networks with PV as the RE source for customers. The proposed approach is applied to a typical LV network, and its advantages are shown by comparison with other voltage control strategies.
Abstract:
The design of a series-connected photovoltaic generator (SPVG) capable of enhancing power quality is investigated. Analysis of SPVG operation under disturbance conditions shows explicitly how the achievable network voltage quality is affected by the SPVG's injected power and apparent power rating, and that voltage quality can be significantly improved even with a modest level of energy storage capacity incorporated in the SPVG. A control system for the SPVG is also proposed. Both simulation and laboratory tests confirm the efficacy of the distributed generator system.