973 results for verifiable random function
Abstract:
A fundamental problem in the analysis of structured relational data like graphs, networks, databases, and matrices is to extract a summary of the common structure underlying relations between individual entities. Relational data are typically encoded in the form of arrays; invariance to the ordering of rows and columns corresponds to exchangeable arrays. Results in probability theory due to Aldous, Hoover and Kallenberg show that exchangeable arrays can be represented in terms of a random measurable function which constitutes the natural model parameter in a Bayesian model. We obtain a flexible yet simple Bayesian nonparametric model by placing a Gaussian process prior on the parameter function. Efficient inference utilises elliptical slice sampling combined with a random sparse approximation to the Gaussian process. We demonstrate applications of the model to network data and clarify its relation to models in the literature, several of which emerge as special cases.
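Elliptical slice sampling, the inference workhorse mentioned here, requires no step-size tuning and only likelihood evaluations. A minimal sketch for a zero-mean Gaussian process prior (function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def elliptical_slice_sample(f, log_lik, prior_chol, rng):
    """One elliptical slice sampling update for f ~ N(0, Sigma).

    prior_chol is the Cholesky factor of Sigma, so prior_chol @ z ~ N(0, Sigma).
    """
    nu = prior_chol @ rng.standard_normal(f.shape)      # auxiliary draw from the prior
    log_y = log_lik(f) + np.log(rng.uniform())          # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)               # initial proposal angle
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)  # point on the ellipse through f and nu
        if log_lik(f_new) > log_y:
            return f_new                                # accepted
        # shrink the angle bracket towards 0 and retry
        if theta < 0.0:
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)
```

The loop always terminates: as the bracket shrinks towards angle 0, the proposal approaches the current state, whose likelihood exceeds the slice threshold by construction.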
Abstract:
The instance-dependent verifiable random function is a new cryptographic notion proposed in reference [1]. It is also a powerful tool for constructing zero-knowledge protocols with strong security properties (such as resettable zero-knowledge argument systems), and such protocols have significant potential value for smart cards and electronic commerce. Based on the non-interactive ZAP proof system and the Σ-OR protocol in the random oracle model, we give two efficient constructions of instance-dependent verifiable pseudo-random functions together with the corresponding security proofs, which increases the practical value of this tool.
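For orientation, a verifiable (pseudo-)random function is specified by three algorithms. A minimal interface sketch only; no secure instantiation is implied, and the names are illustrative:

```python
from typing import Protocol, Tuple

class VRF(Protocol):
    def keygen(self) -> Tuple[bytes, bytes]:
        """Return (public key pk, secret key sk)."""
        ...

    def evaluate(self, sk: bytes, x: bytes) -> Tuple[bytes, bytes]:
        """Return the pseudo-random output y = F_sk(x) and a proof pi."""
        ...

    def verify(self, pk: bytes, x: bytes, y: bytes, pi: bytes) -> bool:
        """Check, given only pk and the proof, that y really equals F_sk(x)."""
        ...
```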
Abstract:
In Crypto'95, Micali and Sidney proposed a method for shared generation of a pseudo-random function f(·) among n players in such a way that for all inputs x, any u players can compute f(x) while t or fewer players fail to do so, where $0 \le t < u \le n$. The idea behind the Micali–Sidney scheme is to generate and distribute secret seeds $S = \{s_1, \ldots, s_d\}$ of a poly-random collection of functions among the n players, each player getting a subset of S, in such a way that any u players together hold all the secret seeds in S while any t or fewer players lack at least one element of S. The pseudo-random function is then computed as $f(x) = f_{s_1}(x) \oplus \cdots \oplus f_{s_d}(x)$, where the $f_{s_i}(\cdot)$'s are poly-random functions. One question raised by Micali and Sidney is how to distribute the secret seeds satisfying the above condition such that the number of seeds, d, is as small as possible. In this paper, we continue the work of Micali and Sidney. We first provide a general framework for shared generation of pseudo-random functions using cumulative maps. We demonstrate that the Micali–Sidney scheme is a special case of this general construction. We then derive an upper and a lower bound for d. Finally, we give a simple yet efficient greedy approximation algorithm for generating the secret seeds S in which d is within a factor of at most u ln 2 of the optimum.
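A runnable sketch of the combinatorial construction behind the scheme: one seed per t-subset of players, handed to every player outside that subset, with the shared PRF computed as the XOR of per-seed PRFs. HMAC-SHA256 stands in for the poly-random functions, and this is an illustration rather than the paper's optimized seed distribution:

```python
import hmac
import hashlib
import os
from itertools import combinations

n, t = 5, 2                       # any t players fail, any u >= t+1 players succeed

# One seed per t-subset of players, given to every player NOT in that subset.
seeds = {T: os.urandom(16) for T in combinations(range(n), t)}
holdings = {p: {T: s for T, s in seeds.items() if p not in T} for p in range(n)}

def f(seed_set, x: bytes) -> bytes:
    """Shared PRF: XOR of per-seed PRFs (HMAC-SHA256 as the poly-random function)."""
    out = bytes(32)
    for s in seed_set.values():
        out = bytes(a ^ b for a, b in zip(out, hmac.new(s, x, hashlib.sha256).digest()))
    return out

# Any u = 3 players jointly hold all seeds and so can evaluate f ...
coalition = {T: s for p in (0, 1, 2) for T, s in holdings[p].items()}
assert len(coalition) == len(seeds)
assert f(coalition, b"input") == f(seeds, b"input")
# ... while any t = 2 players miss the seed indexed by their own pair.
pair = {T: s for p in (0, 1) for T, s in holdings[p].items()}
assert len(pair) < len(seeds)
```

Here d = C(n, t) = 10 seeds; the paper's question is how much smaller d can be made.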
Abstract:
We explore the semi-classical structure of the Wigner functions $\Psi(q, p)$ representing bound energy eigenstates $|\psi\rangle$ for systems with f degrees of freedom. If the classical motion is integrable, the classical limit of $\Psi$ is a delta function on the f-dimensional torus to which classical trajectories corresponding to $|\psi\rangle$ are confined in the 2f-dimensional phase space. In the semi-classical limit of $\Psi$ ($\hslash$ small but not zero) the delta function softens to a peak of order $\hslash^{-\frac{2}{3}f}$ and the torus develops fringes of a characteristic 'Airy' form. Away from the torus, $\Psi$ can have semi-classical singularities that are not delta functions; these are discussed (in full detail when f = 1) using Thom's theory of catastrophes. Brief consideration is given to problems raised when $\Psi$ is calculated in a representation based on operators derived from angle coordinates and their conjugate momenta. When the classical motion is non-integrable, the phase space is not filled with tori and existing semi-classical methods fail. We conjecture that (a) for a given value of the non-integrability parameter $\epsilon$, the system passes through three semi-classical regimes as $\hslash$ diminishes; (b) for states $|\psi\rangle$ associated with regions in phase space filled with irregular trajectories, $\Psi$ will be a random function confined near the region of the 'energy shell' explored by these trajectories (this region has more than f dimensions); (c) for $\epsilon \neq 0$, $\hslash$ blurs the infinitely fine classical path structure, in contrast to the integrable case $\epsilon = 0$, where $\hslash$ imposes oscillatory quantum detail on a smooth classical path structure.
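The Airy softening described here can be written schematically for f = 1, with prefactors omitted; the precise form of the argument below is an assumption based on the abstract's description rather than the paper's formula:

$$\Psi(q,p)\;\sim\;\hslash^{-2/3}\,\mathrm{Ai}\!\left(\frac{A(q,p)}{\hslash^{2/3}}\right),$$

where $A(q,p)$ vanishes on the classical torus: as $\hslash \to 0$ this concentrates into the delta function on the torus, while at small finite $\hslash$ it gives a peak of height $O(\hslash^{-2/3})$ with oscillatory Airy fringes on one side of the torus.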
Abstract:
Randomness in the source condition, in addition to heterogeneity in the system parameters, can be a major source of uncertainty in the concentration field. Hence, a more general formulation of the problem is necessary to account for randomness in both the source condition and the system parameters. When the source varies with time, the unsteady problem can be solved using the unit response function. When the system parameters are random, the response function itself becomes a random function that depends on the randomness in those parameters. In the present study, the source is modelled as a random discrete process with either a fixed interval or a random interval (the Poisson process). An attempt is made to assess the relative effects of various types of source uncertainty on the probabilistic behaviour of the concentration in a porous medium while the system parameters are also modelled as random fields. Analytical expressions for the mean and covariance of the concentration due to a random discrete source are derived in terms of the mean and covariance of the unit response function. The probabilistic behaviour of the random response function is obtained using a perturbation-based stochastic finite element method (SFEM), which performs well for mild heterogeneity. The proposed method is applied to both 1-D and 3-D solute transport problems, and for the 1-D problems the SFEM results are compared with Monte Carlo simulation.
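The superposition at the heart of this formulation is simple to state: for a discrete source releasing random masses $q_k$ at times $t_k$, the concentration is $c(t) = \sum_k q_k\, h(t - t_k)$ with $h$ the unit response function. A minimal Monte Carlo sketch; the exponentially decaying $h$ and the parameter values are illustrative assumptions, not the paper's SFEM machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_response(t, tau=5.0):
    """Illustrative unit response function: first-order exponential decay (an assumption)."""
    return np.where(t >= 0.0, np.exp(-np.clip(t, 0.0, None) / tau) / tau, 0.0)

def one_realisation(t_grid, rate=0.2, mean_mass=1.0):
    """Poisson release times and random masses, superposed through the response function."""
    n_events = rng.poisson(rate * t_grid[-1])
    release_times = rng.uniform(0.0, t_grid[-1], n_events)
    masses = rng.exponential(mean_mass, n_events)
    c = np.zeros_like(t_grid)
    for q_k, t_k in zip(masses, release_times):
        c += q_k * unit_response(t_grid - t_k)
    return c

t_grid = np.linspace(0.0, 100.0, 1001)
samples = np.array([one_realisation(t_grid) for _ in range(2000)])
mean_c = samples.mean(axis=0)                     # Monte Carlo mean concentration
cov_c = np.cov(samples[:, 500], samples[:, 600])  # covariance between two observation times
```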
Abstract:
Denial-of-service (DoS) attacks are a growing concern for networked services like the Internet. In recent years, major Internet e-commerce and government sites have been disabled by various DoS attacks. A common form of DoS attack is the resource depletion attack, in which an attacker tries to overload the server's resources, such as memory or computational power, rendering the server unable to service honest clients. A promising way to deal with this problem is for a defending server to identify and segregate malicious traffic as early as possible. Client puzzles, also known as proofs of work, have been shown to be a promising tool to thwart DoS attacks in network protocols, particularly in authentication protocols. In this thesis, we design efficient client puzzles and propose a stronger security model for analysing them. We revisit a few key establishment protocols to analyse their DoS-resilience properties and strengthen them using existing and novel techniques. Our contributions are manifold. We propose an efficient client puzzle whose security holds in the standard model under new computational assumptions. Assuming the presence of powerful DoS attackers, we find a weakness in the most recent security model proposed for analysing client puzzles, and this study leads us to introduce a better security model. We demonstrate the utility of our new security definitions by constructing two stronger hash-based client puzzles. We also show that, using stronger client puzzles, any protocol can be converted into a provably secure DoS-resilient key exchange protocol. Among our other contributions, we analyse the DoS-resilience properties of network protocols such as Just Fast Keying (JFK) and Transport Layer Security (TLS). In the JFK protocol, we identify a new DoS attack by applying Meadows' cost-based framework, and we prove that the original security claim of JFK does not hold. We then combine an existing technique to reduce the server cost and prove that the new variant of JFK achieves perfect forward secrecy (a property not achieved by the original JFK protocol) and is secure under the original security assumptions of JFK. Finally, we introduce a novel cost-shifting technique which significantly reduces the computational cost of the server, and we apply it to TLS, one of the most important network protocols, to analyse the security of the resulting protocol. We also observe that the cost-shifting technique can be incorporated into any Diffie–Hellman based key exchange protocol to reduce the Diffie–Hellman exponentiation cost of a party by one multiplication and one addition.
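Hash-based client puzzles of the kind discussed here typically ask the client to find a partial hash preimage, so the server can verify with a single hash. A minimal sketch; the parameters and message layout are illustrative, not the thesis's construction:

```python
import hashlib
import os

def new_puzzle(difficulty_bits=16):
    """Server side: a fresh random challenge; verification will cost one hash."""
    return os.urandom(16), difficulty_bits

def solve(challenge, difficulty_bits):
    """Client side: brute-force a nonce until the hash has difficulty_bits leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge, difficulty_bits, nonce):
    """Server side: a single hash checks the client's work."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

challenge, bits = new_puzzle()
assert verify(challenge, bits, solve(challenge, bits))
```

The asymmetry is the point: the client performs roughly $2^{\text{bits}}$ hashes on average while the server performs one, which is what lets a defending server ration its expensive operations under attack.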
Abstract:
The coalescence of nearly rigid liquid droplets in a turbulent flow field is viewed as the drainage of a thin film of liquid under the action of a stochastic force representing the effect of turbulence. The force squeezing the drop pair is modelled as a correlated random function of time. The drops are assumed to coalesce once the film thickness becomes smaller than a critical thickness, and are regarded as separated once their distance of separation exceeds a prescribed distance. A semi-analytical solution is derived to determine the coalescence efficiency, and its validity is established via a Monte Carlo solution scheme. The model predicts that the dependence of the coalescence efficiency on the drop radii, the film liquid viscosity and the turbulence energy dissipation per unit mass reverses as the relative fluctuation increases. However, the dependence on physical parameters is weak (especially at high relative fluctuation), so for the smallest droplets (which are nearly rigid) the coalescence efficiency may be treated as an empirical constant. The predictions of this model are compared with those of a white-noise force model. The results of this paper and those of Muralidhar and Ramkrishna (1986, Ind. Engng Chem. Fundam. 25, 554-56) suggest that dynamic drop deformation is the key factor influencing the coalescence efficiency.
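A correlated random function of time is commonly modelled as an Ornstein–Uhlenbeck process. A toy Monte Carlo sketch of film drainage driven by such a force, with absorbing barriers at the critical and separation thicknesses; the linear drainage law and all parameter values are assumptions for illustration, not the paper's film-drainage model:

```python
import numpy as np

rng = np.random.default_rng(1)

def coalescence_probability(n_paths=2000, dt=1e-2, t_max=10.0,
                            h0=1.0, h_crit=0.1, h_sep=2.0,
                            tau=0.5, sigma=1.0, mobility=0.5):
    """Fraction of drop pairs whose film drains to h_crit before they separate."""
    coalesced = 0
    for _ in range(n_paths):
        h, F = h0, 0.0
        for _ in range(int(t_max / dt)):
            # Ornstein-Uhlenbeck update: squeezing force with correlation time tau
            F += (-F / tau) * dt + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
            # Toy drainage law: film thins when squeezed, thickens when pulled apart
            h -= mobility * F * dt
            if h <= h_crit:          # film reached critical thickness: coalescence
                coalesced += 1
                break
            if h >= h_sep:           # drops drifted apart: separation
                break
    return coalesced / n_paths

print(coalescence_probability())
```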
Abstract:
This paper studies the long-time behavior of the empirical distribution of age and normalized position of an age-dependent supercritical branching Markov process. The motion of each individual during its life is a random function of its age. It is shown that the empirical distribution of the age and the normalized position of all individuals alive at time $t$ converges as $t \to \infty$ to a deterministic product measure.
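Written schematically, with $N_t$ the number of individuals alive at time $t$, $a_i(t)$ their ages and $X_i(t)$ their positions, the result has the shape below; the $\sqrt{t}$ normalization is an assumption for illustration, as the abstract does not specify the scaling:

$$\frac{1}{N_t}\sum_{i=1}^{N_t}\delta_{\left(a_i(t),\,X_i(t)/\sqrt{t}\right)}\;\xrightarrow[t\to\infty]{}\;\pi(\mathrm{d}a)\otimes\mu(\mathrm{d}x),$$

with $\pi$ a deterministic age distribution and $\mu$ a deterministic position law, the product form expressing asymptotic independence of age and normalized position.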
Abstract:
The paper outlines a technique for sensitive measurement of conduction phenomena in liquid dielectrics. The special features of this technique are the simplicity of the electrical system, the inexpensive instrumentation and the high accuracy. The basis of the investigation is the detection, separation and analysis of a random function of current superimposed on the prebreakdown direct current, where the prebreakdown direct current is the output of a test cell with large electrodes immersed in a liquid medium subjected to high direct voltages. Measuring the probability-distribution function of the random fluctuating component of the current gives insight into the mechanism of conduction in a liquid medium under high voltage and into the processes responsible for the existence of the fluctuating component.
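The analysis step amounts to separating the fluctuating component from the DC level and estimating its probability distribution. A minimal sketch, assuming a slowly varying DC baseline that a moving average can track:

```python
import numpy as np

def fluctuation_pdf(current, window=501, bins=100):
    """Split a measured current into DC baseline + fluctuation, then histogram the fluctuation."""
    kernel = np.ones(window) / window
    dc = np.convolve(current, kernel, mode="same")    # moving-average estimate of the DC level
    fluctuation = current - dc                        # superimposed random component
    pdf, edges = np.histogram(fluctuation, bins=bins, density=True)
    return pdf, 0.5 * (edges[:-1] + edges[1:])        # density values and bin centres

# Synthetic example: a DC level with weak random fluctuations superimposed
t = np.linspace(0.0, 10.0, 20000)
current = 1e-9 * (1.0 + 0.05 * np.random.default_rng(3).standard_normal(t.size))
pdf, centres = fluctuation_pdf(current)
```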
Abstract:
In this paper, we propose a centralized multicast authentication protocol (MAP) for dynamic multicast groups in wireless networks. In our protocol, a multicast group is defined only at the time of the multicasting. The authentication server (AS) in the network generates a session key and authenticates it to each member of a multicast group using the computationally inexpensive least common multiple (LCM) method. In addition, a pseudo-random function (PRF) is used to bind the secret keys of the network members to their identities. This relieves the AS from storing per-member secrets in its memory, making the scheme completely storage-scalable. The protocol minimizes the load on the network members by shifting the computational tasks towards the AS node as far as possible. The protocol possesses a membership revocation mechanism and is protected against replay and brute-force attacks. Analytical and simulation results confirm the effectiveness of the proposed protocol.
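The PRF binding described here can be realized with any keyed PRF. A minimal sketch using HMAC-SHA256 as the PRF; the derivation layout and names are assumptions, not the paper's exact construction:

```python
import hmac
import hashlib

def member_key(as_master_key: bytes, member_id: bytes) -> bytes:
    """Derive a member's secret from its identity, so the AS stores only the master key."""
    return hmac.new(as_master_key, member_id, hashlib.sha256).digest()

master = b"\x00" * 32                      # AS master secret (placeholder value)
k_node = member_key(master, b"node-17")    # recomputable on demand: no per-member storage
```

This is what makes the scheme storage-scalable: the AS recomputes any member's key from its identity instead of keeping a table of per-member secrets.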
Abstract:
In this paper, we propose a novel authentication protocol for MANETs that require stronger security. The protocol works on a two-tier network architecture with client nodes and authentication server nodes, and supports dynamic membership. We use an external membership granting server (MGS) to provide stronger security with dynamic membership. However, the external MGS in our protocol is semi-online rather than online: the MGS cannot initiate a connection with a network node, but any network node can communicate with the MGS whenever required. To ensure efficiency, the protocol implements the authentication service with symmetric-key cryptography. To achieve storage scalability, however, the protocol uses a pseudo-random function (PRF) to bind the secret key of a client to its identity using the secret key of its server. In addition, the protocol possesses an efficient server revocation mechanism along with an efficient server re-assignment mechanism, which makes the protocol robust against server node compromise.
Abstract:
The inhomogeneous Poisson process is a point process that has varying intensity across its domain (usually time or space). For nonparametric Bayesian modeling, the Gaussian process is a useful way to place a prior distribution on this intensity. The combination of a Poisson process and a GP is known as a Gaussian Cox process, or doubly-stochastic Poisson process. Likelihood-based inference in these models requires an intractable integral over an infinite-dimensional random function. In this paper we present the first approach to Gaussian Cox processes in which it is possible to perform inference without introducing approximations or finite-dimensional proxy distributions. We call our method the Sigmoidal Gaussian Cox Process, which uses a generative model for Poisson data to enable tractable inference via Markov chain Monte Carlo. We compare our method to competing methods on synthetic data and apply it to several real-world data sets.
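The generative model that makes inference tractable is thinning: draw a homogeneous Poisson process at an upper-bound rate $\lambda^*$, then keep each point with probability given by a sigmoid-transformed GP, so the effective intensity is $\lambda(t) = \lambda^* \sigma(g(t))$. A minimal forward-sampling sketch in one dimension; the kernel choice and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_sgcp(t_max=10.0, lam_star=5.0, length_scale=1.0):
    """Draw points from a Sigmoidal Gaussian Cox Process on [0, t_max] by thinning."""
    n = rng.poisson(lam_star * t_max)
    t = np.sort(rng.uniform(0.0, t_max, n))           # homogeneous Poisson proposals
    # GP draw at the proposal locations (squared-exponential kernel, illustrative)
    d = t[:, None] - t[None, :]
    K = np.exp(-0.5 * (d / length_scale) ** 2) + 1e-8 * np.eye(n)
    g = np.linalg.cholesky(K) @ rng.standard_normal(n)
    keep = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-g))   # sigmoid thinning
    return t[keep]

events = sample_sgcp()
```

Because the sigmoid bounds the intensity by $\lambda^*$, the thinned points are an exact draw from the model, which is what lets the paper's MCMC avoid finite-dimensional approximations.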