957 results for Quadratic polynomial
Abstract:
The increasing use of Hardware Security Modules (HSMs) for identification and authentication of security endpoints has raised numerous privacy and security concerns. HSMs can tie a system or an object, along with its users, to the physical world. However, this enables tracking of the user and/or object associated with the HSM. Current systems do not adequately address these privacy needs and are therefore susceptible to various attacks. In this work, we analyse various security and privacy concerns that arise when deploying such hardware security modules and propose a system that allows users to create pseudonyms from a trusted master public-secret key pair. The proposed system is based on the intractability of factoring and of finding square roots of a quadratic residue modulo a composite number, where the composite number is a product of two large primes. Along with the standard notion of protecting the privacy of a user, the proposed system offers colligation between seemingly independent pseudonyms. This new property, when combined with HSMs that store the master secret key, is extremely beneficial to a user, as it offers a convenient way to generate a large number of pseudonyms using relatively small storage requirements.
Abstract:
A pseudonym provides anonymity by protecting the identity of a legitimate user. A user with a pseudonym can interact with an unknown entity and be confident that his/her identity remains secret even if the other entity is dishonest. In this work, we present a system that allows users to create pseudonyms from a trusted master public-secret key pair. The proposed system is based on the intractability of factoring and of finding square roots of a quadratic residue modulo a composite number, where the composite number is a product of two large primes. Our proposal differs from previously published pseudonym systems in that, in addition to the standard notion of protecting the privacy of a user, our system offers colligation between seemingly independent pseudonyms. This new property, when combined with a trusted platform that stores the master secret key, is extremely beneficial to a user, as it offers a convenient way to generate a large number of pseudonyms using relatively small storage.
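The abstract does not spell out the construction, but the colligation property can be illustrated with the number theory it names: repeated squaring modulo a composite n = pq links pseudonyms in a chain that only the holder of the factorization can walk backwards. A minimal, purely illustrative Python sketch (the toy primes, the master secret value and the derivation rule below are all assumptions, not the paper's scheme):

```python
p, q = 10007, 10039            # toy Blum primes (both 3 mod 4); a real system needs ~1024-bit primes
n = p * q                      # public modulus; factoring n is assumed hard
master_secret = 123456789 % n  # trusted master secret (illustrative value)

def pseudonym(k: int) -> int:
    """k-th pseudonym: square the master secret k+1 times modulo n.
    Anyone can step forward (square), but linking pseudonyms backwards
    requires square roots mod n, i.e. the factorization of n."""
    x = master_secret
    for _ in range(k + 1):
        x = pow(x, 2, n)
    return x

print([pseudonym(k) for k in range(3)])
```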
Abstract:
Motivated by the need for private set operations in a distributed environment, we extend the two-party private matching problem proposed by Freedman, Nissim and Pinkas (FNP) at Eurocrypt'04 to the distributed setting. Using a secret sharing scheme, we provide a distributed solution of the FNP private matching, called distributed private matching. In our scheme, we represent one party's dataset as a polynomial, as in FNP, and then distribute the polynomial to multiple servers. We extend our solution to distributed set intersection and the cardinality of the intersection, and we further show how to apply distributed private matching to compute the distributed subset relation. Our work extends the private matching and set intersection primitives of Freedman et al. Our distributed construction may be of great value when the dataset is outsourced and its privacy is the main concern: our distributed solutions preserve the utility of these set operations without compromising dataset privacy. Compared with previous works, we achieve a more efficient solution in terms of computation. All protocols constructed in this paper are provably secure against a semi-honest adversary under the Decisional Diffie-Hellman assumption.
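For background, the FNP building block that this work distributes is the representation of a set as the roots of a polynomial. A minimal sketch of that representation (the homomorphic encryption of the coefficients and their secret sharing across servers, which the paper relies on, are omitted here):

```python
import random

PRIME = 2**61 - 1  # arithmetic over a prime field

def set_polynomial(dataset):
    """Coefficients (lowest degree first) of P(x) = prod_{a in dataset} (x - a)
    mod PRIME; P(y) = 0 exactly when y is in the dataset."""
    coeffs = [1]
    for a in dataset:
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):       # multiply current poly by (x - a)
            new[i + 1] = (new[i + 1] + c) % PRIME
            new[i] = (new[i] - a * c) % PRIME
        coeffs = new
    return coeffs

def evaluate(coeffs, y):
    return sum(c * pow(y, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

def match_response(coeffs, y):
    """FNP-style reply: r*P(y) + y equals y on a match, looks random otherwise."""
    r = random.randrange(1, PRIME)
    return (r * evaluate(coeffs, y) + y) % PRIME

coeffs = set_polynomial({3, 7, 11})
print(match_response(coeffs, 7))   # 7: element is in the set
print(match_response(coeffs, 8))   # random-looking: no match
```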
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions with the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that would result if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question of the optimality of the CE principle. We show that CE is, in general, not optimal. We also analyse the possibility of obtaining truly optimal solutions for single input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near optimal performance. We thus advocate this approach in real applications.
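As a rough illustration of CE under constraints, the following toy sketch estimates a scalar state with a fixed-gain observer and then clips the deterministic feedback law applied to the estimate; the plant, gains and noise levels are all invented for illustration and are not from the chapter:

```python
import random

a, b, c = 0.9, 1.0, 1.0   # scalar plant x+ = a x + b u + w, output y = c x + v
K = 0.5                   # feedback gain from the deterministic (no-noise) design
L = 0.6                   # fixed observer gain (assumed, not optimized)
u_max = 0.3               # input constraint |u| <= u_max

x, x_hat = 1.0, 0.0       # true state and its estimate
for t in range(20):
    y = c * x + random.gauss(0, 0.05)           # noisy measurement
    x_hat = x_hat + L * (y - c * x_hat)         # correct the estimate
    u = max(-u_max, min(u_max, -K * x_hat))     # CE: deterministic law, clipped
    x = a * x + b * u + random.gauss(0, 0.02)   # true plant with disturbance
    x_hat = a * x_hat + b * u                   # predict the next estimate
    print(f"t={t:2d}  x={x:+.3f}  x_hat={x_hat:+.3f}  u={u:+.3f}")
```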
Abstract:
This paper presents a method for the estimation of thrust model parameters of uninhabited airborne systems using specific flight tests. Particular tests are proposed to simplify the estimation. The proposed estimation method is based on three steps. The first step uses a regression model in which the thrust is assumed constant. This allows us to obtain biased initial estimates of the aerodynamic coefficients of the surge model. In the second step, a robust nonlinear state estimator is implemented using the initial parameter estimates, and the model is augmented by considering the thrust as a random walk. In the third step, the estimate of the thrust obtained by the observer is used to fit a polynomial model in terms of the propeller advance ratio. We consider a numerical example based on Monte Carlo simulations to quantify the sampling properties of the proposed estimator under realistic flight conditions.
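A sketch of the third step, under the assumption of a quadratic thrust model in the advance ratio J (the synthetic "observer output" below stands in for the estimates produced by the nonlinear state estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
J = np.linspace(0.2, 0.9, 40)                          # advance ratio samples
thrust_true = 5.0 - 2.0 * J - 3.0 * J**2               # assumed quadratic model
thrust_est = thrust_true + rng.normal(0, 0.1, J.size)  # stand-in observer output

coeffs = np.polyfit(J, thrust_est, deg=2)              # least-squares polynomial fit
print("fitted thrust model:\n", np.poly1d(coeffs))
```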
Abstract:
The relationship between temperature and mortality is non-linear, and the effect estimates depend on the threshold temperatures selected. However, little is known about whether threshold temperatures differ with age or cause of death in the Southern Hemisphere. We fitted polynomial distributed lag non-linear models to assess the threshold temperatures for mortality for all ages (Dall), ages 15-64 (D15-64), 65-84 (D65-84) and ≥85 years (D85+), and for respiratory (RD) and cardiovascular diseases (CVD) in Brisbane, Australia, 1996-2004. We examined both hot and cold thresholds, with lags of up to 15 days for cold effects and 3 days for hot effects. Results show that for the current day, the cold threshold was 20°C and the hot threshold was 28°C for the groups Dall, D15-64 and D85+. The cold threshold was higher (23°C) for the group D65-84 and lower (21°C) for the group CVD. The hot threshold was higher (29°C) for the group D65-84 and lower (27°C) for the group RD. Compared to the current day, for the cold effects of up to 15-day lags, the threshold was lower for the group D15-64 and higher for the groups D65-84, D85+, RD and CVD; for the hot effects of 3-day lags, the threshold was higher for the group D15-64 and lower for the groups D65-84 and RD. Temperature thresholds appeared to differ with age and death category. The elderly and deaths from RD and CVD were more sensitive to temperature stress than the adult group. These findings may have implications for the assessment of temperature-related mortality and the development of weather/health warning systems.
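The threshold terms such models rest on can be written down directly; a minimal sketch using the current-day thresholds reported above (20°C cold, 28°C hot), with the lag structure and the outcome regression omitted:

```python
# Threshold ("hockey-stick") temperature terms used in threshold regression.
def cold_excess(t, cold_threshold=20.0):
    return max(cold_threshold - t, 0.0)  # degrees below the cold threshold

def hot_excess(t, hot_threshold=28.0):
    return max(t - hot_threshold, 0.0)   # degrees above the hot threshold

for t in (12.0, 24.0, 31.0):
    print(t, cold_excess(t), hot_excess(t))
```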
Abstract:
Information and communications technologies are a significant component of the healthcare domain, and electronic health records play a major role in it. Therefore, it is important that they are accepted en masse by healthcare professionals. How healthcare professionals perceive the usefulness of electronic health records, and their attitudes towards them, have been shown to have significant effects on overall acceptance in many healthcare systems around the world. This paper investigates the role of perceived usefulness and attitude on the intention to use electronic health records by future healthcare professionals, using polynomial regression with response surface analysis. Results show that the relationships between these variables are more complex than predicted in prior research. The paper concludes that the properties of the above determinants must be further investigated to clearly understand: (i) their role in predicting the intention to use electronic health records; and (ii) their role in designing systems that are better adopted by healthcare professionals of the future.
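A minimal sketch of the second-order polynomial regression that response surface analysis is built on, with synthetic data and hypothetical predictor names (PU for perceived usefulness, ATT for attitude):

```python
import numpy as np

rng = np.random.default_rng(1)
pu = rng.uniform(1, 7, 200)     # perceived usefulness (Likert-like scale)
att = rng.uniform(1, 7, 200)    # attitude
intention = 1 + 0.5 * pu + 0.4 * att + 0.05 * pu * att + rng.normal(0, 0.3, 200)

# intention ~ b0 + b1*PU + b2*ATT + b3*PU^2 + b4*PU*ATT + b5*ATT^2
X = np.column_stack([np.ones_like(pu), pu, att, pu**2, pu * att, att**2])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(dict(zip(["b0", "PU", "ATT", "PU^2", "PUxATT", "ATT^2"], beta.round(3))))
```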
Abstract:
Digital signatures are often used by trusted authorities to make unique bindings between a subject and a digital object; for example, certificate authorities certify that a public key belongs to a domain name, and time-stamping authorities certify that a certain piece of information existed at a certain time. Traditional digital signature schemes, however, impose no uniqueness conditions, so a trusted authority could make multiple certifications for the same subject but different objects, be it intentionally, by accident, or following a (legal or illegal) coercion. We propose the notion of a double-authentication-preventing signature, in which a value to be signed is split into two parts: a subject and a message. If a signer ever signs two different messages for the same subject, enough information is revealed to allow anyone to compute valid signatures on behalf of the signer. This double-signature forgeability property discourages signers from misbehaving (a form of self-enforcement) and would give binding authorities like CAs some cryptographic arguments to resist legal coercion. We give a generic construction using a new type of trapdoor function with extractability properties, which we show can be instantiated using the group of sign-agnostic quadratic residues modulo a Blum integer.
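The extractability that quadratic-residue constructions provide rests on a classical fact: two essentially different square roots of the same value modulo a Blum integer N reveal a factor of N. A small sketch of that underlying fact (not of the paper's full signature scheme; the toy primes and values are assumptions):

```python
from math import gcd

p, q = 10007, 10039          # toy Blum primes (both congruent to 3 mod 4)
N = p * q                    # public modulus

def crt(a_p, a_q):
    """The unique value mod N that equals a_p mod p and a_q mod q."""
    return (a_p * q * pow(q, -1, p) + a_q * p * pow(p, -1, q)) % N

r = 424242
x = r % N                    # one square root of r^2 mod N
y = crt(r % p, (-r) % q)     # a second, essentially different square root

assert pow(x, 2, N) == pow(y, 2, N) and y not in (x, N - x)
print(gcd(x - y, N))         # a nontrivial factor of N (here 10007)
```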
Abstract:
We first survey state-of-the-art stream authentication schemes for the multicast environment and classify them into signing and MAC approaches. We then introduce a new approach to authenticating digital streams using threshold techniques. Its main advantages are tolerance of packet loss, up to a threshold number of packets, and minimal space overhead. It is most suitable for multicast applications running over lossy, unreliable communication channels while at the same time maintaining the security requirements. We use linear equations based on Lagrange polynomial interpolation together with combinatorial design methods.
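The threshold idea can be sketched as follows: encode k data words as a degree-(k-1) polynomial, transmit n > k evaluations as packets, and reconstruct from any k received packets by Lagrange interpolation, tolerating up to n - k losses. A toy sketch (the field size and packetization below are illustrative, not the paper's parameters):

```python
PRIME = 2**31 - 1  # prime field for exact arithmetic

def lagrange(points, x):
    """Evaluate the unique degree-(k-1) polynomial through `points` at x."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

data = [101, 202, 303]                     # k = 3 data words: P(1), P(2), P(3)
k, n = len(data), 6
points = [(i + 1, data[i]) for i in range(k)]
packets = [(x, lagrange(points, x)) for x in range(1, n + 1)]  # n packets sent

received = [packets[0], packets[3], packets[5]]     # any k survive the channel
recovered = [lagrange(received, x) for x in range(1, k + 1)]
print(recovered)                                    # [101, 202, 303]
```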
Abstract:
Several recently proposed ciphers, for example Rijndael and Serpent, are built with layers of small S-boxes interconnected by linear key-dependent layers. Their security relies on the fact that the classical methods of cryptanalysis (e.g. linear or differential attacks) are based on probabilistic characteristics, which makes their security grow exponentially with the number of rounds Nr. In this paper we study the security of such ciphers under an additional hypothesis: the S-box can be described by an overdefined system of algebraic equations (true with probability 1). We show that this is true for both Serpent (due to the small size of its S-boxes) and Rijndael (due to unexpected algebraic properties). We study general methods known for solving overdefined systems of equations, such as XL from Eurocrypt'00, and show their inefficiency. We then introduce a new method called XSL that uses the sparsity of the equations and their specific structure. The XSL attack uses only relations true with probability 1, and thus the security does not have to grow exponentially in the number of rounds. XSL has a parameter P, and from our estimations it seems that P should be a constant or grow very slowly with the number of rounds. The XSL attack would then be polynomial (or subexponential) in Nr, with a huge constant that is double-exponential in the size of the S-box. The exact complexity of such attacks is not known due to the redundant equations. Though the presented version of the XSL attack always costs more than exhaustive search for Rijndael, it seems to (marginally) break 256-bit Serpent. We suggest a new criterion for the design of S-boxes in block ciphers: they should not be describable by a system of polynomial equations that is too small or too overdefined.
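What "overdefined" means can be made concrete on a toy example: collect all quadratic monomials in the input and output bits of a small S-box and compute the null space of the resulting matrix over GF(2); every null-space vector is an equation that holds with probability 1. The 3-bit S-box below is an arbitrary stand-in, not Serpent's or Rijndael's:

```python
from itertools import combinations

SBOX = [0, 1, 3, 6, 7, 4, 5, 2]  # toy 3-bit S-box (an assumption for illustration)

def bits(v, n=3):
    return [(v >> i) & 1 for i in range(n)]

rows = []
for x in range(8):
    var = bits(x) + bits(SBOX[x])                       # x0..x2, y0..y2
    mono = [1] + var + [a & b for a, b in combinations(var, 2)]
    rows.append(mono)                                   # one row per input

# Gaussian elimination over GF(2); the number of independent quadratic
# equations satisfied by the S-box is (#monomials - rank).
ncols = len(rows[0])
mat = [row[:] for row in rows]
rank = 0
for col in range(ncols):
    pivot = next((r for r in range(rank, len(mat)) if mat[r][col]), None)
    if pivot is None:
        continue
    mat[rank], mat[pivot] = mat[pivot], mat[rank]
    for r in range(len(mat)):
        if r != rank and mat[r][col]:
            mat[r] = [a ^ b for a, b in zip(mat[r], mat[rank])]
    rank += 1

print(f"{ncols} monomials, rank {rank} -> {ncols - rank} quadratic equations")
```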
Abstract:
We present a distinguishing attack against SOBER-128 with linear masking. We found a linear approximation which has a bias of 2^-8.8 for the non-linear filter. The attack applies the observation made by Ekdahl and Johansson that there is a sequence of clocks for which the linear combination of some states vanishes. This linear dependency allows the linear masking method to be applied. We also show that the bias of the distinguisher can be improved (or estimated more precisely) by considering quadratic terms of the approximation. The probability bias of the quadratic approximation used in the distinguisher is estimated to be O(2^-51.8), so we claim that SOBER-128 is distinguishable from a truly random cipher by observing O(2^103.6) keystream words.
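The keystream figure follows from the standard rule of thumb that a distinguisher needs on the order of bias^-2 samples, which a one-line computation confirms:

```python
bias_exp = -51.8                 # quadratic approximation bias of 2^-51.8
samples_exp = -2 * bias_exp      # data complexity ~ bias^-2
print(f"keystream words needed ~ 2^{samples_exp:g}")  # 2^103.6
```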
Abstract:
We study the multicast stream authentication problem when an opponent can drop, reorder and inject data packets into the communication channel. In this context, bandwidth limitation and fast authentication are the core concerns. Therefore, any authentication scheme should reduce as much as possible both the packet overhead and the time spent at the receiver to check the authenticity of collected elements. Recently, Tartary and Wang developed a provably secure protocol with small packet overhead and a reduced number of signature verifications to be performed at the receiver. In this paper, we propose a hybrid scheme based on Tartary and Wang's approach and Merkle hash trees. Our construction exhibits a smaller overhead and much faster processing at the receiver, making it even more suitable for multicast than the earlier approach. Like Tartary and Wang's protocol, our construction is provably secure and allows the total recovery of the data stream despite erasures and injections occurring during transmission.
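The Merkle-tree ingredient works as follows: hash every packet, build a binary hash tree, sign only the root, and attach to each packet the sibling hashes needed to recompute the root. A self-contained sketch (the signature itself and Tartary and Wang's erasure/injection handling are omitted):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

packets = [f"packet-{i}".encode() for i in range(8)]
level = [h(p) for p in packets]
tree = [level]
while len(level) > 1:
    level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    tree.append(level)
root = tree[-1][0]                # the sender signs this single value

def auth_path(index):
    """Sibling hashes needed to recompute the root from one leaf."""
    path = []
    for lvl in tree[:-1]:
        path.append((index & 1, lvl[index ^ 1]))
        index //= 2
    return path

def verify(packet, path):
    node = h(packet)
    for is_right, sibling in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root           # log2(n) hashes per packet verification

print(verify(packets[5], auth_path(5)))  # True
```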
Abstract:
Purpose: This study explores recent claims that humans exhibit a minimum cost of transport (CoTmin) for running which occurs at an intermediate speed, and assesses individual physiological, gait and training characteristics. Methods: Twelve healthy participants with varying levels of fitness and running experience ran on a treadmill at six self-selected speeds in a discontinuous protocol over three sessions. Running speed (km·hr⁻¹), V̇O2 (mL·kg⁻¹·km⁻¹), CoT (kcal·km⁻¹), heart rate (beats·min⁻¹) and cadence (steps·min⁻¹) were continuously measured. V̇O2 max was measured in a fourth testing session. The occurrence of a CoTmin was investigated and its presence or absence examined with respect to fitness, gait and training characteristics. Results: Five participants showed a clear CoTmin at an intermediate speed and a statistically significant (p < 0.05) quadratic CoT-speed function, while the other participants did not show such evidence. Participants were then categorized and compared with respect to the strength of evidence for a CoTmin (ClearCoTmin and NoCoTmin). The ClearCoTmin group displayed a significantly higher correlation between speed and cadence, and more endurance training and exercise sessions per week, than the NoCoTmin group, as well as a marginally non-significant but higher aerobic capacity. Some runners still showed a CoTmin at an intermediate speed even after subtraction of resting energy expenditure. Conclusion: The findings confirm the existence of an optimal speed for human running in some, but not all, participants. Those exhibiting a CoTmin undertook a higher volume of running, ran with a cadence that was more consistently modulated with speed, and tended to be aerobically fitter. The ability to minimise the energetic cost of transport appears not to be a ubiquitous feature of human running but may emerge in some individuals with extensive running experience.
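Evidence for a CoTmin of this kind is a convex quadratic fit of CoT against speed with an interior vertex at -b/(2a); a small sketch with invented data:

```python
import numpy as np

speed = np.array([7.0, 8.0, 9.0, 10.0, 11.0, 12.0])   # km/h
cot = np.array([1.10, 1.04, 1.01, 1.00, 1.03, 1.08])  # kcal/km (synthetic)

a, b, c = np.polyfit(speed, cot, deg=2)                # quadratic CoT-speed fit
if a > 0:                                              # convex: interior minimum
    print(f"estimated CoT minimum near {-b / (2 * a):.1f} km/h")
```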
Abstract:
An anonymous membership broadcast scheme is a method in which a sender broadcasts the secret identity of one out of a set of n receivers, in such a way that only the right receiver knows that he is the intended receiver, while the others cannot determine any information about this identity (except that they know that they are not the intended ones). In a w-anonymous membership broadcast scheme, no coalition of up to w receivers, not containing the selected receiver, is able to determine any information about the identity of the selected receiver. We present two new constructions of w-anonymous membership broadcast schemes. The first construction is based on error-correcting codes, and we show that there exist schemes that allow a flexible choice of w while keeping the complexities for broadcast communication, user storage and required randomness polynomial in log n. The second construction is based on the concept of collision-free arrays, which is introduced in this paper. This construction results in more flexible schemes, allowing trade-offs between the different complexities.
Abstract:
Firm-customer digital connectedness for effective sensing and responding is a strategic imperative for contemporary competitive firms. This research-in-progress paper conceptualizes and operationalizes the firm-customer mobile digital connectedness of a smart-mobile customer. The empirical investigation focuses on mobile app users and the impact of mobile apps on customer expectations. Based on pilot data collected from 127 customers, we tested hypotheses pertaining to firm-customer mobile digital connectedness and customer expectations. Our test analysis, using linear and non-linear postulations, reveals that customers raise their expectations as they increase their digital interactions with a firm.