896 results for secret shopping


Relevance: 10.00%

Publisher:

Abstract:

In this paper, we propose a new blind steganalytic method to detect the presence of secret messages embedded in black-and-white images by steganographic techniques. We start by extracting several matrices, such as the run-length matrix, the gap-length matrix and the pixel-difference matrix. We also apply a characteristic function to these matrices to enhance their discriminative capability. We then calculate statistics, including the mean, variance, kurtosis and skewness, to form our feature sets. The empirical work presented demonstrates that our proposed method can effectively detect three different types of steganography, which supports its universality as a blind steganalysis method. In addition, the experimental results show that our proposed method is capable of detecting small amounts of embedded message.
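
A minimal sketch of this style of feature extraction, assuming a binary (0/1) image array; the function names are illustrative and the pipeline is a simplification of the paper's full feature set:

```python
import numpy as np
from itertools import groupby
from scipy.stats import skew, kurtosis

def run_length_histogram(img, max_run=64):
    """Normalised histogram of horizontal run lengths in a binary image."""
    runs = [len(list(g)) for row in img for _, g in groupby(row)]
    hist, _ = np.histogram(runs, bins=np.arange(1, max_run + 2))
    return hist / max(hist.sum(), 1)

def moment_features(img):
    """Four moments of the DFT magnitude (a characteristic-function transform)."""
    cf = np.abs(np.fft.rfft(run_length_histogram(img)))
    return np.array([cf.mean(), cf.var(), skew(cf), kurtosis(cf)])

# example: a random binary image stands in for a cover or stego image
features = moment_features(np.random.randint(0, 2, (64, 64)))
```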

Relevance: 10.00%

Publisher:

Abstract:

Secure multi-party computation (MPC) protocols enable a set of n mutually distrusting participants P_1, ..., P_n, each with a private input x_i, to compute a function Y = F(x_1, ..., x_n) such that at the end of the protocol all participants learn the correct value of Y while the secrecy of the private inputs is maintained. Classical results in unconditionally secure MPC indicate that in the presence of an active adversary, every function can be computed if and only if the number of corrupted participants, t_a, is smaller than n/3. Relaxing the requirement of perfect secrecy and utilizing broadcast channels, one can improve this bound to t_a < n/2. All existing MPC protocols assume that uncorrupted participants are truly honest, i.e., they are not even curious about learning the other participants' secret inputs. Based on this assumption, some MPC protocols are designed in such a way that, after the elimination of all misbehaving participants, the remaining ones learn all information in the system. This is not consistent with maintaining the privacy of the participants' inputs. Furthermore, an improvement of the classical results given by Fitzi, Hirt, and Maurer indicates that, in addition to t_a actively corrupted participants, the adversary may simultaneously corrupt some participants passively. This contradicts the assumption that participants who are not corrupted by an active adversary are truly honest. This paper examines the privacy of MPC protocols and introduces the notion of an omnipresent adversary, which cannot be eliminated from the protocol. The omnipresent adversary can be passive, active or mixed. We assume that up to a minority of the participants who are not corrupted by an active adversary can be corrupted passively, with the restriction that at any time the number of corrupted participants does not exceed a predetermined threshold. We also show that the existence of a t-resilient protocol for a group of n participants implies the existence of a t'-private protocol for a group of n' participants; that is, the elimination of misbehaving participants from a t-resilient protocol leads to the decomposition of the protocol. Our adversary model stipulates that an MPC protocol never operates with a set of truly honest participants, which is a more realistic scenario. Therefore, the privacy of all participants who properly follow the protocol will be maintained. We present a novel disqualification protocol to avoid a loss of privacy for participants who properly follow the protocol.
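
As a toy illustration of the MPC setting itself (not this paper's protocol), the sketch below computes Y = F(x_1, ..., x_n) for the special case of a sum using additive secret sharing over a prime field; all names and parameters are illustrative:

```python
import random

P = 2**61 - 1   # prime modulus for the share arithmetic

def additive_shares(x, n):
    """Split x into n shares that sum to x modulo P; any n-1 shares reveal nothing."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    return shares + [(x - sum(shares)) % P]

inputs = [7, 11, 23]                                   # the private inputs x_i
# participant i sends the j-th share of her input to participant j
dealt = [additive_shares(x, len(inputs)) for x in inputs]
# each participant sums the shares she received; the partial sums combine to Y
partials = [sum(col) % P for col in zip(*dealt)]
Y = sum(partials) % P
assert Y == sum(inputs) % P
```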

Relevance: 10.00%

Publisher:

Abstract:

There has been tremendous interest in watermarking multimedia content during the past two decades, mainly for proving ownership and detecting tampering. Digital fingerprinting, which deals with identifying malicious user(s), has also received significant attention. While extensive work has been carried out on watermarking images, other multimedia objects still have enormous research potential. Watermarking database relations is one of several areas that demand research focus owing to the commercial implications of database theft. There has recently been little progress in database watermarking, with most watermarking schemes modeled after the irreversible database watermarking scheme proposed by Agrawal and Kiernan. Reversibility is the ability to regenerate the original (unmarked) relation from the watermarked relation using a secret key. As explained in our paper, reversible watermarking schemes provide greater security against secondary watermarking attacks, in which an attacker watermarks an already marked relation in an attempt to erase the original watermark. This paper proposes an improvement over the reversible and blind watermarking scheme presented in [5], identifying and eliminating a critical problem with the previous model. Experiments show that the average watermark detection rate is around 91% even when the attacker distorts half of the attributes. The current scheme provides security against secondary watermarking attacks.

Relevance: 10.00%

Publisher:

Abstract:

Database watermarking has received significant research attention in the current decade. However, almost all watermarking models have been either irreversible (the original relation cannot be restored from the watermarked relation) and/or non-blind (requiring the original relation to detect the watermark in the watermarked relation). Such models have several disadvantages compared with reversible and blind watermarking (requiring only the watermarked relation and a secret key, from which the watermark is detected and the original relation is restored), including the inability to identify the rightful owner after a successful secondary watermarking attack, the inability to revert the relation to the original data set (required in high-precision industries), and the need to store the unmarked relation in secure secondary storage. To overcome these problems, we propose a watermarking scheme that is both reversible and blind. We utilize difference expansion on integers to achieve reversibility. The major advantages provided by our scheme are reversibility to a high-quality original data set, rightful owner identification, resistance against secondary watermarking attacks, and no need to store the original database in secure secondary storage.
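
For illustration, a minimal sketch of integer difference expansion, the reversible primitive the abstract names; this is the classic Tian-style transform, not necessarily the paper's exact embedding procedure, and it omits the range/overflow checks a real scheme needs:

```python
def de_embed(x, y, bit):
    """Embed one bit into the integer pair (x, y); returns the marked pair."""
    l = (x + y) // 2          # integer average, recoverable after embedding
    h = x - y                 # difference
    h2 = 2 * h + bit          # expand the difference and hide the bit in its LSB
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pair from the marked pair."""
    h2 = x2 - y2
    l = (x2 + y2) // 2
    bit, h = h2 & 1, h2 >> 1
    return bit, (l + (h + 1) // 2, l - h // 2)

# round trip: the original values are restored exactly (reversibility)
assert de_extract(*de_embed(205, 198, 1)) == (1, (205, 198))
```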

Relevance: 10.00%

Publisher:

Abstract:

Motivated by the need for private set operations in a distributed environment, we extend the two-party private matching problem proposed by Freedman, Nissim and Pinkas (FNP) at Eurocrypt '04 to the distributed setting. Using a secret sharing scheme, we provide a distributed solution to the FNP private matching problem, called distributed private matching. In our distributed private matching scheme, we use a polynomial to represent one party's dataset, as in FNP, and then distribute the polynomial to multiple servers. We extend our solution to distributed set intersection and the cardinality of the intersection, and we further show how to apply distributed private matching to compute the distributed subset relation. Our work extends the private matching and set intersection primitives of Freedman et al. Our distributed construction may be of great value when the dataset is outsourced and its privacy is the main concern: in such cases, our distributed solutions keep the utility of these set operations while the privacy of the dataset is not compromised. Compared with previous work, we achieve a more efficient solution in terms of computation. All protocols constructed in this paper are provably secure against a semi-honest adversary under the Decisional Diffie-Hellman assumption.
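
A minimal plaintext sketch of the FNP-style set representation this line of work builds on: a party's dataset becomes the polynomial whose roots are its elements, so matching reduces to evaluating the polynomial at the other party's value. The real protocols perform this evaluation under homomorphic encryption or, as here, on shares held by multiple servers; the code below shows only the representation idea:

```python
def set_polynomial(elements):
    """Coefficients (lowest degree first) of P(x) = product over s of (x - s)."""
    coeffs = [1]
    for s in elements:
        shifted = [0] + coeffs                    # coefficients of x * P(x)
        scaled = [-s * c for c in coeffs] + [0]   # coefficients of -s * P(x)
        coeffs = [a + b for a, b in zip(shifted, scaled)]
    return coeffs

def evaluate(coeffs, y):
    """P(y); zero exactly when y is an element of the represented set."""
    return sum(c * y**i for i, c in enumerate(coeffs))

P = set_polynomial({3, 17, 42})
print(evaluate(P, 17) == 0)   # True: 17 matches
print(evaluate(P, 5) == 0)    # False: 5 does not match
```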

Relevance: 10.00%

Publisher:

Abstract:

Security models for two-party authenticated key exchange (AKE) protocols have developed over time to provide security even when the adversary learns certain secret keys. In this work, we advance the modelling of AKE protocols by considering more granular, continuous leakage of the long-term secrets of protocol participants: the adversary can adaptively request arbitrary leakage of long-term secrets even after the test session is activated, with limits on the amount of leakage per query but no bound on the total leakage. We present a security model supporting continuous leakage even when the adversary learns certain ephemeral secrets or session keys, and give a generic construction of a two-pass leakage-resilient key exchange protocol that is secure in the model; our protocol achieves continuous, after-the-fact leakage resilience at little additional cost compared with a previous protocol offering only bounded, non-after-the-fact leakage resilience.

Relevance: 10.00%

Publisher:

Abstract:

A key derivation function (KDF) is a function that transforms secret, non-uniformly random source material, together with some public strings, into one or more cryptographic keys. These cryptographic keys are used with a cryptographic algorithm to protect electronic data during both transmission over insecure channels and storage. In this thesis, we propose a new method for constructing a generic stream-cipher-based key derivation function. We show that our proposed key derivation function based on stream ciphers is secure if the underlying stream cipher is secure. We simulate instances of this stream-cipher-based key derivation function using three eSTREAM finalists: Trivium, Sosemanuk and Rabbit. The simulation results show that these stream-cipher-based key derivation functions offer efficiency advantages over the more commonly used key derivation functions based on block ciphers and hash functions.
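
A minimal extract-then-expand sketch of a stream-cipher-based KDF, not the thesis's construction; ChaCha20 (via the `cryptography` package) stands in for the eSTREAM ciphers, which are not available in common libraries, and the salt/nonce handling is deliberately simplified:

```python
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

def stream_kdf(source_material: bytes, salt: bytes, out_len: int) -> bytes:
    # Extract: condense the non-uniform secret source into a uniform 256-bit seed.
    seed = hashlib.sha256(salt + source_material).digest()
    # Expand: key the stream cipher with the seed and output its keystream.
    cipher = Cipher(algorithms.ChaCha20(seed, b"\x00" * 16), mode=None)
    return cipher.encryptor().update(b"\x00" * out_len)

key = stream_kdf(b"shared Diffie-Hellman secret", b"public context string", 32)
```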

Relevance: 10.00%

Publisher:

Abstract:

There is currently some debate about whether the energy expenditure of domestic tasks is sufficient to confer health benefits. The aim of this study was therefore to measure the energy cost of five activities commonly undertaken by mothers of young children. Seven women with at least one child younger than five years of age spent 15 minutes in each of the following activities: sitting quietly, vacuum cleaning, washing windows, walking at a moderate pace (approximately 5 km/hour), walking with a stroller and grocery shopping in a supermarket. Each of the six trials was completed on the same day, in random order. A carefully calibrated portable gas analyser was used to measure oxygen uptake during each activity, and the data were converted to units of energy expenditure (METs). Vacuum cleaning, washing windows and walking with and without a stroller were found to be moderate-intensity activities (3 to 6 METs), but supermarket shopping did not reach this criterion. The MET values for these activities were similar to those reported in the Compendium of Physical Activities (Ainsworth et al., 2000). However, the energy expenditures of walking, both with and without a stroller, were higher than those reported in the Compendium. The findings suggest that some of the tasks associated with domestic caring duties are conducted at an intensity sufficient to confer some health benefit. Such benefits will only accrue, however, if the daily duration of these activities is sufficient to meet current guidelines.

Relevance: 10.00%

Publisher:

Abstract:

Security protocols are designed to provide security properties (goals). They achieve their goals using cryptographic primitives such as key agreement or hash functions. Security analysis tools are used to verify whether or not a security protocol achieves its goals. Special-purpose tools analyse predefined properties such as secrecy (confidentiality), authentication or non-repudiation, but systems with security requirements may also have security goals defined by the user. Analysis of such properties is possible with general-purpose analysis tools such as Coloured Petri Nets (CPNs). This research analyses two security properties defined in a protocol based on the Trusted Platform Module (TPM). The analysed protocol, proposed by Delaune, uses TPM capabilities and secrets to open only one of two submitted secrets to a recipient.

Relevance: 10.00%

Publisher:

Abstract:

The changing and challenging conditions of the 21st century have been significantly impacting our economy, society, and built and natural environments. Today, the generation of knowledge, mostly in the form of technology and innovation, is seen as a panacea for adapting to change and managing challenges (Yigitcanlar, 2010a). Making spaces and places that concentrate on knowledge generation has thus become a priority for many nations (van Winden, 2010). Along with this movement, concepts such as knowledge cities and knowledge precincts have been coined for places where the citizenry undertakes a deliberate and systematic initiative to found its development on the identification and sustainable balance of its shared value system, and bases its ability to create wealth on its capacity to generate and leverage its knowledge capabilities (Carrillo, 2006; Yigitcanlar, 2008a). In recent years, the term knowledge precinct (Hu & Chang, 2005), in its most contemporary interpretation, has evolved into the knowledge community precinct (KCP). A KCP is a mixed-use, post-modern urban setting (flexible, decontextualized, enclaved, fragmented) that includes a critical mass of knowledge enterprises and advanced networked infrastructures, developed with the aim of capturing the benefits of blurring the boundaries between the living, shopping, recreation and working facilities of knowledge workers and their families. KCPs are the critical building blocks of knowledge cities, and thus building successful KCPs contributes significantly to the formation of prosperous knowledge cities. In the literature this type of development, a place combining economic prosperity, environmental sustainability, a just socio-spatial order and good governance, is referred to as knowledge-based urban development (KBUD). This chapter aims to provide a conceptual understanding of KBUD and its contribution to the building of KCPs that support the formation of prosperous knowledge cities.

Relevance: 10.00%

Publisher:

Abstract:

This paper develops a semiparametric estimation approach for mixed count regression models based on series expansion for the unknown density of the unobserved heterogeneity. We use the generalized Laguerre series expansion around a gamma baseline density to model unobserved heterogeneity in a Poisson mixture model. We establish the consistency of the estimator and present a computational strategy to implement the proposed estimation techniques in the standard count model as well as in truncated, censored, and zero-inflated count regression models. Monte Carlo evidence shows that the finite sample behavior of the estimator is quite good. The paper applies the method to a model of individual shopping behavior.
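
A minimal numerical sketch, not the paper's estimator, of the kind of density involved: a gamma baseline scaled by a squared, truncated generalized Laguerre series; the series coefficients and the shape/scale parameters below are hypothetical:

```python
import numpy as np
from scipy.special import eval_genlaguerre
from scipy.stats import gamma

def heterogeneity_density(v, coeffs, shape=1.0, scale=1.0):
    """Gamma(shape, scale) baseline times a squared truncated Laguerre series."""
    series = sum(a_k * eval_genlaguerre(k, shape - 1.0, v / scale)
                 for k, a_k in enumerate(coeffs))
    return gamma.pdf(v, shape, scale=scale) * series ** 2

coeffs = np.array([1.0, 0.3, -0.1])      # hypothetical coefficients; estimated in practice
grid = np.linspace(1e-6, 20, 4000)
vals = heterogeneity_density(grid, coeffs)
density = vals / np.trapz(vals, grid)    # normalise numerically to integrate to one
```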

Relevance: 10.00%

Publisher:

Abstract:

We present a text watermarking scheme that embeds a bitstream watermark W_i in a text document P while preserving the meaning, context, and flow of the document. The document is viewed as a set of paragraphs, each paragraph being a set of sentences. The sequence of paragraphs and sentences used to embed watermark bits is permuted using a secret key. Then, English-language sentence transformations are used to modify sentence lengths, thereby embedding watermark bits in the least significant bits (LSBs) of the sentences' cardinalities. The embedding and extraction algorithms are public, while the secrecy and security of the watermark depend on a secret key K. The probability of false positives is extremely small, which avoids incidental occurrences of our watermark in random text documents. Majority voting provides security against text addition, deletion, and swapping attacks, further reducing the probability of false positives. The scheme is secure against the general attacks on text watermarks such as reproduction (photocopying, fax), reformatting, synonym substitution, text addition, text deletion, text swapping, paragraph shuffling and collusion attacks.
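
A minimal sketch of the extraction side only, under the assumption that watermark bits sit in the parity of sentence word counts; the sentence splitting, the keyed permutation and the bit placement are illustrative simplifications, and the embedding step (the sentence transformations) is omitted:

```python
import random
import re
from collections import Counter

def extract_watermark(text: str, key: int, wm_len: int) -> list:
    """Recover wm_len watermark bits; assumes the text has at least wm_len sentences."""
    sentences = [s for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]
    order = list(range(len(sentences)))
    random.Random(key).shuffle(order)            # secret-key-dependent permutation
    votes = [Counter() for _ in range(wm_len)]
    for pos, idx in enumerate(order):
        bit = len(sentences[idx].split()) & 1    # LSB of the sentence's cardinality
        votes[pos % wm_len][bit] += 1            # the same bit is embedded repeatedly
    return [v.most_common(1)[0][0] for v in votes]   # majority vote per watermark bit
```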

Relevance: 10.00%

Publisher:

Abstract:

A set system (X, F) with X = {x_1, ..., x_m} and F = {B_1, ..., B_n}, where B_i ⊆ X, is called an (n, m) cover-free set system (or CF set system) if for any 1 ≤ i, j, k ≤ n with j ≠ k, |B_i| > 2|B_j ∩ B_k| + 1. In this paper, we show that CF set systems can be used to construct anonymous membership broadcast schemes (or AMB schemes), allowing a center to broadcast a secret identity among a set of users in such a way that the users can verify whether or not the broadcast message contains their valid identity. Our goal is to construct (n, m) CF set systems in which, for a given m, the value n is as large as possible. We give two constructions for CF set systems, one from error-correcting codes and the other from combinatorial designs. We link CF set systems to the concept of cover-free families studied by Erdős et al. in the early 1980s to derive bounds on the parameters of CF set systems. We also discuss some possible extensions of the current work, motivated by different applications.
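
A small checker for the cover-freeness condition quoted above, useful for sanity-testing toy (n, m) set systems; the example system is illustrative:

```python
from itertools import permutations

def is_cover_free(blocks):
    """True if every block B_i satisfies |B_i| > 2*|B_j ∩ B_k| + 1 for all j != k."""
    min_block = min(len(b) for b in blocks)
    return all(min_block > 2 * len(bj & bk) + 1
               for bj, bk in permutations(blocks, 2))

# toy example over X = {0,...,8}: pairwise intersections have size 1,
# and every block has size 4 > 2*1 + 1
blocks = [{0, 1, 2, 3}, {0, 4, 5, 6}, {1, 4, 7, 8}]
print(is_cover_free(blocks))   # True
```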

Relevance: 10.00%

Publisher:

Abstract:

A well-known attack on RSA with a low secret exponent d was given by Wiener about 15 years ago. Wiener showed that, using continued fractions, one can efficiently recover the secret exponent d from the public key (N, e) as long as d < N^{1/4}. Interestingly, Wiener stated that his attack may sometimes also work when d is slightly larger than N^{1/4}. This raises the question of how much larger d can be: could the attack work with non-negligible probability for d = N^{1/4 + ρ} for some constant ρ > 0? We answer this question in the negative by proving a converse to Wiener's result. Our result shows that, for any fixed ε > 0 and all sufficiently large modulus lengths, Wiener's attack succeeds with negligible probability over a random choice of d < N^δ (in an interval of size Ω(N^δ)) as soon as δ > 1/4 + ε. Thus Wiener's success bound d < N^{1/4} cannot be substantially improved. A related question concerns Wiener-type attacks that trade running time for a larger bound on d. The known attacks in this class (by Verheul and van Tilborg, and by Dujella) run in exponential time, so it is natural to ask whether there exists an attack in this class with subexponential run-time. Our second converse result answers this question also in the negative.
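
A minimal sketch of the continued-fraction attack whose limits are being established here: enumerate the convergents of e/N and test whether the implied φ(N) factors N. The function names are illustrative and only the textbook case is handled:

```python
from math import isqrt

def convergents(num, den):
    """Yield the continued-fraction convergents (p, q) of num/den."""
    p0, q0, p1, q1 = 0, 1, 1, 0
    while den:
        a, (num, den) = num // den, (den, num % den)
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        yield p1, q1

def wiener_attack(e, N):
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k          # candidate phi(N) implied by k/d
        s = N - phi + 1                 # equals p + q if phi is correct
        disc = s * s - 4 * N
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d                    # secret exponent recovered
    return None
```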

Relevance: 10.00%

Publisher:

Abstract:

We observe that MDS codes have interesting properties that can be used to construct ideal threshold schemes. These schemes permit the combiner to detect cheating, identify cheaters and recover the correct secret. The construction is later generalised so that the resulting secret sharing is resistant against the Tompa-Woll cheating attack.
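
A minimal sketch of the underlying connection: Shamir's (t, n) threshold scheme is a Reed-Solomon (hence MDS) codeword evaluated at n points, which is what allows incorrect shares to be detected and corrected. Only plain sharing and Lagrange reconstruction are shown; the cheater-identification step is omitted and the field size is illustrative:

```python
import random

P = 2**61 - 1   # prime modulus for the share field

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over the prime field."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```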