6 results for Authenticated Encryption

in Digital Commons at Florida International University


Relevance:

10.00%

Publisher:

Abstract:

In broad terms (including a thief's use of existing credit card, bank, or other accounts), the number of identity fraud victims in the United States ranges from 9 to 10 million per year, or roughly 4% of the US adult population. The average annual theft per stolen identity was estimated at $6,383 in 2006, up approximately 22% from $5,248 in 2003; estimated total theft rose from $53.2 billion in 2003 to $56.6 billion in 2006. About three million Americans each year fall victim to the worst kind of identity fraud: new account fraud. Names, Social Security numbers, dates of birth, and other data are acquired fraudulently from the issuing organization or from the victim; these data are then used to create fraudulent identity documents. In turn, these are presented to other organizations as evidence of identity and used to open new lines of credit, secure loans, "flip" property, or otherwise turn a profit in a victim's name. This is much more time-consuming, and typically more costly, to repair than fraudulent use of existing accounts.

This research borrows from well-established theoretical backgrounds in an effort to answer the question: what is it that makes identity documents credible? Most importantly, identification of the components of credibility draws upon personal construct psychology, the underpinning of the repertory grid technique, a form of structured interviewing that arrives at a description of the interviewee's constructs on a given topic, such as the credibility of identity documents. This represents a substantial contribution to theory, being the first research to use the repertory grid technique to elicit from experts the mental constructs they use to evaluate the credibility of different types of identity documents reviewed in the course of opening new accounts. The research identified twenty-one characteristics, different subsets of which are present on different types of identity documents. Expert evaluations of these documents in different scenarios suggest that visual characteristics are most important for a physical document, while authenticated personal data are most important for a digital document.
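The growth figures quoted in the abstract can be sanity-checked with a few lines of arithmetic (a quick verification sketch; the dollar figures are taken from the abstract above):

```python
# Check the per-identity theft growth: $5,248 (2003) -> $6,383 (2006).
per_identity_2003 = 5248
per_identity_2006 = 6383
growth = per_identity_2006 / per_identity_2003 - 1
print(f"per-identity growth: {growth:.1%}")   # about 21.6%, i.e. roughly 22%

# Check the total-theft growth: $53.2B (2003) -> $56.6B (2006).
total_2003 = 53.2e9
total_2006 = 56.6e9
print(f"total-theft growth: {total_2006 / total_2003 - 1:.1%}")   # about 6.4%
```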

Relevance:

10.00%

Publisher:

Abstract:

Cannabis sativa is the most frequently used of all illicit drugs in the United States. Cannabis has been used throughout history for its stems in the production of hemp fiber, for its seeds as a source of oil and food, and for its buds and leaves as a psychoactive drug. Short tandem repeats (STRs) were chosen as molecular markers because of their distinct advantages over other genetic methods: STRs are co-dominant, can be standardized so that reproducibility between laboratories is easily achieved, have high discriminating power, and can be multiplexed.

In this study, six STR markers previously described for Cannabis were multiplexed into one reaction. The multiplex reaction was able to individualize 98 Cannabis samples (14 hemp and 84 marijuana, authenticated as originating from 33 of the 50 United States) and detected 29 alleles, averaging 4.8 alleles per locus. The data did not relate samples from the same state to each other. This is the first study to report a single-reaction six-plex and apply it to the analysis of almost 100 Cannabis samples of known geographic collection site.
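The individualization claim can be illustrated with a toy sketch: each sample's profile is the tuple of its allele calls at the six multiplexed loci, and a set of samples is individualized when no two profiles match. The locus names and allele values below are hypothetical, not the study's data:

```python
# Toy STR-individualization sketch (hypothetical data, not the study's 98
# samples). A profile is the tuple of allele calls at six multiplexed loci.
LOCI = [f"locus{i}" for i in range(1, 7)]

profiles = {
    "hemp_01":      (3, 5, 2, 7, 4, 6),
    "marijuana_01": (3, 6, 2, 7, 4, 6),   # differs at the second locus only
    "marijuana_02": (4, 5, 1, 7, 3, 6),
}

# Samples are individualized when every profile is distinct.
individualized = len(set(profiles.values())) == len(profiles)

# Average alleles per locus, as reported: 29 alleles over 6 loci.
avg_alleles = 29 / 6
print(individualized, round(avg_alleles, 1))   # True 4.8
```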

Relevance:

10.00%

Publisher:

Abstract:

Type systems for secure information flow aim to prevent a program from leaking information from H (high) to L (low) variables. Traditionally, bisimulation has been the prevalent technique for proving the soundness of such systems. This work introduces a new proof technique based on stripping and fast simulation, and shows that it can be applied in a number of cases where bisimulation fails. We present a progressive development of this technique over a representative sample of languages, including a simple imperative language (core theory), a multiprocessing nondeterministic language, a probabilistic language, and a language with cryptographic primitives. In the core theory, we illustrate the key concepts of this technique in a basic setting. A fast low simulation in the context of transition systems is a binary relation where simulating states can match the moves of simulated states while maintaining the equivalence of low variables; stripping is a function that removes high commands from programs. We show that we can prove secure information flow by arguing that the stripping relation is a fast low simulation. We then extend the core theory to an abstract distributed language under a nondeterministic scheduler. Next, we extend to a probabilistic language with a random assignment command; we generalize fast simulation to the setting of discrete-time Markov chains, and prove approximate probabilistic noninterference. Finally, we introduce cryptographic primitives into the probabilistic language and prove computational noninterference, provided that the underlying encryption scheme is secure.
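The stripping idea can be sketched concretely in a toy imperative setting: commands are tagged with the security level of the variable they assign, stripping removes the high-tagged commands, and for a noninterfering program the stripped run agrees with the full run on all low variables. The names and encoding here are illustrative, not the dissertation's formalism:

```python
# Toy imperative language: a program is a list of (level, command) pairs,
# where each command is a function that mutates a state dict.

def strip_high(program):
    """Stripping: remove commands that assign to high (H) variables."""
    return [(level, cmd) for (level, cmd) in program if level == "L"]

def run(program, state):
    state = dict(state)
    for _level, cmd in program:
        cmd(state)
    return state

def low_view(state):
    """Project the state onto its low variables (names starting with 'l')."""
    return {k: v for k, v in state.items() if k.startswith("l")}

# A secure program: low variables never depend on high ones.
program = [
    ("H", lambda s: s.__setitem__("h", s["h"] * 2)),   # high command
    ("L", lambda s: s.__setitem__("l", s["l"] + 1)),   # low command
]

init = {"h": 7, "l": 0}
full = run(program, init)
stripped = run(strip_high(program), init)

# For a noninterfering program, the stripped run matches on low variables;
# this is the intuition behind "stripping is a fast low simulation".
assert low_view(full) == low_view(stripped)
```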

Relevance:

10.00%

Publisher:

Abstract:

Protecting confidential information from improper disclosure is a fundamental security goal. While encryption and access control are important tools for ensuring confidentiality, they cannot prevent an authorized system from leaking confidential information to its publicly observable outputs, whether inadvertently or maliciously. Hence, secure information flow aims to provide end-to-end control of information flow. Unfortunately, the traditionally adopted policy of noninterference, which forbids all improper leakage, is often too restrictive. Theories of quantitative information flow address this issue by quantifying the amount of confidential information leaked by a system, with the goal of showing that it is intuitively "small" enough to be tolerated. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system.

This dissertation is concerned with program analysis for calculating the maximum leakage, or capacity, of confidential information in the context of deterministic systems and under three proposed entropy measures of information leakage: Shannon entropy leakage, min-entropy leakage, and g-leakage. In this context, it turns out that calculating the maximum leakage of a program reduces to counting the number of possible outputs that it can produce.

The new approach introduced in this dissertation is to determine two-bit patterns, the relationships among pairs of bits in the output; for instance, we might determine that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of possible outputs. Hence, the maximum leakage can be bounded. We first describe a straightforward computation of the two-bit patterns using an automated prover. We then show a more efficient implementation that uses an implication graph to represent the two-bit patterns. It efficiently constructs the graph through the use of an automated prover, random executions, STP counterexamples, and deductive closure. The effectiveness of our techniques, both in terms of efficiency and accuracy, is shown through a number of case studies found in recent literature.
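The counting principle behind the approach can be sketched for a tiny deterministic program: the min-entropy capacity is log2 of the number of distinct outputs, and relationships among output bits (here, two output bits forced to zero) bound that count without full enumeration. The program below is an illustrative example, not one of the dissertation's case studies:

```python
import math

# Toy deterministic program: 4-bit secret in, 4-bit output out.
# Only the top two bits of the secret ever reach the output.
def program(secret):
    return secret & 0b1100

# Exhaustively enumerate the feasible outputs over all 16 secrets.
outputs = {program(s) for s in range(16)}

# For a deterministic program, min-entropy capacity = log2(#outputs).
max_leakage_bits = math.log2(len(outputs))   # log2(4) = 2 bits

# A "two-bit pattern" view of the same fact: in every reachable output,
# bits 0 and 1 are both forced to 0. Relationships like this let the
# analysis bound the output count without enumerating all inputs.
assert all(out & 0b0011 == 0 for out in outputs)
print(sorted(outputs), max_leakage_bits)   # [0, 4, 8, 12] 2.0
```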


Relevance:

10.00%

Publisher:

Abstract:

The purpose of this research is to investigate emerging data security methodologies that will work with the most suitable applications in academic, industrial, and commercial environments. Of the several methodologies considered for the Advanced Encryption Standard (AES), MARS, a block cipher developed by IBM, was selected. Its design takes advantage of the powerful capabilities of modern computers to allow a much higher level of performance than can be obtained from less optimized algorithms such as the Data Encryption Standard (DES). MARS is unique in combining virtually every design technique known to cryptographers in one algorithm. The thesis presents the performance of a flexible 128-bit cipher, a scaled-down version of the MARS algorithm. The cryptosystem used showed performance comparable to that of the original algorithm in speed, flexibility, and security. The algorithm is considered to be very secure and robust and is expected to be implemented for most applications.
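MARS builds its mixed design around a Feistel-style core. The toy 32-bit, 4-round Feistel network below only illustrates that core idea, and in particular why decryption is just the rounds run in reverse; it is NOT MARS, and it is not secure:

```python
# Toy Feistel network: 32-bit block split into 16-bit halves, 4 rounds.
# Illustrative only - real ciphers like MARS use far stronger round
# functions, more rounds, and a proper key schedule.

MASK16 = 0xFFFF

def round_fn(half, subkey):
    """Toy round function: multiply-add mix, then rotate left by 5."""
    x = (half * 0x9E37 + subkey) & MASK16
    return ((x << 5) | (x >> 11)) & MASK16

def feistel_encrypt(block32, subkeys):
    left, right = block32 >> 16, block32 & MASK16
    for k in subkeys:
        left, right = right, left ^ round_fn(right, k)
    return (left << 16) | right

def feistel_decrypt(block32, subkeys):
    # Decryption undoes the rounds in reverse order.
    left, right = block32 >> 16, block32 & MASK16
    for k in reversed(subkeys):
        left, right = right ^ round_fn(left, k), left
    return (left << 16) | right

subkeys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]   # fixed toy subkeys
plaintext = 0xDEADBEEF
ciphertext = feistel_encrypt(plaintext, subkeys)
assert feistel_decrypt(ciphertext, subkeys) == plaintext
```

The Feistel structure makes the cipher invertible regardless of the round function, which is why even this toy `round_fn` round-trips correctly.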