910 results for Multi-modal information processing


Relevance: 100.00%

Abstract:

Cryptographic hash functions are an important tool of cryptography and play a fundamental role in efficient and secure information processing. A hash function processes an arbitrary finite-length input message to a fixed-length output referred to as the hash value. As a security requirement, a hash value should not serve as an image for two distinct input messages, and it should be difficult to find the input message from a given hash value. Secure hash functions serve data integrity, non-repudiation and authenticity of the source in conjunction with digital signature schemes. Keyed hash functions, also called message authentication codes (MACs), serve data integrity and data origin authentication in the secret key setting. The building blocks of hash functions can be designed using block ciphers, using modular arithmetic, or from scratch. The design principles of the popular Merkle–Damgård construction are followed in almost all widely used standard hash functions such as MD5 and SHA-1.
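The iterated structure mentioned above is easy to see in code. Below is a minimal Python sketch of a Merkle–Damgård style hash, assuming a toy 8-byte block size, a fixed illustrative IV and a made-up mixing function as the compression primitive; it shows only the padding-and-chaining skeleton, not any real, secure design.

```python
import struct

BLOCK_SIZE = 8   # bytes per message block (toy value)
STATE_SIZE = 8   # bytes of chaining value / final hash (toy value)
IV = b"\x01" * STATE_SIZE  # fixed initial chaining value (illustrative)

def toy_compress(state: bytes, block: bytes) -> bytes:
    """Toy compression function mixing an 8-byte state with an 8-byte block.
    Real designs (MD5, SHA-1) use carefully constructed round functions instead."""
    s = int.from_bytes(state, "big")
    b = int.from_bytes(block, "big")
    mixed = (s * 0x9E3779B97F4A7C15 ^ b) & 0xFFFFFFFFFFFFFFFF
    mixed = ((mixed << 13) | (mixed >> 51)) & 0xFFFFFFFFFFFFFFFF  # 64-bit rotate
    return mixed.to_bytes(STATE_SIZE, "big")

def md_hash(message: bytes) -> bytes:
    # Merkle-Damgard strengthening: pad with 0x80, zeros, then the message length.
    padded = message + b"\x80"
    while (len(padded) + 8) % BLOCK_SIZE != 0:
        padded += b"\x00"
    padded += struct.pack(">Q", len(message) * 8)  # length in bits
    # Iterate the compression function over the fixed-size blocks.
    state = IV
    for i in range(0, len(padded), BLOCK_SIZE):
        state = toy_compress(state, padded[i:i + BLOCK_SIZE])
    return state  # fixed-length hash value

print(md_hash(b"arbitrary length input").hex())
```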

Relevance: 100.00%

Abstract:

Oscillations of neural activity may bind widespread cortical areas into a neural representation that encodes disparate aspects of an event. In order to test this theory, we have turned to data collected from complex partial epilepsy (CPE) patients with chronically implanted depth electrodes. Data from regions critical to word and face information processing were analyzed using spectral coherence measurements. Similar analyses of intracranial EEG (iEEG) during seizure episodes display hippocampal formation (HCF)-neocortical (NC) spectral coherence patterns that are characteristic of specific seizure stages (Klopp et al. 1996). We are now building a computational memory model to examine whether spatio-temporal patterns of human iEEG spectral coherence emerge in a computer simulation of HCF cellular distribution, membrane physiology and synaptic connectivity. Once the model is reasonably scaled, it will be used as a tool to explore neural parameters that are critical to memory formation and epileptogenesis.
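As a rough illustration of the spectral coherence measurement mentioned above, the following Python sketch computes magnitude-squared coherence between two simulated depth-electrode channels with SciPy; the sampling rate, the shared 6 Hz component and the theta band are illustrative assumptions, not the study's actual recording parameters.

```python
import numpy as np
from scipy.signal import coherence

fs = 256.0                      # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)    # 10 s of simulated data
rng = np.random.default_rng(0)

# Two simulated channels sharing a 6 Hz (theta-band) component plus noise,
# standing in for a hippocampal and a neocortical depth-electrode contact.
shared = np.sin(2 * np.pi * 6 * t)
hcf = shared + 0.5 * rng.standard_normal(t.size)
nc = 0.8 * shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence as a function of frequency.
f, Cxy = coherence(hcf, nc, fs=fs, nperseg=512)

theta = (f >= 4) & (f <= 8)
print("mean theta-band coherence:", Cxy[theta].mean())
```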

Relevance: 100.00%

Abstract:

At CRYPTO 2006, Halevi and Krawczyk proposed two randomized hash function modes and analyzed the security of digital signature algorithms based on these constructions. They showed that the security of signature schemes based on the two randomized hash function modes relies on properties similar to second preimage resistance rather than on the collision resistance property of the hash functions. One of the randomized hash function modes was named the RMX hash function mode and was recommended for practical purposes. The National Institute of Standards and Technology (NIST), USA, standardized a variant of the RMX hash function mode and published this standard in Special Publication (SP) 800-106. In this article, we first discuss a generic online birthday existential forgery attack by Dang and Perlner on the RMX-hash-then-sign schemes. We show that a variant of this attack can be applied to forge other randomize-hash-then-sign schemes. We point out practical limitations of the generic forgery attack on the RMX-hash-then-sign schemes. We then show that these limitations can be overcome for the RMX-hash-then-sign schemes if it is easy to find fixed points for the underlying compression functions, as is the case for the Davies-Meyer construction used in popular hash functions such as MD5, designed by Rivest, and the SHA family of hash functions, designed by the National Security Agency (NSA), USA, and published by NIST in the Federal Information Processing Standards (FIPS). We show an online birthday forgery attack on this class of signatures by using a variant of Dean's method of finding fixed-point expandable messages for hash functions based on the Davies-Meyer construction. This forgery attack is also applicable to signature schemes based on the variant of RMX standardized by NIST in SP 800-106. We discuss some important applications of our attacks and their applicability to signature schemes based on hash functions with 'built-in' randomization. Finally, we compare our attacks on randomize-hash-then-sign schemes with the generic forgery attacks on the standard hash-based message authentication code (HMAC).
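To make the fixed-point property concrete: in the Davies-Meyer mode, H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}, so for any message block m the value H = D_m(0) is a fixed point, since E_m(H) XOR H = 0 XOR H... more precisely E_m(H) = 0, hence the chaining value is unchanged. The sketch below demonstrates this with AES-128 standing in for the block cipher (purely for illustration; MD5 and SHA use their own dedicated internal ciphers) and assumes the Python `cryptography` package.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def davies_meyer(h: bytes, m: bytes) -> bytes:
    """One Davies-Meyer step: H_i = E_m(H_{i-1}) XOR H_{i-1}, with AES as E."""
    enc = Cipher(algorithms.AES(m), modes.ECB()).encryptor()
    return xor(enc.update(h) + enc.finalize(), h)

def fixed_point(m: bytes) -> bytes:
    """For any message block m, H = D_m(0) satisfies E_m(H) XOR H = H."""
    dec = Cipher(algorithms.AES(m), modes.ECB()).decryptor()
    return dec.update(b"\x00" * 16) + dec.finalize()

m = b"some 16-byte blk"           # message block doubling as the AES key
h = fixed_point(m)
assert davies_meyer(h, m) == h    # the chaining value does not change
print("fixed point:", h.hex())
```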

Relevance: 100.00%

Abstract:

In the modern era of information and communication technology, cryptographic hash functions play an important role in ensuring the authenticity, integrity and non-repudiation goals of information security, as well as in efficient information processing. This entry provides an overview of the role of hash functions in information security, popular hash function designs, some important analytical results, and recent advances in this field.

Relevance: 100.00%

Abstract:

We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate regret rates between $\sqrt{T}$ and $\log T$. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
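For orientation, here is a minimal Python sketch of projected online gradient descent on a toy quadratic-loss sequence, with the two classical step-size schedules that give the $\sqrt{T}$ and $\log T$ regret regimes the paper interpolates between; the losses, step sizes and feasible set are illustrative, and this is not the paper's Adaptive Online Gradient Descent algorithm itself.

```python
import numpy as np

def project(x, radius=1.0):
    """Project x onto the Euclidean ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

rng = np.random.default_rng(1)
targets = rng.normal(scale=0.3, size=(200, 3))   # adversary's choices z_t
x = np.zeros(3)
total_loss = 0.0

for t, z in enumerate(targets, start=1):
    loss = np.dot(x - z, x - z)       # f_t(x) = ||x - z_t||^2, revealed after playing x
    total_loss += loss
    grad = 2 * (x - z)
    eta = 1.0 / np.sqrt(t)            # ~O(sqrt(T)) regret for general convex losses
    # eta = 1.0 / (2 * t)             # ~O(log T) regret when losses are strongly convex
    x = project(x - eta * grad)

best_fixed = targets.mean(axis=0)     # comparator: best fixed point in hindsight
comparator_loss = sum(np.dot(best_fixed - z, best_fixed - z) for z in targets)
print("regret:", total_loss - comparator_loss)
```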

Relevance: 100.00%

Abstract:

The notion of being sure that you have completely eradicated an invasive species is fanciful because of imperfect detection and persistent seed banks. Eradication is commonly declared on an ad hoc basis, from notions of seed bank longevity, or by setting arbitrary thresholds of 1% or 5% confidence that the species is not present. Rather than declaring eradication at some arbitrary level of confidence, we take an economic approach in which we stop looking when the expected costs outweigh the expected benefits. We develop theory that determines the number of years of absent surveys required to minimize the net expected cost. Given that detection of a species is imperfect, the optimal stopping time is a trade-off between the cost of continued surveying and the cost of escape and damage if eradication is declared too soon. A simple rule of thumb compares well to the exact optimal solution obtained by stochastic dynamic programming. Application of the approach to the eradication programme for Helenium amarum reveals that the actual stopping time was a precautionary one given the ranges for each parameter. © 2006 Blackwell Publishing Ltd/CNRS.
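A heavily simplified numerical sketch of the trade-off described above: assuming a per-survey detection probability, a prior probability of persistence, an annual survey cost and an escape-damage cost (all made-up values, not the Helenium amarum figures), it picks the number of consecutive absent surveys that minimizes the expected total cost. The full analysis in the paper uses stochastic dynamic programming; this is only the back-of-the-envelope version.

```python
# Illustrative parameters (not taken from the paper).
p_present = 0.5          # prior probability the weed persists
p_detect = 0.6           # probability a survey detects the weed if it is present
cost_survey = 1_000.0    # cost of one annual absent survey
cost_escape = 100_000.0  # damage + control cost if eradication is declared too soon

def expected_cost(k):
    """Expected cost of declaring eradication after k consecutive absent surveys."""
    # Posterior probability of persistence after k non-detections (Bayes' rule).
    undetected = p_present * (1 - p_detect) ** k
    posterior = undetected / (undetected + (1 - p_present))
    return k * cost_survey + posterior * cost_escape

costs = [(k, expected_cost(k)) for k in range(0, 21)]
best_k, best_cost = min(costs, key=lambda kc: kc[1])
print(f"stop after {best_k} absent surveys (expected cost {best_cost:,.0f})")
```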

Relevance: 100.00%

Abstract:

The NLM stream cipher designed by Hoon Jae Lee, Sang Min Sung and Hyeong Rag Kim is a strengthened version of the LM summation generator that combines linear and non-linear feedback shift registers. In recent works, the NLM cipher has been used for message authentication in lightweight communication over wireless sensor networks and for RFID authentication protocols. This work analyses the security of the NLM stream cipher and the NLM-MAC scheme that is built on top of the NLM cipher. We first show that the NLM cipher suffers from two major weaknesses that lead to key recovery and forgery attacks. We prove that the internal state of the NLM cipher can be recovered with time complexity of about $n^{\log 7} \times 2$, where the total length of the internal state is $2 \cdot n + 2$ bits. The attack needs about $n^2$ key-stream bits. We also show that an adversary is able to forge any MAC tag very efficiently, given only one (MAC tag, ciphertext) pair. The proposed attacks are practical and break the scheme with a negligible error probability.
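The NLM internals are not reproduced here, but the linear/non-linear structure of this cipher family can be illustrated with a generic toy summation generator: two LFSRs combined through integer addition, with the carry bit supplying the non-linear memory. Register lengths, taps and seeding below are arbitrary illustrative choices, not the NLM parameters.

```python
def lfsr(state, taps, length):
    """Fibonacci LFSR: yields one output bit per step (toy parameters)."""
    while True:
        out = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (length - 1))
        yield out

def summation_generator(key1, key2, n_bits):
    """Toy summation combiner: keystream bit = x1 + x2 + carry (mod 2),
    with the carry providing the non-linear memory."""
    r1 = lfsr(key1, taps=(0, 2, 3, 5), length=16)   # illustrative taps/lengths
    r2 = lfsr(key2, taps=(0, 1, 4, 6), length=17)
    carry = 0
    stream = []
    for _ in range(n_bits):
        x1, x2 = next(r1), next(r2)
        s = x1 + x2 + carry
        stream.append(s & 1)      # keystream bit
        carry = s >> 1            # non-linear carry update
    return stream

print(summation_generator(0xACE1, 0x1B2F3, 32))
```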

Relevance: 100.00%

Abstract:

Most standard algorithms for prediction with expert advice depend on a parameter called the learning rate. This learning rate needs to be large enough to fit the data well, but small enough to prevent overfitting. For the exponential weights algorithm, a sequence of prior work has established theoretical guarantees for higher and higher data-dependent tunings of the learning rate, which allow for increasingly aggressive learning. But in practice such theoretical tunings often still perform worse (as measured by their regret) than ad hoc tuning with an even higher learning rate. To close the gap between theory and practice, we introduce an approach to learn the learning rate. Up to a factor that is at most (poly)logarithmic in the number of experts and the inverse of the learning rate, our method performs as well as if we knew the empirically best learning rate from a large range that includes both conservative small values and values much higher than those for which formal guarantees were previously available. Our method employs a grid of learning rates, yet runs in linear time regardless of the size of the grid.
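For reference, the sketch below shows the exponential weights (Hedge) update together with a naive grid search over learning rates on synthetic losses; the grid and loss data are illustrative, and this brute-force evaluation of every grid point is not the paper's linear-time method.

```python
import numpy as np

def hedge(losses, eta):
    """Exponential weights with a fixed learning rate eta.
    losses: array of shape (T, K), loss of each of K experts in each round.
    Returns the algorithm's total expected loss over the T rounds."""
    T, K = losses.shape
    weights = np.ones(K)
    total = 0.0
    for t in range(T):
        probs = weights / weights.sum()
        total += probs @ losses[t]            # expected loss this round
        weights *= np.exp(-eta * losses[t])   # multiplicative update
        weights /= weights.max()              # renormalize to avoid underflow
    return total

rng = np.random.default_rng(0)
losses = rng.random((500, 10))                # T = 500 rounds, K = 10 experts

# Naive tuning: run the algorithm once per grid point and keep the best.
grid = np.logspace(-3, 2, 11)                 # from conservative to very aggressive
results = {eta: hedge(losses, eta) for eta in grid}
best_eta = min(results, key=results.get)
print(f"empirically best eta: {best_eta:.3g} (total loss {results[best_eta]:.1f})")
```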

Relevance: 100.00%

Abstract:

We consider online prediction problems where the loss between the prediction and the outcome is measured by the squared Euclidean distance and its generalization, the squared Mahalanobis distance. We derive the minimax solutions for the cases where the prediction and action spaces are the simplex (this setup is sometimes called the Brier game) and the $\ell_2$ ball (this setup is related to Gaussian density estimation). We show that in both cases the value of each sub-game is a quadratic function of a simple statistic of the state, with coefficients that can be efficiently computed using an explicit recurrence relation. The resulting deterministic minimax strategy and randomized maximin strategy are linear functions of the statistic.
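For concreteness, the two losses referred to above are shown in the short sketch below; the weight matrix, prediction and outcome are arbitrary illustrative values.

```python
import numpy as np

def squared_euclidean(pred, outcome):
    """Squared Euclidean distance ||pred - outcome||^2."""
    d = pred - outcome
    return float(d @ d)

def squared_mahalanobis(pred, outcome, W):
    """Squared Mahalanobis distance (pred - outcome)^T W (pred - outcome)
    for a symmetric positive-definite weight matrix W; W = I recovers the
    squared Euclidean loss."""
    d = pred - outcome
    return float(d @ W @ d)

# Illustrative Brier-game-style setup: prediction and outcome on the simplex.
pred = np.array([0.5, 0.3, 0.2])
outcome = np.array([0.0, 1.0, 0.0])
W = np.diag([1.0, 2.0, 1.0])          # illustrative positive-definite weights

print(squared_euclidean(pred, outcome))
print(squared_mahalanobis(pred, outcome, W))
```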

Relevance: 100.00%

Abstract:

This paper addresses the development of trust in the use of Open Data through the incorporation of appropriate authentication and integrity parameters, for use by end-user Open Data application developers, in an architecture for trustworthy Open Data services. The advantage of this architecture is that it is far more scalable and is not another certificate-based hierarchy with the attendant problems of certificate revocation management. With the use of a Public File, if the key is compromised it is a simple matter for the single responsible entity to replace the key pair with a new one and re-perform the data file signing process. Under the proposed architecture, the Open Data environment does not interfere with the internal security schemes that might be employed by the entity. However, the architecture incorporates, when needed, parameters from the entity, e.g. the person who authorized publishing as Open Data, at the time that datasets are created or added.
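A minimal sketch of the signing and verification flow such an architecture relies on, using Ed25519 from the Python `cryptography` package; the entity name and the representation of the Public File as a simple key record are illustrative assumptions, not the paper's exact scheme.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

# --- Publishing entity: sign a dataset and expose the current public key ---
private_key = Ed25519PrivateKey.generate()
dataset = b"id,value\n1,42\n2,17\n"            # illustrative Open Data file
signature = private_key.sign(dataset)

# The "Public File": a single well-known record of the entity's current key.
public_file = {
    "entity": "example-agency",                # illustrative entity name
    "public_key": private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    ).hex(),
}
# If the key is compromised, the entity generates a new pair, replaces this
# single entry, and re-signs its published datasets -- no revocation hierarchy.

# --- End-user Open Data application: verify integrity and origin -----------
verifier = Ed25519PublicKey.from_public_bytes(bytes.fromhex(public_file["public_key"]))
verifier.verify(signature, dataset)            # raises InvalidSignature if tampered
print("dataset verified as published by", public_file["entity"])
```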