949 results for Gray code


Relevance: 60.00%

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility, and is designed to produce features better suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
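As a schematic of the pipeline just described (feature extraction, linear randomization, quantization against learned thresholds, binary encoding), the following sketch uses a key-seeded random projection and median thresholds; all names and parameter choices are illustrative, not taken from the dissertation.

```python
import numpy as np

def project(features, key, hash_bits=64):
    # Randomization stage: a key-seeded *linear* random projection.
    # Linearity preserves feature robustness but, as argued above,
    # limits the achievable security of the hash.
    rng = np.random.default_rng(key)
    P = rng.standard_normal((hash_bits, features.size))
    return P @ features

def train_thresholds(training_set, key, hash_bits=64):
    # Quantizer training: learn one threshold per output bit, here
    # simply the median of each projected dimension over a training set.
    projected = np.array([project(f, key, hash_bits) for f in training_set])
    return np.median(projected, axis=0)

def robust_hash(features, key, thresholds):
    # Quantization and binary encoding against the learned thresholds.
    return (project(features, key, len(thresholds)) > thresholds).astype(np.uint8)
```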

Relevance: 60.00%

Abstract:

Flash memory is a leading storage medium with excellent features such as random access and high storage density. However, it also faces significant reliability and endurance challenges. In flash memory, the charge level of a cell can easily be increased, but removing charge requires an expensive erasure operation. In this thesis we study rewriting schemes that enable the data stored in a set of cells to be rewritten by only increasing the charge levels of the cells. We consider two types of modulation scheme: a conventional modulation based on the absolute levels of the cells, and a recently proposed scheme based on the relative cell levels, called rank modulation. The contributions of this thesis to the study of rewriting schemes for rank modulation include the following: we

• propose a new method of rewriting in rank modulation, beyond the previously proposed method of “push-to-the-top” (sketched in code after this list);

• study the limits of rewriting with the newly proposed method, and derive a tight upper bound of 1 bit per cell;

• extend the rank-modulation scheme to support rankings with repetitions, in order to improve the storage density;

• derive a tight upper bound of 2 bits per cell for rewriting in rank modulation with repetitions;

• construct an efficient rewriting scheme that asymptotically approaches the upper bound of 2 bits per cell.
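As a minimal illustration of the “push-to-the-top” operation referenced above: raising one cell's charge above all others reorders the ranking, and a rewrite is a sequence of such operations. The list-of-cell-indices representation below is an assumption made for illustration, not the thesis's notation.

```python
def push_to_top(ranking, cell):
    """Raise `cell`'s charge above all other cells, making it top-ranked.

    `ranking` lists cell indices from highest to lowest charge level;
    in rank modulation the data are encoded in this relative order,
    so a rewrite is a sequence of push-to-the-top operations.
    """
    return [cell] + [c for c in ranking if c != cell]

# Example: with ranking [2, 0, 1] (cell 2 charged highest), pushing
# cell 1 yields [1, 2, 0] -- a single state transition.
```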

The next part of this thesis studies rewriting schemes for conventional absolute-level modulation. The model considered is called “write-once memory” (WOM). We focus on WOM schemes that achieve the capacity of the model. In recent years several capacity-achieving WOM schemes have been proposed, based on polar codes and randomness extractors. The contributions of this thesis to the study of WOM schemes include the following: we

• propose a new capacity-achieving WOM scheme based on sparse-graph codes, and show its attractive properties for practical implementation;

• improve the design of polar WOM schemes to remove the reliance on shared randomness and include an error-correction capability.

The last part of the thesis studies the local rank-modulation (LRM) scheme, in which a sliding window moving over a sequence of real-valued variables induces a sequence of permutations. The LRM scheme is used to simulate a single conventional multi-level flash cell. The simulated cell is realized by a Gray code traversing all the relative-value states, where, physically, the transition between two adjacent states in the Gray code is achieved by using a single “push-to-the-top” operation. The main results of the last part of the thesis are two constructions of Gray codes with asymptotically optimal rate.
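To make the sliding-window construction concrete, the sketch below computes the sequence of permutations induced by a window moving over real-valued cell levels; the window length and the rank-tuple encoding of a permutation are illustrative choices, not taken from the thesis.

```python
def local_rankings(levels, window=3):
    # For each window position, record the permutation induced by the
    # relative order of the cell levels inside the window.
    perms = []
    for i in range(len(levels) - window + 1):
        w = levels[i:i + window]
        order = sorted(range(window), key=lambda j: w[j])
        rank = [0] * window           # rank[j]: position of w[j], 0 = lowest
        for pos, j in enumerate(order):
            rank[j] = pos
        perms.append(tuple(rank))
    return perms

# Example: local_rankings([0.2, 0.9, 0.5, 0.1]) -> [(0, 2, 1), (2, 1, 0)]
```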

Relevance: 60.00%

Abstract:

To address the insecurity of the CBC mode under the blockwise-adaptive attack model, a new block cipher mode of operation is proposed. The new scheme introduces Gray codes, changing the way inputs are formed in the original mode and breaking the intrinsic dependency between successive outputs and inputs. Its security is analyzed using a reduction argument. The results show that, provided the underlying block cipher is a pseudorandom permutation, the scheme is provably secure under the blockwise-adaptive attack model.
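The abstract does not detail how the Gray code reshapes the mode's inputs, but the standard binary reflected Gray code it presumably builds on maps an index i to g(i) = i XOR (i >> 1), so that successive codewords differ in exactly one bit. A minimal sketch:

```python
def gray(i: int) -> int:
    # Binary reflected Gray code: consecutive values differ in one bit.
    return i ^ (i >> 1)

# [gray(i) for i in range(8)] -> [0, 1, 3, 2, 6, 7, 5, 4]
```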

Relevance: 60.00%

Abstract:

This paper addresses DNA code analysis from the perspective of dynamics and fractional calculus. Several mathematical tools are selected to establish a quantitative method without distorting the alphabet represented by the sequence of DNA bases. The association of Gray code, the Fourier transform and fractional calculus leads to a categorical representation of species and chromosomes.
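The abstract does not give the paper's base-to-codeword assignment; the sketch below uses one natural 2-bit Gray encoding of the four bases (an assumption made for illustration) to turn a DNA string into the kind of numerical signal a Fourier-transform analysis can work with.

```python
# Assumed 2-bit Gray encoding of the DNA alphabet; adjacent codewords
# differ in a single bit (the paper's actual assignment is not stated).
GRAY_MAP = {'A': 0b00, 'C': 0b01, 'G': 0b11, 'T': 0b10}

def encode(sequence: str) -> list[int]:
    # Map a DNA string to small integers for downstream spectral analysis.
    return [GRAY_MAP[base] for base in sequence]

# encode("ACGT") -> [0, 1, 3, 2]
```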

Relevance: 60.00%

Abstract:

This paper analyzes the DNA code of several species from the perspective of information content. For that purpose, several concepts and mathematical tools are selected to establish a quantitative method without a priori distorting the alphabet represented by the sequence of DNA bases. The synergies of associating Gray code, histogram characterization and multidimensional scaling visualization lead to a collection of plots with a categorical representation of species and chromosomes.
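A minimal sketch of the histogram characterization step, reusing the illustrative Gray encoding from the previous example; the word length and the use of relative frequencies are assumptions, not the paper's stated choices.

```python
from collections import Counter

GRAY_MAP = {'A': 0b00, 'C': 0b01, 'G': 0b11, 'T': 0b10}  # assumed mapping

def gray_histogram(sequence: str, word_len: int = 3) -> dict:
    # Histogram of Gray-coded words: each window of `word_len` bases
    # becomes one integer, and the relative frequencies form a signature
    # that can later be compared via multidimensional scaling.
    words = []
    for i in range(len(sequence) - word_len + 1):
        w = 0
        for base in sequence[i:i + word_len]:
            w = (w << 2) | GRAY_MAP[base]
        words.append(w)
    return {w: c / len(words) for w, c in Counter(words).items()}
```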

Relevance: 30.00%

Abstract:

The objective of this thesis is to analyze and understand the dynamics of the controversy surrounding the adoption, in 2009, of the Persons and Family Code in Mali. It focuses in particular on the main issues at stake, that is, the questions at the origin of the controversy, as well as the strategies deployed by the various social actors (Islamic organizations and their allies on one side, women's organizations and theirs on the other) to influence the process. Beyond the why and how of the controversy, our research sought to understand the actors' own assessment of the process, how they felt at the end of this long process, their appraisal of their experience, and their vision of the future. To study this question, we chose the approach of contentious collective action, which draws both on theories of collective action and on those of social movements and contentious dynamics. To analyze the issues at the heart of the controversy, the strategies used by the actors, and their assessment of the process, we opted for a qualitative approach. In addition to the grey literature, press articles, and audio and audiovisual documents on the subject, four months of fieldwork in the Malian capital allowed us to conduct several interviews with actors involved in the process. Stretching from 1996 to 2011, sixteen years in all, the drafting of the Persons and Family Code in Mali was a long, complex, unusual, and controversial process. Our findings reveal that several issues, notably social ones, were at the heart of the controversy: the wife's "duty of obedience" to her husband, the legalization of religious marriage, "equality" between girls and boys in matters of inheritance and succession, and the recognition of children born out of wedlock were the questions that generated the most debate. While throughout the process questions of gender equality and respect for the rights of women and children were the arguments advanced by the women's organizations and their allies, those concerning respect for Malian religious (Islamic), societal, or sociocultural values were put forward by the Islamic organizations and theirs. Thus, while the discourse of the women's civil society organizations (CSOs) centered on "respect for gender equality" in accordance with the international commitments signed by Mali, that of the Islamic CSOs focused instead on "respect for Mali's Islamic and sociocultural values." As for communication channels, the women's CSOs relied on conventional channels such as the press, radio, and conferences, among others. The Islamic CSOs also used these channels, but distinguished themselves from the women's CSOs by also using sermons. Generally held in mosques and other spaces designated for this purpose, these sermons sealed the victory of the Islamic CSOs. Islamic radio stations also played an important role in conveying their messages.

As for action strategies, the collective action that turned the tide in favor of the Islamic CSOs (the code being sent back for a second reading and their ideas being taken into account) was the rally of August 22, 2009 in Bamako, preceded by protest marches in the national capital and all the regional capitals of the country. The women's CSOs, for their part, carried out only a few conventional (or routine) actions such as petitions, advocacy and lobbying, and panel discussions, to the point that some observers spoke of a "strategy of inaction" on their part. The analysis also revealed the use of unusual strategies of threats and intimidation by some actors in the Islamic CSO camp against supporters of the code. While each group of actors formed alliances with local actors, the women's CSOs are the only ones to acknowledge alliances with outside actors. Today, while most members of the Islamic CSOs do not hide their satisfaction with their "victory" and present themselves as "saviors of the Malian nation," most members of the women's CSOs say they are deeply "disappointed" and "outraged" by the adoption of the current code. They do not understand how, starting from a "progressive code," Mali ended up with a "retrograde code, discriminatory" toward women. The thesis confirms not only the difficulty of reconciling customary law, Islamic law, and "modern" law, but also the idea that law remains an expression of relations of power and domination. Finally, our research confirms the now unavoidable influence of religious actors on the public policy-making process in Mali.

Relevance: 30.00%

Abstract:

Background: Gray-scale images make up the bulk of data in biomedical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific, and they do not offer a clear path for shaping a new command line tool from a prototype shell script.

Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design, based on atomic plug-ins and single-task command line tools, makes it easy to extend MIA, usually without the need to touch or recompile existing code.

Conclusion: In this article, we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of the high-resolution image data that arise in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.

Relevance: 20.00%

Abstract:

Faces are complex patterns that often differ in only subtle ways. Face recognition algorithms have difficulty coping with differences in lighting, cameras, pose, expression, etc. We propose a novel approach to facial recognition based on a new feature extraction method called fractal image-set encoding. This feature extraction method is a specialized fractal image coding technique that makes fractal codes more suitable for object and face recognition. A fractal code of a gray-scale image can be divided into two parts: geometrical parameters and luminance parameters. We show that fractal codes for an image are not unique and that we can change the set of fractal parameters without significant change in the quality of the reconstructed image. Fractal image-set coding keeps the geometrical parameters the same for all images in the database. Differences between images are captured in the non-geometrical, or luminance, parameters, which are faster to compute. Results on a subset of the XM2VTS database are presented.
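As a sketch of the luminance half of a fractal code: once the geometric mapping is fixed (each range block paired with a fixed, spatially contracted domain block), the scale s and offset o that make s·D + o best approximate a range block R follow from an ordinary least-squares fit. The function and variable names below are illustrative, not the paper's notation.

```python
import numpy as np

def luminance_parameters(domain_block, range_block):
    # Least-squares fit of scale s and offset o such that
    # s * domain_block + o best approximates range_block.
    d = domain_block.ravel().astype(float)
    r = range_block.ravel().astype(float)
    A = np.column_stack([d, np.ones_like(d)])
    (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
    return s, o

# With the geometry held fixed across the whole database, only these
# luminance parameters differ between face images, so they can serve
# as the feature vector for recognition.
```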

Relevance: 20.00%

Abstract:

This article explains the relevance of the Code and its place in the regulatory framework, discusses some of the key issues arising in the recent review (as identified by consumer advocates), and explains the relationship between the Code and the Financial Ombudsman Service.