941 results for Algebraic lattices


Relevance: 20.00%

Abstract:

The real-quaternionic indicator, also called the $\delta$ indicator, indicates whether a self-conjugate representation is of real or quaternionic type. It is closely related to the Frobenius-Schur indicator, which we call the $\varepsilon$ indicator. The Frobenius-Schur indicator $\varepsilon(\pi)$ is known to be given by a particular value of the central character. We would like a similar result for the $\delta$ indicator. When $G$ is compact, $\delta(\pi)$ and $\varepsilon(\pi)$ coincide, but in general they need not be the same. In this thesis, we give a relation between the two indicators when $G$ is a real reductive algebraic group. This relation also leads to a formula for $\delta(\pi)$ in terms of the central character. In the second part, we consider the construction of the local Langlands correspondence for $GL(2,F)$ when $F$ is a non-Archimedean local field of odd residual characteristic. By re-examining the construction, we provide new proofs of some important properties of the correspondence; in particular, the construction is independent of the choice of additive character in the theta correspondence.
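
For background, a standard fact (not a result of the thesis): when $G$ is compact, with normalized Haar measure, the Frobenius-Schur indicator of an irreducible representation $\pi$ with character $\chi_\pi$ can be computed as
$$\varepsilon(\pi) = \int_G \chi_\pi(g^2)\,dg \in \{1, 0, -1\},$$
where $1$, $-1$ and $0$ correspond to real, quaternionic and complex (non-self-conjugate) type respectively; it is this indicator that the thesis relates to the central character and to $\delta(\pi)$.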

Relevance: 20.00%

Abstract:

The techniques of algebraic geometry have been widely and successfully applied to the study of linear codes over finite fields since the early 1980s. Recently, there has been an increased interest in the study of linear codes over finite rings. In this thesis, we combine these two approaches to coding theory by introducing and studying algebraic geometric codes over rings.

Relevance: 20.00%

Abstract:

Clusters of temporal optical solitons—stable self-localized light pulses preserving their form during propagation—exhibit properties characteristic of those encountered in crystals. Here, we introduce the concept of temporal solitonic information crystals formed by lattices of optical pulses with variable phases. The proposed general idea offers new approaches to coherent optical transmission technology and can be generalized to dispersion-managed and dissipative solitons, as well as scaled to a variety of physical platforms from fiber optics to silicon chips. We discuss the key properties of such dynamic temporal crystals, which mathematically correspond to non-Hermitian lattices, and examine the types of collective-mode instabilities that determine the lifetime of the soliton train. This transfer of techniques and concepts from solid-state physics to information theory promises a new outlook on information storage and transmission.

Relevance: 20.00%

Abstract:

Despite the wide applicability of the field capacity (FC) concept in hydrology and engineering, it presents various ambiguities and inconsistencies owing to a lack of standardized methodological procedures. Experimental field and laboratory protocols taken from the literature were used in this study to determine the value of FC at different depths in 29 soil profiles, totaling 209 soil samples. The volumetric water content (θ) was also determined at three suctions (6 kPa, 10 kPa, 33 kPa), along with bulk density (BD), texture (T) and organic matter content (OM). The protocols were devised based on the water processes involved in the FC concept, aiming to minimize hydraulic inconsistencies and procedural difficulty while maintaining the practical meaning of the concept. A high correlation between FC and θ(6 kPa) allowed the development of a pedotransfer function (Equation 3), quadratic in θ(6 kPa), resulting in an accurate and nearly bias-free calculation of FC for the four database geographic areas, with a global root mean squared residue (RMSR) of 0.026 m³·m⁻³. At the individual soil profile scale, the maximum RMSR was only 0.040 m³·m⁻³. The BD, T and OM data were generally of low predictive quality for FC when not accompanied by the moisture variables. As all the FC values were obtained with the same experimental protocol, and as the predictive quality of Equation 3 was clearly better than that of the classical method, which takes FC equal to θ(6), θ(10) or θ(33), we recommend using Equation 3 rather than the classical method, as well as the protocol presented here, to determine in-situ FC.
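
Since Equation 3 itself is not reproduced in the abstract, the sketch below only illustrates the general procedure described: fitting a pedotransfer function quadratic in θ(6 kPa) to measured field-capacity values by least squares and evaluating it with the root mean squared residue (RMSR). All values and names are placeholders, not the study's data or coefficients.

```python
import numpy as np

# Hypothetical paired measurements: theta at 6 kPa and field capacity (m^3 m^-3).
theta6 = np.array([0.18, 0.25, 0.31, 0.37, 0.42])
fc     = np.array([0.17, 0.23, 0.28, 0.33, 0.37])

# Fit FC = a*theta6^2 + b*theta6 + c by ordinary least squares.
a, b, c = np.polyfit(theta6, fc, deg=2)

# Root mean squared residue (RMSR) of the fitted pedotransfer function.
pred = np.polyval([a, b, c], theta6)
rmsr = np.sqrt(np.mean((fc - pred) ** 2))
print(f"a={a:.4f}, b={b:.4f}, c={c:.4f}, RMSR={rmsr:.4f} m^3/m^3")
```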

Relevance: 10.00%

Abstract:

In this paper we discuss our current efforts to develop and implement an exploratory, discovery-mode assessment item into the total learning and assessment profile for a target group of about 100 second-level engineering mathematics students. The assessment item under development is composed of two parts, namely, a set of "pre-lab" homework problems (which focus on relevant prior mathematical knowledge, concepts and skills), and complementary computing laboratory exercises which are undertaken within a fixed (1 hour) time frame. In particular, the computing exercises exploit the algebraic manipulation and visualisation capabilities of the symbolic algebra package MAPLE, with the aim of promoting understanding of certain mathematical concepts and skills via visual and intuitive reasoning, rather than a formal or rigorous approach. The assessment task we are developing is aimed at providing students with a significant learning experience, in addition to providing feedback on their individual knowledge and skills. To this end, a noteworthy feature of the scheme is that marks awarded for the laboratory work are primarily based on the extent to which reflective, critical thinking is demonstrated, rather than the number of CBE-style tasks completed by the student within the allowed time.

With regard to student learning outcomes, a novel and potentially critical feature of our scheme is that the assessment task is designed to be intimately linked to the overall course content, in that it aims to introduce important concepts and skills (via individual student exploration) which will be revisited somewhat later in the pedagogically more restrictive formal lecture component of the course (typically a large-group plenary format). Furthermore, the time delay involved, or "incubation period", is also a deliberate design feature: it is intended to allow students the opportunity to undergo potentially important internal re-adjustments in their understanding before being exposed to lectures on related course content, which are invariably delivered in a more condensed, formal and mathematically rigorous manner.

In our presentation, we will discuss in more detail our motivation and rationale for trialling such a scheme for the targeted student group. Some of the advantages and disadvantages of our approach (as we perceived them at the initial stages) will also be enumerated. In a companion paper, the theoretical framework for our approach will be more fully elaborated, and measures of student learning outcomes (as obtained from, e.g., student-provided feedback) will be discussed.

Relevance: 10.00%

Abstract:

Generalising arithmetic structures is seen as a key to developing algebraic understanding. Many adolescent students begin secondary school with a poor understanding of the structure of arithmetic. This paper presents a theory for a teaching/learning trajectory designed to build mathematical understanding and abstraction in the elementary school context. The particular focus is on the use of models and representations to construct an understanding of equivalence. The results of a longitudinal intervention study with five elementary schools, following 220 students as they progressed from Year 2 to Year 6, informed the development of this theory. Data were gathered from multiple sources including interviews, videos of classroom teaching, and pre- and post-tests. Data reduction resulted in the development of nine conjectures representing a growth in integration of models and representations. These conjectures formed the basis of the theory.

Relevance: 10.00%

Abstract:

Spoken term detection (STD) popularly involves performing word or sub-word level speech recognition and indexing the result. This work challenges the assumption that improved speech recognition accuracy implies better indexing for STD. Using an index derived from phone lattices, this paper examines the effect of language model selection on the relationship between phone recognition accuracy and STD accuracy. Results suggest that language models usually improve phone recognition accuracy but their inclusion does not always translate to improved STD accuracy. The findings suggest that using phone recognition accuracy to measure the quality of an STD index can be problematic, and highlight the need for an alternative that is more closely aligned with the goals of the specific detection task.
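
As a highly simplified sketch of phone-level indexing and term lookup (not the lattice-derived index or scoring used in this work; all identifiers and data below are hypothetical):

```python
from collections import defaultdict

def build_index(utterances, n=3):
    """Index phone n-grams -> list of (utterance id, start position)."""
    index = defaultdict(list)
    for utt_id, phones in utterances.items():
        for i in range(len(phones) - n + 1):
            index[tuple(phones[i:i + n])].append((utt_id, i))
    return index

def detect(term_phones, index, n=3):
    """Return candidate hits whose indexed n-grams match the term's phone string."""
    hits = []
    for i in range(len(term_phones) - n + 1):
        hits.extend(index.get(tuple(term_phones[i:i + n]), []))
    return hits

utterances = {"utt1": ["s", "p", "ow", "k", "ax", "n", "t", "er", "m"]}
index = build_index(utterances)
print(detect(["t", "er", "m"], index))   # -> [('utt1', 6)]
```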

Relevance: 10.00%

Abstract:

This thesis is about the derivation of the addition law on an arbitrary elliptic curve and efficiently adding points on such a curve using the derived addition law. The outcomes of this research guarantee practical speedups in higher-level operations which depend on point additions. In particular, the contributions immediately find applications in cryptology. Studied intensively by nineteenth-century mathematicians, the theory of elliptic curves has been an active area of research for decades. Elliptic curves over finite fields made their way into public key cryptography in the late 1980s with independent proposals by Miller [Mil86] and Koblitz [Kob87]. Elliptic Curve Cryptography (ECC), following Miller's and Koblitz's proposals, employs the group of rational points on an elliptic curve in building discrete logarithm based public key cryptosystems. Starting in the late 1990s, the emergence of the ECC market boosted research into the computational aspects of elliptic curves. This thesis falls into this same area of research, where the main aim is to speed up the addition of rational points on an arbitrary elliptic curve (over a field of large characteristic). The outcomes of this work can be used to speed up applications which are based on elliptic curves, including cryptographic applications in ECC.

The aforementioned goals of this thesis are achieved in five main steps. As the first step, this thesis brings together several algebraic tools in order to derive the unique group law of an elliptic curve. This step also includes an investigation of recent computer algebra packages relating to their capabilities. Although the group law is unique, its evaluation can be performed using abundant (in fact infinitely many) formulae. As the second step, this thesis works towards finding the best formulae for efficient addition of points. In the third step, the group law is stated explicitly by handling all possible summands. The fourth step presents the algorithms to be used for efficient point additions. In the fifth and final step, optimized software implementations of the proposed algorithms are presented in order to show that the theoretical speedups of step four can be obtained in practice.

In each of the five steps, this thesis focuses on five forms of elliptic curves over finite fields of large characteristic. These forms and their defining equations are:
(a) Short Weierstrass form, y^2 = x^3 + ax + b;
(b) Extended Jacobi quartic form, y^2 = dx^4 + 2ax^2 + 1;
(c) Twisted Hessian form, ax^3 + y^3 + 1 = dxy;
(d) Twisted Edwards form, ax^2 + y^2 = 1 + dx^2y^2;
(e) Twisted Jacobi intersection form, bs^2 + c^2 = 1, as^2 + d^2 = 1.
These forms are the most promising candidates for efficient computations and are thus considered in this work. Nevertheless, the methods employed in this thesis are capable of handling arbitrary elliptic curves.

From a high-level point of view, the following outcomes are achieved in this thesis.
- Related literature results are brought together and further revisited. For most of the cases several missed formulae, algorithms, and efficient point representations are discovered.
- Analogies are made among all studied forms. For instance, it is shown that two sets of affine addition formulae are sufficient to cover all possible affine inputs as long as the output is also an affine point in any of these forms. In the literature, many special cases, especially interactions with points at infinity, were omitted from discussion. This thesis handles all of the possibilities.
- Several new point doubling/addition formulae and algorithms are introduced which are more efficient than the existing alternatives in the literature. Most notably, the speed of the extended Jacobi quartic, twisted Edwards, and Jacobi intersection forms is improved. New unified addition formulae are proposed for the short Weierstrass form. New coordinate systems are studied for the first time.
- An optimized implementation is developed using a combination of generic x86-64 assembly instructions and the plain C language. The practical advantages of the proposed algorithms are supported by computer experiments.
- All formulae presented in the body of this thesis are checked for correctness using computer algebra scripts, together with details on register allocations.
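
For orientation, a minimal sketch of textbook affine point addition on a short Weierstrass curve over a prime field follows (the naive chord-and-tangent rule, not the optimized formulae, coordinate systems or implementations developed in the thesis; the curve and point below are toy values):

```python
def ec_add(P, Q, a, p):
    """Affine chord-and-tangent addition on y^2 = x^3 + a*x + b over GF(p).
    Points are (x, y) tuples; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Toy example: doubling P = (3, 6) on y^2 = x^3 + 2x + 3 over GF(97).
print(ec_add((3, 6), (3, 6), a=2, p=97))   # -> (80, 10)
```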

Relevance: 10.00%

Abstract:

When asymptotic series methods are applied in order to solve problems that arise in applied mathematics in the limit that some parameter becomes small, they are unable to capture behaviour that occurs on a scale that is exponentially small compared with the algebraic terms of the asymptotic series. There are many examples of physical systems where behaviour on this scale has important effects and, as such, a range of techniques known as exponential asymptotic techniques were developed that may be used to examine behaviour on this exponentially small scale. Many problems in applied mathematics may be represented by behaviour within the complex plane, which may subsequently be examined using asymptotic methods. These problems frequently demonstrate behaviour known as Stokes phenomenon, which involves rapid switches of behaviour on an exponentially small scale in the neighbourhood of some curve known as a Stokes line. Exponential asymptotic techniques have been applied in order to obtain an expression for this exponentially small switching behaviour in the solutions to ordinary and partial differential equations.

The problem of potential flow over a submerged obstacle has been previously considered in this manner by Chapman & Vanden-Broeck (2006). By representing the problem in the complex plane and applying an exponential asymptotic technique, they were able to detect the switching, and subsequent behaviour, of exponentially small waves on the free surface of the flow in the limit of small Froude number, specifically considering the case of flow over a step with one Stokes line present in the complex plane. We consider an extension of this work to flow configurations with multiple Stokes lines, such as flow over an inclined step, or flow over a bump or trench. The resultant expressions are analysed and demonstrate interesting implications, such as the presence of exponentially sub-subdominant intermediate waves and the possibility of trapped surface waves for flow over a bump or trench.

We then consider the effect of multiple Stokes lines in higher-order equations, particularly investigating the behaviour of higher-order Stokes lines in the solutions to partial differential equations. These higher-order Stokes lines switch off the ordinary Stokes lines themselves, adding a layer of complexity to the overall Stokes structure of the solution. Specifically, we consider the different approaches taken by Howls et al. (2004) and Chapman & Mortimer (2005) in applying exponential asymptotic techniques to determine the higher-order Stokes phenomenon behaviour in the solution to a particular partial differential equation.
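
For orientation, a generic illustration of the structure involved (not a formula from the thesis): in the limit $\epsilon \to 0$ such solutions typically take the form
$$ f(\epsilon) \sim \sum_{n=0}^{\infty} a_n \epsilon^n \;+\; \mathcal{S}(\epsilon)\, C\, \epsilon^{\alpha} e^{-\chi/\epsilon}, $$
where the exponentially small final term lies beyond all orders of the algebraic series, and the Stokes multiplier $\mathcal{S}$ switches rapidly between $0$ and $1$ as a Stokes line is crossed. Exponential asymptotic techniques are designed to compute precisely such terms and their switching behaviour.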

Relevance: 10.00%

Abstract:

This paper presents a simple and intuitive approach to determining the kinematic parameters of a serial-link robot in Denavit–Hartenberg (DH) notation. Once a manipulator’s kinematics is parameterized in this form, a large body of standard algorithms and code implementations for kinematics, dynamics, motion planning, and simulation are available. The proposed method has two parts. The first is the “walk through,” a simple procedure that creates a string of elementary translations and rotations, from the user-defined base coordinate to the end-effector. The second step is an algebraic procedure to manipulate this string into a form that can be factorized as link transforms, which can be represented in standard or modified DH notation. The method allows for an arbitrary base and end-effector coordinate system as well as an arbitrary zero joint angle pose. The algebraic procedure is amenable to computer algebra manipulation and a Java program is available as supplementary downloadable material.
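
For reference, the standard DH link transform that such a factorization targets is the product Rz(theta) Tz(d) Tx(a) Rx(alpha). A minimal numeric sketch follows (the link parameters are hypothetical, and this shows only the textbook transform, not the paper's walk-through or factorization procedure):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform: Rz(theta) @ Tz(d) @ Tx(a) @ Rx(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Forward kinematics of a planar two-link arm (hypothetical DH rows).
dh_rows = [(np.pi / 4, 0.0, 1.0, 0.0),
           (np.pi / 6, 0.0, 0.8, 0.0)]
T = np.eye(4)
for row in dh_rows:
    T = T @ dh_transform(*row)
print(T[:3, 3])   # end-effector position in the base frame
```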

Relevance: 10.00%

Abstract:

This thesis is devoted to the study of linear relationships in symmetric block ciphers. A block cipher is designed so that the ciphertext is produced as a nonlinear function of the plaintext and secret master key. However, linear relationships within the cipher can still exist if the texts and components of the cipher are manipulated in a number of ways, as shown in this thesis. There are four main contributions of this thesis.

The first contribution is the extension of the applicability of integral attacks from word-based to bit-based block ciphers. Integral attacks exploit the linear relationship between texts at intermediate stages of encryption. This relationship can be used to recover subkey bits in a key recovery attack. In principle, integral attacks can be applied to bit-based block ciphers. However, specific tools to define the attack on these ciphers are not available. This problem is addressed in this thesis by introducing a refined set of notations to describe the attack. The bit-pattern-based integral attack is successfully demonstrated on reduced-round variants of the block ciphers Noekeon, Present and Serpent.

The second contribution is the discovery of a very small system of equations that describes the LEX-AES stream cipher. LEX-AES is based heavily on the 128-bit-key (16-byte) Advanced Encryption Standard (AES) block cipher. In one instance, the system contains 21 equations and 17 unknown bytes. This is very close to the upper limit for an exhaustive key search, which is 16 bytes. One only needs to acquire 36 bytes of keystream to generate the equations. Therefore, the security of this cipher depends on the difficulty of solving this small system of equations.

The third contribution is the proposal of an alternative method to measure diffusion in the linear transformation of Substitution-Permutation-Network (SPN) block ciphers. Currently, the branch number is widely used for this purpose. It is useful for estimating the possible success of differential and linear attacks on a particular SPN cipher. However, the measure does not give information on the number of input bits that are left unchanged by the transformation when producing the output bits. The new measure introduced in this thesis is intended to complement the current branch number technique. The measure is based on fixed points and simple linear relationships between the input and output words of the linear transformation. It represents the average fraction of input words to a linear diffusion transformation that are not effectively changed by the transformation. This measure is applied to the block ciphers AES, ARIA, Serpent and Present. It is shown that, except for Serpent, the linear transformations used in the block ciphers examined do not behave as expected for a random linear transformation.

The fourth contribution is the identification of linear paths in the nonlinear round function of the SMS4 block cipher. The SMS4 block cipher is used as a standard in the Chinese WLAN Authentication and Privacy Infrastructure (WAPI) and hence the round function should exhibit a high level of nonlinearity. However, the findings in this thesis on the existence of linear relationships show that this is not the case. It is shown that in some exceptional cases, the first four rounds of SMS4 are effectively linear. In these cases, the effective number of rounds for SMS4 is reduced by four, from 32 to 28.
The findings raise questions about the security provided by SMS4, and might provide clues on the existence of a flaw in the design of the cipher.
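
As a toy illustration (not taken from the thesis) of the balanced property that integral attacks trace through a cipher: pushing a saturated ("all") set of nibble values through a bijective 4-bit S-box leaves the XOR-sum of the outputs equal to zero, because the outputs are simply a permutation of the inputs. The S-box values below are the published PRESENT S-box.

```python
from functools import reduce

# PRESENT's 4-bit S-box (public specification).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

inputs = range(16)                         # a saturated ("all") nibble
outputs = [SBOX[x] for x in inputs]
xor_sum = reduce(lambda acc, v: acc ^ v, outputs, 0)
print(hex(xor_sum))                        # 0x0: the set remains balanced after the S-box
```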

Relevance: 10.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while it properly maintains a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.

For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
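
As a rough illustration of lattice vector quantization, the sketch below quantizes stand-in "wavelet coefficients" to a scaled cubic (Z^n) lattice. This is only the simplest special case with a fixed scaling factor; the thesis's algorithm works with general lattices and determines the truncation level and scaling factor adaptively per subimage, which is not reproduced here. The data and scale are placeholders.

```python
import numpy as np

def lvq_cubic(vectors, scale):
    """Quantize each coefficient vector to the scaled integer (cubic Z^n) lattice."""
    vectors = np.asarray(vectors, dtype=float)
    indices = np.rint(vectors / scale).astype(int)    # nearest lattice point indices
    return indices, indices * scale                   # (codes, reconstruction)

rng = np.random.default_rng(0)
subband = rng.laplace(scale=0.5, size=(8, 4))         # placeholder for wavelet coefficients
codes, recon = lvq_cubic(subband, scale=0.25)
print("mean squared quantization error:", np.mean((subband - recon) ** 2))
```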