999 results for Sobolev inner products
Abstract:
In this paper, we show how to compute in O(n²) steps the Fourier coefficients associated with the Gelfand-Levitan approach for discrete Sobolev orthogonal polynomials on the unit circle when the support of the discrete component involving derivatives is located outside the closed unit disk. As a consequence, we deduce the outer relative asymptotics of these polynomials in terms of those associated with the original orthogonality measure. Moreover, we show how to recover the discrete part of our Sobolev inner product.
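As a concrete illustration of such an inner product (the measure, the mass point z0, the weight lam, and the truncation N below are assumptions for illustration, not the paper's setup), one can take Lebesgue measure on the circle plus one derivative mass point outside the closed unit disk; the monomial Gram matrix is then explicit, and a Cholesky-based Gram-Schmidt, at O(n³) cost rather than the paper's O(n²) recursion, produces the orthonormal Sobolev polynomials:

```python
import numpy as np

# Illustrative parameters (assumptions): Lebesgue measure on the unit circle
# plus one derivative mass point z0 outside the closed unit disk.
z0, lam, N = 1.5 + 0.5j, 2.0, 6

# Gram matrix of 1, z, ..., z^{N-1} under the Sobolev inner product
# <f,g> = (1/2pi) int f conj(g) dtheta + lam * f'(z0) * conj(g'(z0)).
j = np.arange(N)
G = np.eye(N, dtype=complex)        # circle part: <z^j, z^k> = delta_jk
d = j * z0 ** (j - 1.0)             # derivatives of the monomials at z0
G += lam * np.outer(d, d.conj())    # discrete derivative part

# Cholesky-based Gram-Schmidt: the rows of inv(L) hold the coefficients of
# the orthonormal Sobolev polynomials in the monomial basis.
L = np.linalg.cholesky(G)
coeffs = np.linalg.inv(L)           # coeffs[n] = n-th orthonormal polynomial
print(np.round(coeffs @ G @ coeffs.conj().T, 10))  # should be the identity
```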
Abstract:
Predicate encryption is a new primitive that supports flexible control over access to encrypted data. We study predicate encryption systems that evaluate a wide class of predicates. Our systems are more expressive than the existing attribute-hiding systems in the sense that the proposed constructions support not only all existing predicate evaluations but also arbitrary conjunctions and disjunctions of comparison and subset queries. Toward this goal, we propose encryption schemes supporting multi-inner-product predicates and provide a formal security analysis. We show how to apply the proposed schemes to achieve all of these predicate evaluations.
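For readers new to the primitive, here is an unencrypted sketch of the standard polynomial trick, in the Katz-Sahai-Waters tradition, that turns equality and disjunction queries into "inner product equals zero" tests (whether this paper uses exactly this encoding is an assumption; the real schemes evaluate the same dot product under encryption, hiding the attribute vector):

```python
import numpy as np

# Plaintext illustration of predicate-to-inner-product encodings; in an actual
# predicate encryption scheme the attribute vector lives inside the ciphertext.

def attr_vector(x, deg):
    """Attribute x mapped to its powers (1, x, x^2, ..., x^deg)."""
    return np.array([x ** i for i in range(deg + 1)])

def equals(a, deg):
    """Predicate 'x == a' as the coefficients of p(t) = t - a, zero-padded."""
    return np.array([-a, 1] + [0] * (deg - 1))

def disjunction(p, q):
    """'p(x) = 0 OR q(x) = 0' via the product polynomial p * q."""
    return np.convolve(p, q)

# (x - 3)(x - 5) vanishes exactly when x is 3 or 5; conjunctions are handled
# analogously, e.g. by concatenating vectors (a multi-inner-product check).
pred = disjunction(equals(3, 1), equals(5, 1))
for x in (3, 4, 5):
    holds = np.dot(pred, attr_vector(x, len(pred) - 1)) == 0
    print(f"x = {x}: predicate holds? {holds}")
```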
Abstract:
Inner products of the type $\langle f, g \rangle_S = \langle f, g \rangle_{\psi_0} + \langle f', g' \rangle_{\psi_1}$, where one of the measures $\psi_0$ or $\psi_1$ is the measure associated with the Gegenbauer polynomials, are usually referred to as Gegenbauer-Sobolev inner products. This paper deals with some asymptotic relations for the orthogonal polynomials with respect to a class of Gegenbauer-Sobolev inner products. The inner products are such that the associated pairs of symmetric measures $(\psi_0, \psi_1)$ do not fall within the concept of symmetrically coherent pairs of measures.
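A quick numerical sketch of such an inner product (the Gegenbauer parameters, quadrature size, and degree below are arbitrary choices for illustration): take Gegenbauer weights with two different parameters for $\psi_0$ and $\psi_1$, so the pair is in general not coherent, and orthogonalize the monomials.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.special import roots_gegenbauer

# psi0 and psi1 are Gegenbauer measures with different parameters, so the
# pair (psi0, psi1) is in general not a coherent pair.
a0, a1, N = 0.5, 1.5, 5
x0, w0 = roots_gegenbauer(40, a0)   # Gauss nodes/weights for (1-x^2)^(a0-1/2)
x1, w1 = roots_gegenbauer(40, a1)

def sobolev_ip(p, q):
    """<p, q>_S = int p q dpsi0 + int p' q' dpsi1, via Gauss quadrature."""
    dp, dq = P.polyder(p), P.polyder(q)
    return (np.sum(w0 * P.polyval(x0, p) * P.polyval(x0, q))
            + np.sum(w1 * P.polyval(x1, dp) * P.polyval(x1, dq)))

# Gram-Schmidt on 1, x, ..., x^(N-1) yields the Sobolev orthogonal polynomials.
sob = []
for n in range(N):
    p = np.zeros(n + 1)
    p[n] = 1.0                       # monomial x^n
    for s in sob:
        p = P.polysub(p, sobolev_ip(p, s) / sobolev_ip(s, s) * s)
    sob.append(p)

gram = np.array([[sobolev_ip(p, q) for q in sob] for p in sob])
print(np.round(gram, 8))             # numerically diagonal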
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Two methods for calculating inner products of Schur functions in terms of outer products and plethysms are given; both are easy to implement on a machine. One of them is derived from a recent analysis of the SO(8) proton-neutron pairing model of atomic nuclei. Together, the two methods allow inner products of Schur functions of degree up to 20, and even beyond, to be generated.
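The abstract's methods go through outer products and plethysms; as a point of comparison, here is a brute-force baseline (not the paper's algorithm) that computes the same inner-product coefficients from symmetric-group characters via the Murnaghan-Nakayama rule. It is fine for small degree but does not scale to degree 20.

```python
from collections import Counter
from fractions import Fraction
from functools import lru_cache
from math import factorial

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if n == 0:
        yield ()
        return
    if max_part is None or max_part > n:
        max_part = n
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def zee(rho):
    """Size of the centralizer of a permutation with cycle type rho."""
    z = 1
    for part, mult in Counter(rho).items():
        z *= part ** mult * factorial(mult)
    return z

@lru_cache(maxsize=None)
def chi(lam, rho):
    """Murnaghan-Nakayama rule: character chi^lam at cycle type rho."""
    if not rho:
        return 1 if not lam else 0
    r, rest = rho[0], rho[1:]
    k = len(lam)
    beta = [lam[i] + k - 1 - i for i in range(k)]    # distinct beta-numbers
    total, bset = 0, set(beta)
    for b in beta:
        nb = b - r
        if nb < 0 or nb in bset:
            continue                                 # no rim hook to remove here
        height = sum(1 for c in beta if nb < c < b)  # leg length of the rim hook
        nbeta = sorted(bset - {b} | {nb}, reverse=True)
        m = len(nbeta)
        nlam = tuple(p for p in (c - (m - 1 - j)
                                 for j, c in enumerate(nbeta)) if p)
        total += (-1) ** height * chi(nlam, rest)
    return total

def inner_product_coeff(lam, mu, nu):
    """Coefficient of s_nu in the inner (Kronecker) product s_lam * s_mu."""
    g = sum(Fraction(chi(lam, r) * chi(mu, r) * chi(nu, r), zee(r))
            for r in partitions(sum(lam)))
    return int(g)

# s_(2,1) * s_(2,1) = s_(3) + s_(2,1) + s_(1,1,1)
for nu in partitions(3):
    print(nu, inner_product_coeff((2, 1), (2, 1), nu))
```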
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Mathematical models are often used to describe physical realities. However, physical realities are imprecise, while mathematical concepts are required to be precise and perfect. Even mathematicians like H. Poincaré worried about this. He observed that mathematical models are over-idealizations; for instance, he said that only in mathematics is equality a transitive relation. A first attempt to remedy this situation was perhaps made by K. Menger in 1951, who introduced the concept of a statistical metric space, in which the distance between points is a probability distribution on the set of nonnegative real numbers rather than a mere nonnegative real number. Other attempts were made by M.J. Frank, U. Höhle, B. Schweizer, A. Sklar and others. An aspect common to all these approaches is that they model imprecision in a probabilistic manner; they cannot deal with situations in which the imprecision is not apparently of a probabilistic nature. This thesis is confined to introducing and developing a theory of fuzzy semi-inner-product spaces.
Abstract:
Mathematical models are often used to describe physical realities. However, physical realities are imprecise, while mathematical concepts are required to be precise and perfect. The first chapter gives a brief summary of the arithmetic of fuzzy real numbers and the fuzzy normed algebra M(I), and explains a few preliminary definitions and results required in the later chapters. Fuzzy real numbers were introduced by Hutton, B. [HU] and Rodabaugh, S.E. [ROD]. Our definition differs slightly from theirs, with an additional minor restriction; the definition of Clementina Felbin [CL1] is entirely different. The notations of [HU] and [M;Y] are retained in spite of the slight difference in concept. In the third chapter, using the completion M'(I) of M(I), we give a fuzzy extension of the real Hahn-Banach theorem, and some consequences of this extension are obtained. The idea of a real fuzzy linear functional on a fuzzy normed linear space is introduced and some of its properties are studied. In the complex case we obtain only a slightly weaker analogue of the Hahn-Banach theorem than the one [B;N] in the crisp case.
Asymptotics for Jacobi-Sobolev orthogonal polynomials associated with non-coherent pairs of measures
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space - classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm - using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
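A minimal cvxpy sketch of the transductive recipe (the data, the three candidate kernels, and the alignment objective are illustrative assumptions; the paper develops several SDP formulations, including margin-based ones): learn mixing weights over candidate kernels computed on train plus test points by maximizing alignment with the training labels, subject to the mixture being positive semidefinite with fixed trace.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
Xtr = np.r_[rng.normal(-1, 0.5, (10, 2)), rng.normal(1, 0.5, (10, 2))]
ytr = np.r_[-np.ones(10), np.ones(10)]
Xte = rng.normal(0, 1.0, (5, 2))           # unlabeled test points
X, ntr = np.r_[Xtr, Xte], len(Xtr)         # training points come first

def rbf(X, s):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

# Candidate kernels over train + test points, symmetrized against round-off.
Ks = [X @ X.T, rbf(X, 0.5), rbf(X, 2.0)]
Ks = [(K + K.T) / 2 for K in Ks]

mu = cp.Variable(len(Ks))                  # mixing weights (may be negative)
K = sum(mu[i] * Ki for i, Ki in enumerate(Ks))
align = cp.sum(cp.multiply(K[:ntr, :ntr], np.outer(ytr, ytr)))  # <K_tr, yy^T>
prob = cp.Problem(cp.Maximize(align),
                  [K >> 0, cp.trace(K) == len(X)])  # SDP: K psd, fixed trace
prob.solve(solver=cp.SCS)
print("mixing weights:", np.round(mu.value, 3))
```

Because the weights are unconstrained in sign, the constraint K >> 0 is what makes this a genuine semidefinite program rather than a linear program over nonnegative mixtures.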
Abstract:
This dissertation reformulates and streamlines the core tools of robustness analysis for linear time invariant systems using now-standard methods in convex optimization. In particular, robust performance analysis can be formulated as a primal convex optimization in the form of a semidefinite program using a semidefinite representation of a set of Gramians. The same approach with semidefinite programming duality is applied to develop a linear matrix inequality test for well-connectedness analysis, and many existing results such as the Kalman-Yakubovich-Popov lemma and various scaled small gain tests are derived in an elegant fashion. More importantly, unlike the classical approach, a decision variable in this novel optimization framework contains all inner products of signals in a system, and an algorithm for constructing an input and state pair of a system corresponding to the optimal solution of robustness optimization is presented based on this information. This insight may open up new research directions, and as one such example, this dissertation proposes a semidefinite programming relaxation of a cardinality constrained variant of the H∞ norm, which we term sparse H∞ analysis, where an adversarial disturbance can use only a limited number of channels. Finally, sparse H∞ analysis is applied to the linearized swing dynamics in order to detect potential vulnerable spots in power networks.
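To make the flavour concrete, here is a hedged sketch (toy system, not from the dissertation) of the best-known member of this family: the bounded real lemma turns the H∞ norm into the optimal value of a semidefinite program, which the Gramian/duality viewpoint above re-derives and extends.

```python
import numpy as np
import cvxpy as cp

# A toy stable system (illustrative values).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, m, p = 2, 1, 1

# Bounded real lemma: ||G||_inf < gamma iff some P > 0 makes this LMI < 0.
P = cp.Variable((n, n), symmetric=True)
gam = cp.Variable()
lmi = cp.bmat([
    [A.T @ P + P @ A, P @ B,            C.T],
    [B.T @ P,         -gam * np.eye(m), D.T],
    [C,               D,                -gam * np.eye(p)],
])
prob = cp.Problem(cp.Minimize(gam), [P >> 0, lmi << 0])
prob.solve(solver=cp.SCS)
print("H-infinity norm via LMI:", gam.value)

# Sanity check: largest gain of C (jwI - A)^{-1} B + D on a frequency grid.
w = np.logspace(-2, 2, 400)
gains = [np.linalg.norm(C @ np.linalg.solve(1j * wi * np.eye(n) - A, B) + D, 2)
         for wi in w]
print("max gain on grid       :", max(gains))
```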
Abstract:
The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.
The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
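A toy instance of the group construction (the group and subgroups below are illustrative assumptions; being abelian, they cannot violate Ingleton, and the violations discussed above require nonabelian groups, with the smallest examples reported in the literature living inside S5): each subgroup G_i of a finite group G induces a random variable, and the joint entropies are H(X_S) = log(|G| / |intersection of the G_i, i in S|).

```python
from math import log2

# Group G = Z2 x Z2 (written additively as pairs mod 2) with four subgroups.
G = {(a, b) for a in range(2) for b in range(2)}
subgroups = {
    1: {(0, 0), (1, 0)},
    2: {(0, 0), (0, 1)},
    3: {(0, 0), (1, 1)},
    4: G,
}

def H(S):
    """Entropy of X_S in the group characterization: log |G|/|âˆ©_{i in S} G_i|."""
    inter = set(G)
    for i in S:
        inter &= subgroups[i]
    return log2(len(G) / len(inter))

# Ingleton in entropy form: I(X1;X2) <= I(X1;X2|X3) + I(X1;X2|X4) + I(X3;X4).
I12   = H({1}) + H({2}) - H({1, 2})
I12_3 = H({1, 3}) + H({2, 3}) - H({1, 2, 3}) - H({3})
I12_4 = H({1, 4}) + H({2, 4}) - H({1, 2, 4}) - H({4})
I34   = H({3}) + H({4}) - H({3, 4})
print("Ingleton gap:", I12_3 + I12_4 + I34 - I12, "(>= 0 for abelian groups)")
```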
The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
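A concrete miniature of the "subset of rows of a group Fourier matrix" idea for the cyclic group Z_7 (the row choice is the classical quadratic-residue difference set, a standard equiangular example rather than anything specific to the thesis): the columns of the kept rows form 7 unit vectors in C^3 whose mutual inner products all have the same magnitude, meeting the Welch lower bound on coherence.

```python
import numpy as np

n, rows = 7, [1, 2, 4]                       # quadratic residues mod 7
F = np.exp(-2j * np.pi * np.outer(range(n), range(n)) / n)  # Fourier matrix of Z_7
frame = F[rows, :] / np.sqrt(len(rows))      # columns = unit-norm frame vectors

gram = frame.conj().T @ frame                # all pairwise inner products
off = np.abs(gram - np.diag(np.diag(gram)))
print("coherence  :", off.max())             # equiangular: one common value
print("Welch bound:", np.sqrt((n - len(rows)) / (len(rows) * (n - 1))))
```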
The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
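For context on the last point, a brute-force check (toy parameters, pure Python) that a Reed-Solomon code is MDS, i.e. its minimum distance meets the Singleton bound d = n - k + 1; the bounds in the thesis concern what survives of this distance once parity symbols are constrained to depend only on subsets of the message symbols.

```python
from itertools import product

# A Reed-Solomon code over the prime field GF(7), small enough to brute-force:
# length n = 6, dimension k = 3, evaluation points 1..6.
p, n, k = 7, 6, 3
alphas = list(range(1, n + 1))

def encode(msg):
    """Evaluate the degree-<k message polynomial at the alphas (mod p)."""
    return tuple(sum(c * a ** i for i, c in enumerate(msg)) % p
                 for a in alphas)

# Minimum distance = minimum Hamming weight over all nonzero codewords.
dmin = min(sum(s != 0 for s in encode(msg))
           for msg in product(range(p), repeat=k) if any(msg))
print("minimum distance:", dmin, "| Singleton bound:", n - k + 1)
```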