860 results for Associative Algebras With Polynomial Identities
Abstract:
In this paper we consider polynomial representability of functions defined over Z_{p^n}, where p is a prime and n is a positive integer. Our aim is to provide an algorithmic characterization that (i) answers the decision problem: to determine whether a given function over Z_{p^n} is polynomially representable or not, and (ii) finds the polynomial if it is polynomially representable. The previous characterizations given by Kempner (Trans. Am. Math. Soc. 22(2):240-266, 1921) and Carlitz (Acta Arith. 9(1):67-78, 1964) are existential in nature and only lead to an exhaustive search method, i.e., an algorithm with complexity exponential in the size of the input. Our characterization leads to an algorithm whose running time is linear in the size of the input. We also extend our result to the multivariate case.
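The paper's linear-time characterization is not reproduced in the abstract; for orientation, here is a minimal sketch of the exhaustive-search baseline it contrasts against, for a univariate function over Z_{p^n}. The degree bound max_deg is a caller-supplied assumption (Kempner's theory gives a canonical bound, which is not derived here).

```python
from itertools import product

def poly_eval(coeffs, x, m):
    """Evaluate sum(coeffs[i] * x**i) mod m via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % m
    return acc

def find_representing_poly(f, p, n, max_deg):
    """Exhaustive search for a polynomial representing f over Z_{p^n}.
    f       -- mapping from each x in Z_{p^n} to f(x)
    max_deg -- degree bound to search up to (assumed, not derived)
    Returns a coefficient tuple (c_0, ..., c_d) or None."""
    m = p ** n
    for d in range(max_deg + 1):
        for coeffs in product(range(m), repeat=d + 1):
            if all(poly_eval(coeffs, x, m) == f[x] for x in range(m)):
                return coeffs
    return None

# Example: f(x) = x^2 + 1 over Z_4 (p = 2, n = 2) is trivially representable.
f = {x: (x * x + 1) % 4 for x in range(4)}
print(find_representing_poly(f, 2, 2, max_deg=2))  # (1, 0, 1)
```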
Abstract:
In this paper, we present novel precoding methods for multiuser Rayleigh fading multiple-input-multiple-output (MIMO) systems when channel state information (CSI) is available at the transmitter (CSIT) but not at the receiver (CSIR). Such a scenario is relevant, for example, in time-division duplex (TDD) MIMO communications, where, due to channel reciprocity, CSIT can be directly acquired by sending a training sequence from the receiver to the transmitter(s). We propose three transmit precoding schemes that convert the fading MIMO channel into a fixed-gain additive white Gaussian noise (AWGN) channel while satisfying an average power constraint. We also extend one of the precoding schemes to the multiuser Rayleigh fading multiple-access channel (MAC), broadcast channel (BC), and interference channel (IC). The proposed schemes convert the fading MIMO channel into fixed-gain parallel AWGN channels in all three cases. Hence, they achieve an infinite diversity order, which is in sharp contrast to schemes based on perfect CSIR and no CSIT, which, at best, achieve a finite diversity order. Further, we show that a polynomial diversity order is retained, even in the presence of channel estimation errors at the transmitter. Monte Carlo simulations illustrate the bit error rate (BER) performance obtainable from the proposed precoding scheme compared with existing transmit precoding schemes.
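The paper's three precoder constructions are not reproduced here; the following is a minimal Monte Carlo sketch of the underlying idea in the scalar (SISO) case: truncated channel inversion under CSIT, which turns a known Rayleigh fading coefficient into a fixed-gain AWGN channel needing no CSIR at the detector. The gain g, inversion threshold, and noise level are illustrative assumptions, not the paper's schemes.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000            # Monte Carlo symbols
g = 1.0                # target fixed channel gain (assumed)
thresh = 0.3           # inversion threshold on |h| (assumed; plain inversion
                       # has unbounded average power under Rayleigh fading)
noise_sd = 0.5

bits = rng.integers(0, 2, N)
s = 2.0 * bits - 1.0                     # BPSK symbols
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

ok = np.abs(h) >= thresh                 # transmit only on invertible channels
x = np.zeros(N, dtype=complex)
x[ok] = g * s[ok] / h[ok]                # channel inversion at the transmitter

w = noise_sd * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
y = h * x + w                            # receiver sees y = g*s + w: fixed-gain AWGN

bits_hat = (y.real >= 0).astype(int)     # coherent detection without CSIR
print("BER:", np.mean(bits_hat[ok] != bits[ok]))
print("avg transmit power:", np.mean(np.abs(x[ok]) ** 2))
```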
Abstract:
We study the problem of finding small s-t separators that induce graphs having certain properties. It is known that finding a minimum clique s-t separator is polynomial-time solvable (Tarjan in Discrete Math. 55:221-232, 1985), while for example the problems of finding a minimum s-t separator that induces a connected graph or forms an independent set are fixed-parameter tractable when parameterized by the size of the separator (Marx et al. in ACM Trans. Algorithms 9(4):30, 2013). Motivated by these results, we study properties that generalize cliques, independent sets, and connected graphs, and determine the complexity of finding separators satisfying these properties. We investigate these problems also on bounded-degree graphs. Our results are as follows: Finding a minimum c-connected s-t separator is FPT for c = 2 and W[1]-hard for any c ≥ 3. Finding a minimum s-t separator with diameter at most d is W[1]-hard for any d ≥ 2. Finding a minimum r-regular s-t separator is W[1]-hard for any r ≥ 1. For any decidable graph property, finding a minimum s-t separator with this property is FPT parameterized jointly by the size of the separator and the maximum degree. Finding a connected s-t separator of minimum size does not have a polynomial kernel, even when restricted to graphs of maximum degree at most 3, unless NP ⊆ coNP/poly.
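The unconstrained version of the central object here, a minimum s-t vertex separator with no property constraint, is classical and computable via max-flow; a small networkx sketch for orientation (the property-constrained variants studied in the paper are exactly what this baseline does not handle):

```python
import networkx as nx

# A small graph with two internally disjoint s-t paths plus a chord.
G = nx.Graph()
G.add_edges_from([
    ("s", "a"), ("a", "b"), ("b", "t"),
    ("s", "c"), ("c", "d"), ("d", "t"),
    ("a", "d"),
])

# Minimum s-t vertex separator via max-flow/min-cut duality.
sep = nx.minimum_node_cut(G, "s", "t")
print(sep)  # e.g. {'a', 'c'}

# Sanity check: removing the separator indeed disconnects s and t.
H = G.copy()
H.remove_nodes_from(sep)
assert not nx.has_path(H, "s", "t")
```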
Abstract:
The 3-Hitting Set problem involves a family F of subsets of size at most three over a universe U. The goal is to find a subset of U of the smallest possible size that intersects every set in F. The version of the problem with parity constraints asks for a subset S of size at most k that, in addition to being a hitting set, also satisfies certain parity constraints on the sizes of the intersections of S with each set in the family F. In particular, an odd (even) set is a hitting set that hits every set in either one or three (two) elements, and a perfect code is a hitting set that intersects every set in exactly one element. These questions are of fundamental interest in many contexts for general set systems. Just as for Hitting Set, we find these questions to be interesting for the case of families consisting of sets of size at most three. In this work, we initiate an algorithmic study of these problems in this special case, focusing on a parameterized analysis. We show, for each problem, efficient fixed-parameter tractable algorithms using search trees that are tailor-made to the constraints in question, and also polynomial kernels using sunflower-like arguments in a manner that accounts for equivalence under the additional parity constraints.
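The parity-aware search trees of the paper are tailor-made; as a baseline, here is the classic O(3^k) branching for plain 3-Hitting Set, which the parameterized analysis builds on (a sketch, without the parity constraints):

```python
def hitting_set(sets, k):
    """Classic O(3^k) search tree for 3-Hitting Set: some element of any
    unhit set must be in the solution, so branch on its <= 3 elements."""
    if not sets:
        return set()                 # every set is already hit
    if k == 0:
        return None                  # budget exhausted but sets remain
    S = next(iter(sets))
    for v in S:
        rest = [T for T in sets if v not in T]   # sets hit by v disappear
        sol = hitting_set(rest, k - 1)
        if sol is not None:
            return sol | {v}
    return None

family = [{1, 2, 3}, {2, 4, 5}, {3, 5, 6}, {1, 6, 7}]
print(hitting_set(family, k=2))      # e.g. {1, 5}
```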
Abstract:
The Exact Cover problem takes a universe U of n elements, a family F of m subsets of U and a positive integer k, and decides whether there exists a subfamily (set cover) F' of size at most k such that each element is covered by exactly one set. The Unique Cover problem also takes the same input and decides whether there is a subfamily F' ⊆ F such that at least k of the elements F' covers are covered uniquely (by exactly one set). Both these problems are known to be NP-complete. In the parameterized setting, when parameterized by k, Exact Cover is W[1]-hard. While Unique Cover is FPT under the same parameter, it is known to not admit a polynomial kernel under standard complexity-theoretic assumptions. In this paper, we investigate these two problems under the assumption that every set satisfies a given geometric property Π. Specifically, we consider the universe to be a set of n points in a real space R^d, d being a positive integer. When d = 2 we consider the problem when Π requires all sets to be unit squares or lines. When d > 2, we consider the problem where Π requires all sets to be hyperplanes in R^d. These special versions of the problems are also known to be NP-complete. When parameterizing by k, the Unique Cover problem has a polynomial size kernel for all the above geometric versions. The Exact Cover problem turns out to be W[1]-hard for squares, but FPT for lines and hyperplanes. Further, we also consider the Unique Set Cover problem, which takes the same input and decides whether there is a set cover which covers at least k elements uniquely. To the best of our knowledge, this is a new problem, and we show that it is NP-complete (even for the case of lines). In fact, the problem turns out to be W[1]-hard in the abstract setting, when parameterized by k. However, when we restrict ourselves to the lines and hyperplanes versions, we obtain FPT algorithms.
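To make the problem statement concrete, here is a brute-force decision sketch for Exact Cover in the abstract setting (illustration only; the problem is W[1]-hard in k in general, so no fixed-parameter trick is claimed):

```python
from itertools import combinations

def exact_cover(universe, family, k):
    """Decide Exact Cover by brute force over subfamilies of size <= k."""
    universe = set(universe)
    for r in range(1, k + 1):
        for sub in combinations(family, r):
            covered = [x for S in sub for x in S]
            # each element covered exactly once <=> no duplicates and full cover
            if len(covered) == len(set(covered)) and set(covered) == universe:
                return sub
    return None

U = range(1, 7)
F = [{1, 2}, {3, 4}, {5, 6}, {1, 3, 5}, {2, 4, 6}]
print(exact_cover(U, F, k=2))        # ({1, 3, 5}, {2, 4, 6})
```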
Abstract:
The spatial variability that exists in pavement systems can be conveniently represented by means of random fields; in this study, a probabilistic analysis that considers the spatial variability, including the anisotropic nature of the pavement layer properties, is presented. The integration of the spatially varying log-normal random fields into a linear-elastic finite difference analysis has been achieved through the expansion optimal linear estimation method. For the estimation of the critical pavement responses, metamodels based on polynomial chaos expansion (PCE) are developed to replace the computationally expensive finite-difference model. A sparse polynomial chaos expansion, based on an adaptive regression algorithm and enhanced by the combined use of global sensitivity analysis (GSA), is used, with significant savings in computational effort. The effect of anisotropy in each layer on the pavement responses was studied separately, and an effort is made to identify the pavement layer wherein the introduction of anisotropic characteristics results in the most significant impact on the critical strains. It is observed that the anisotropy in the base layer has a significant but diverse effect on the two critical strains: while the compressive strain tends to be considerably higher than that observed for the isotropic section, the tensile strains show a decrease in the mean value with the introduction of base-layer anisotropy. Furthermore, asphalt-layer anisotropy also tends to decrease the critical tensile strain while having little effect on the critical compressive strain. (C) 2015 American Society of Civil Engineers.
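The paper's adaptive sparse PCE with GSA enhancement is not reproduced here; below is a minimal sketch of the regression-based PCE backbone it builds on: a total-degree Hermite basis fit by least squares to a stand-in "expensive model" (the toy function, sample sizes, and degree are assumptions):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def expensive_model(x):
    # Stand-in for the costly finite-difference pavement model (toy function).
    return np.exp(0.3 * x[:, 0]) * np.sin(x[:, 1]) + 0.5 * x[:, 0] * x[:, 1]

def herm(n, x):
    """Probabilists' Hermite polynomial He_n(x)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)

p = 4                                # total polynomial degree (assumed)
index_set = [(i, j) for i in range(p + 1) for j in range(p + 1) if i + j <= p]

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 2))    # training designs in standard-normal space
y = expensive_model(X)

# Least-squares regression of the model onto the tensor Hermite basis.
A = np.column_stack([herm(i, X[:, 0]) * herm(j, X[:, 1]) for i, j in index_set])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Validate the cheap metamodel against the "expensive" model on fresh samples.
Xt = rng.standard_normal((1000, 2))
At = np.column_stack([herm(i, Xt[:, 0]) * herm(j, Xt[:, 1]) for i, j in index_set])
yt = expensive_model(Xt)
print("relative error:", np.linalg.norm(At @ coef - yt) / np.linalg.norm(yt))
```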
Abstract:
The performance of two curved beam finite element models based on coupled polynomial displacement fields is investigated for out-of-plane vibration of arches. These two-noded beam models employ curvilinear strain definitions and have three degrees of freedom per node, namely, out-of-plane translation (v), out-of-plane bending rotation (θ_z) and torsion rotation (θ_s). The coupled polynomial interpolation fields are derived independently for Timoshenko and Euler-Bernoulli beam elements using the force-moment equilibrium equations. The numerical performance of these elements for constrained and unconstrained arches is compared with that of conventional curved beam models based on independent polynomial fields. The formulation is shown to be free from any spurious constraints in the limits of 'flexureless torsion' and 'torsionless flexure' and hence devoid of flexure and torsion locking. The resulting stiffness and consistent mass matrices generated from the coupled displacement models show excellent convergence of natural frequencies in locking regimes. The accuracy of the shear flexibility added to the elements is also demonstrated. The coupled polynomial models are shown to perform consistently over a wide range of flexure-to-shear (EI/GA) and flexure-to-torsion (EI/GJ) stiffness ratios and are inherently devoid of flexure, torsion and shear locking phenomena. (C) 2015 Elsevier B.V. All rights reserved.
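The element formulation itself is beyond a short sketch, but the final step behind the reported natural-frequency convergence, once the stiffness and consistent mass matrices K and M are assembled, is the generalized eigenproblem K φ = ω² M φ. A toy 3-DOF illustration (matrices assumed, not from the paper):

```python
import numpy as np
from scipy.linalg import eigh

# Toy 3-DOF matrices standing in for assembled element matrices.
K = np.array([[ 4., -2.,  0.],
              [-2.,  4., -2.],
              [ 0., -2.,  2.]])      # stiffness (symmetric positive definite)
M = np.diag([1.0, 1.0, 0.5])        # consistent/lumped mass

# Generalized eigenproblem K phi = omega^2 M phi.
omega2, phi = eigh(K, M)
print("natural frequencies:", np.sqrt(omega2))
```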
Abstract:
In this paper an explicit guidance law for the powered descent phase of a soft lunar landing is presented. The descent trajectory, expressed in polynomial form, is fixed based on the boundary conditions imposed by the precise soft-landing mission. Adopting an inverse-model-based approach, the guidance command is computed from the known spacecraft trajectory. The guidance formulation ensures the vertical orientation of the spacecraft during touchdown. Also, a closed-form relation for the final flight time is proposed. The final time is expressed as a function of the initial position and velocity of the spacecraft (at the start of descent) and also depends on the desired landing site. To ensure a fuel-minimal descent, the proposed explicit method is extended to an optimal guidance formulation. The effectiveness of the proposed guidance laws is demonstrated with simulation results.
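The paper's specific polynomial and its closed-form final-time relation are not reproduced; as an illustration of the general construction (fixing a polynomial trajectory from soft-landing boundary conditions), here is a quartic, per-axis sketch in which the boundary values, polynomial order, and flight time are all assumed:

```python
import numpy as np

def descent_poly(r0, v0, rf, vf, af, tf):
    """Coefficients c of p(t) = sum c_i t^i (one axis) satisfying
    p(0)=r0, p'(0)=v0, p(tf)=rf, p'(tf)=vf, p''(tf)=af."""
    A = np.array([
        [1.0, 0.0, 0.0,     0.0,      0.0      ],  # p(0)
        [0.0, 1.0, 0.0,     0.0,      0.0      ],  # p'(0)
        [1.0, tf,  tf**2,   tf**3,    tf**4    ],  # p(tf)
        [0.0, 1.0, 2*tf,    3*tf**2,  4*tf**3  ],  # p'(tf)
        [0.0, 0.0, 2.0,     6*tf,     12*tf**2 ],  # p''(tf)
    ])
    return np.linalg.solve(A, np.array([r0, v0, rf, vf, af], dtype=float))

# Vertical axis: 2000 m altitude, -30 m/s descent rate, touch down at rest
# with zero final acceleration after tf = 100 s (all values assumed).
c = descent_poly(2000.0, -30.0, 0.0, 0.0, 0.0, 100.0)
t = np.linspace(0.0, 100.0, 6)
print(np.polyval(c[::-1], t))        # altitude along the descent
```

The guidance command in an inverse-model approach would then be read off the trajectory, e.g. as p''(t) per axis.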
Abstract:
The bilateral filter is a versatile non-linear filter that has found diverse applications in image processing, computer vision, computer graphics, and computational photography. A common form of the filter is the Gaussian bilateral filter, in which both the spatial and range kernels are Gaussian. A direct implementation of this filter requires O(σ²) operations per pixel, where σ is the standard deviation of the spatial Gaussian. In this paper, we propose an accurate approximation algorithm that can cut down the computational complexity to O(1) per pixel for arbitrary σ (constant-time implementation). This is based on the observation that the range kernel operates via translations of a fixed Gaussian over the range space, and that these translated Gaussians can be accurately approximated using the so-called Gauss-polynomials. The overall algorithm emerging from this approximation involves a series of spatial Gaussian filterings, which can be efficiently implemented (in parallel) using separability and recursion. We present some preliminary results to demonstrate that the proposed algorithm compares favorably with some of the existing fast algorithms in terms of speed and accuracy.
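A simplified reading of the idea, not the paper's exact algorithm: factor the Gaussian range kernel as exp(-(t-τ)²/2σ_r²) = exp(-t²/2σ_r²)·exp(-τ²/2σ_r²)·exp(tτ/σ_r²) and Taylor-expand the last factor, so the filter reduces to a series of spatial Gaussian filterings of intensity powers. The centering trick, truncation order N, and parameter values below are assumptions; accuracy depends on N and σ_r.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bilateral_gauss_poly(img, sigma_s, sigma_r, N=25):
    """Sketch of a constant-time bilateral filter via a polynomial
    factorization of the Gaussian range kernel; img assumed in [0, 1]."""
    f = img - 0.5                       # center intensities to aid convergence
    h = np.exp(-f**2 / (2 * sigma_r**2))
    num = np.zeros_like(f)
    den = np.zeros_like(f)
    coef = np.ones_like(f)              # f(i)^n / (sigma_r^(2n) * n!) per pixel
    Fn = h.copy()                       # h * f^n, starting at n = 0
    for n in range(N + 1):
        den += coef * gaussian_filter(Fn, sigma_s)       # G * (h f^n)
        num += coef * gaussian_filter(Fn * f, sigma_s)   # G * (h f^(n+1))
        coef *= f / (sigma_r**2 * (n + 1))               # next Taylor term
        Fn *= f
    return num / den + 0.5              # undo the centering

rng = np.random.default_rng(0)
img = np.clip(rng.normal(0.5, 0.15, (64, 64)), 0.0, 1.0)
out = bilateral_gauss_poly(img, sigma_s=3.0, sigma_r=0.2)
```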
Abstract:
In this paper we introduce a weighted complex networks model to investigate and recognize structures of patterns. The usual treatment in pattern recognition models is to describe each pattern as a high-dimensional vector, which, however, is insufficient to express structural information. Thus, a number of methods have been developed to extract structural information, such as the feature extraction algorithms used in pre-processing steps, or the local receptive fields in convolutional networks. In our model, each pattern is attributed to a weighted complex network, whose topology represents the structure of that pattern. Based upon the training samples, we obtain several prototypal complex networks that stand for the general structural characteristics of patterns in different categories. We use these prototypal networks to recognize unknown patterns. This is an attempt to use complex networks in pattern recognition, and our results show the potential for real-world pattern recognition. A spatial parameter is introduced to obtain the optimal recognition accuracy, and it remains essentially constant, insensitive to the number of training samples. We discuss the interesting properties of the prototypal networks. An approximate linear relation is found between the strength and color of vertices, through which the structural difference between categories can be compared. We visualize these prototypal networks to show that their topology indeed represents the common characteristics of patterns. We also show that the asymmetric strength distribution in these prototypal networks brings high robustness for recognition. Our study may also shed light on understanding the mechanism of biological neuronal systems in object recognition.
Abstract:
This thesis addresses whether it is possible to build a robust memory device for quantum information. Many schemes for fault-tolerant quantum information processing have been developed so far, one of which, called topological quantum computation, makes use of degrees of freedom that are inherently insensitive to local errors. However, this scheme is not so reliable against thermal errors. Other fault-tolerant schemes achieve better reliability through active error correction, but incur a substantial overhead cost. Thus, it is of practical importance and theoretical interest to design and assess fault-tolerant schemes that work well at finite temperature without active error correction.
In this thesis, a three-dimensional gapped lattice spin model is found which demonstrates for the first time that a reliable quantum memory at finite temperature is possible, at least to some extent. When quantum information is encoded into a highly entangled ground state of this model and subjected to thermal errors, the errors remain easily correctable for a long time without any active intervention, because a macroscopic energy barrier keeps the errors well localized. As a result, stored quantum information can be retrieved faithfully for a memory time which grows exponentially with the square of the inverse temperature. In contrast, for previously known types of topological quantum storage in three or fewer spatial dimensions the memory time scales exponentially with the inverse temperature, rather than its square.
This spin model exhibits a previously unexpected topological quantum order, in which ground states are locally indistinguishable, pointlike excitations are immobile, and the immobility is not affected by small perturbations of the Hamiltonian. The degeneracy of the ground state, though also insensitive to perturbations, is a complicated number-theoretic function of the system size, and the system bifurcates into multiple noninteracting copies of itself under real-space renormalization group transformations. The degeneracy, the excitations, and the renormalization group flow can be analyzed using a framework that exploits the spin model's symmetry and some associated free resolutions of modules over polynomial algebras.
Abstract:
This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.
The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.
The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.
The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
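The thesis's MILP encoding of full LTL is not reproduced here; to show the flavor of "temporal logic as mixed-integer constraints", below is the standard big-M encoding of a single always-avoid/eventually-reach specification for a toy discrete-time system, using PuLP. The dynamics, horizon, goal, obstacle, and big-M constant are all illustrative assumptions.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD

T, M = 12, 100.0                                   # horizon and big-M (assumed)
box = dict(xmin=2.0, xmax=4.0, ymin=2.0, ymax=4.0) # region to avoid at all times

prob = LpProblem("ltl_as_milp", LpMinimize)
x = [LpVariable(f"x{t}", -10, 10) for t in range(T + 1)]
y = [LpVariable(f"y{t}", -10, 10) for t in range(T + 1)]
dx = [LpVariable(f"dx{t}", 0) for t in range(T)]   # |step| upper bounds
dy = [LpVariable(f"dy{t}", 0) for t in range(T)]
prob += lpSum(dx) + lpSum(dy)                      # minimize L1 path length

prob += x[0] == 0
prob += y[0] == 0
prob += x[T] == 6                                  # "eventually reach the goal"
prob += y[T] == 6

for t in range(T):
    for p, dp in ((x, dx), (y, dy)):               # bounded single-integrator steps
        prob += p[t + 1] - p[t] <= 1
        prob += p[t] - p[t + 1] <= 1
        prob += dp[t] >= p[t + 1] - p[t]
        prob += dp[t] >= p[t] - p[t + 1]

for t in range(T + 1):                             # "always avoid the box"
    b = [LpVariable(f"b{t}_{i}", cat="Binary") for i in range(4)]
    prob += x[t] <= box["xmin"] + M * (1 - b[0])   # left of the box, or
    prob += x[t] >= box["xmax"] - M * (1 - b[1])   # right of it, or
    prob += y[t] <= box["ymin"] + M * (1 - b[2])   # below it, or
    prob += y[t] >= box["ymax"] - M * (1 - b[3])   # above it
    prob += lpSum(b) >= 1                          # at least one disjunct holds

prob.solve(PULP_CBC_CMD(msg=0))
print([(x[t].value(), y[t].value()) for t in range(T + 1)])
```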
Abstract:
This thesis consists of two independent chapters. The first chapter deals with universal algebra. It is shown, in von Neumann-Bernays-Gödel set theory, that free images of partial algebras exist in arbitrary varieties. It follows from this, as set-complete Boolean algebras form a variety, that there exist free set-complete Boolean algebras on any class of generators. This appears to contradict a well-known result of A. Hales and H. Gaifman, stating that there is no complete Boolean algebra on any infinite set of generators. However, it does not, as the algebras constructed in this chapter are allowed to be proper classes. The second chapter deals with positive elementary inductions. It is shown that, in any reasonable structure 𝔐, the inductive closure ordinal of 𝔐 is admissible, by showing it is equal to an ordinal measuring the saturation of 𝔐. This is also used to show that non-recursively saturated models of the theories ACF, RCF, and DCF have inductive closure ordinals greater than ω.
Abstract:
In this thesis, we consider two main subjects: refined, composite invariants and exceptional knot homologies of torus knots. The main technical tools are double affine Hecke algebras ("DAHA") and various insights from topological string theory.
In particular, we define and study the composite DAHA-superpolynomials of torus knots, which depend on pairs of Young diagrams and generalize the composite HOMFLY-PT polynomials from the full HOMFLY-PT skein of the annulus. We also describe a rich structure of differentials that act on homological knot invariants for exceptional groups. These follow from the physics of BPS states and the adjacencies/spectra of singularities associated with Landau-Ginzburg potentials. At the end, we construct two DAHA-hyperpolynomials which are closely related to the Deligne-Gross exceptional series of root systems.
In addition to these main themes, we also provide new results connecting DAHA-Jones polynomials to quantum torus knot invariants for Cartan types A and D, as well as the first appearance of quantum E6 knot invariants in the literature.
Abstract:
The topography of a granite surface affects the vertical positioning of a wafer stage in a lithographic tool when the wafer stage moves on the granite. Inaccurate measurement of the topography results in poor leveling and focusing performance. In this paper, an in situ method to measure the topography of a granite surface with high accuracy is presented. In this method, a high-order polynomial is set up to express the topography of the granite surface. Two double-frequency laser interferometers are used to measure the tilts of the wafer stage in the X- and Y-directions. From the sampled tilt information, the coefficients of the high-order polynomial can be obtained by a special algorithm. Experimental results show that the measurement reproducibility of the method is better than 10 nm. (c) 2006 Elsevier GmbH. All rights reserved.
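The paper's "special algorithm" is not specified in the abstract; a plain least-squares stand-in shows how polynomial surface coefficients can be recovered from sampled X/Y tilts alone. The model order, sampling, noise level, and the identification of stage tilt with local surface slope are all assumptions of this sketch:

```python
import numpy as np

p = 4                                        # polynomial order (assumed)
terms = [(m, n) for m in range(p + 1) for n in range(p + 1)
         if 1 <= m + n <= p]                 # constant term is unobservable from tilts

def tilt_rows(xs, ys):
    """Least-squares system rows: d/dx of each basis term, stacked over d/dy."""
    gx = np.column_stack([m * xs**(m - 1) * ys**n if m else np.zeros_like(xs)
                          for m, n in terms])
    gy = np.column_stack([n * xs**m * ys**(n - 1) if n else np.zeros_like(xs)
                          for m, n in terms])
    return np.vstack([gx, gy])

rng = np.random.default_rng(2)
xs, ys = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)

c_true = rng.normal(0, 1, len(terms))        # synthetic granite topography
A = tilt_rows(xs, ys)
tilts = A @ c_true + rng.normal(0, 1e-4, 2 * len(xs))   # noisy X/Y tilt samples

c_est, *_ = np.linalg.lstsq(A, tilts, rcond=None)
print("max coefficient error:", np.abs(c_est - c_true).max())
```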