963 results for integer disaggregation


Relevance:

10.00%

Publisher:

Abstract:

In recent years, one of the main concerns of research in the field of Hypercomplex Function Theory in Clifford Algebras has been the development of a variety of new tools for a deeper understanding of its elementary roots in the Function Theory of one Complex Variable. The study of the space of monogenic (Clifford holomorphic) functions via its stratification by homogeneous monogenic polynomials is a useful tool for this purpose. In this paper we consider the structure of those polynomials of four real variables that admit a binomial expansion. This allows a complete characterization of sequences of 4D generalized monogenic Appell polynomials in terms of three different types of polynomials. A particularly important case is that of monogenic polynomials which are simply isomorphic to the integer powers of one complex variable and are therefore also called pseudo-complex powers.
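For orientation, a hedged statement of the standard Appell property in this hypercomplex setting (following the usual convention in the Clifford-analysis literature, where the hypercomplex derivative is one half of the conjugate generalized Cauchy–Riemann operator; treat the exact normalization as an assumption):

```latex
% Appell property: the hypercomplex derivative lowers the degree by one,
% mirroring d/dz (z^n) = n z^{n-1} for the pseudo-complex powers.
\tfrac{1}{2}\overline{\partial}\, P_n = n\, P_{n-1}, \qquad n \ge 1 .
```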

Relevance:

10.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

10.00%

Publisher:

Abstract:

We say that a polygon inscribed in a circle is asymmetric if no two of its vertices are antipodal, i.e., no two of them are the endpoints of a diameter. Given n diameters of a circle and a positive integer k < n, this paper addresses the problem of computing a maximum-area asymmetric k-gon whose k vertices are endpoints of the given diameters. The study of this type of polygon is motivated by ethnomusicological applications.
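As a concrete baseline (not the paper's algorithm, whose efficiency is the point of the work, and with illustrative names throughout), the following exponential-time sketch enumerates candidate vertex sets directly: picking at most one endpoint per diameter automatically rules out antipodal pairs, and the shoelace formula scores each inscribed polygon.

```python
from itertools import combinations
from math import cos, sin, pi

def polygon_area(pts):
    # shoelace formula; pts are in angular order, hence in convex position
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))

def max_asymmetric_kgon(angles, k):
    # angles: directions of the n diameters; the diameter at angle t has
    # endpoints at t and t + pi on the unit circle. Choosing at most one
    # endpoint per diameter guarantees the k-gon contains no antipodal pair.
    best = 0.0
    for chosen in combinations(range(len(angles)), k):   # which diameters
        for mask in range(2 ** k):                       # which endpoint of each
            ts = sorted(angles[d] + pi * ((mask >> i) & 1)
                        for i, d in enumerate(chosen))
            pts = [(cos(t), sin(t)) for t in ts]
            best = max(best, polygon_area(pts))
    return best

# e.g. best asymmetric triangle on 4 given diameters:
print(max_asymmetric_kgon([0.0, 0.5, 1.2, 2.1], 3))
```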

Relevance:

10.00%

Publisher:

Abstract:

This analysis paper presents previously unknown properties of some special cases of the Wright function, whose consideration is necessitated by our work on probability theory and the theory of stochastic processes. Specifically, we establish new asymptotic properties of the particular Wright function

$${}_1\Psi_1(\rho, k; \rho, 0; x) = \sum_{n=0}^{\infty} \frac{\Gamma(k + \rho n)}{\Gamma(\rho n)} \, \frac{x^n}{n!} \qquad (|x| < \infty)$$

when the parameter ρ ∈ (−1, 0) ∪ (0, ∞) and the argument x is real. In the probability theory applications, which focus on studies of Poisson–Tweedie mixtures, the parameter k is a non-negative integer. Several representations involving well-known special functions are given for certain particular values of ρ. The asymptotics of ₁Ψ₁(ρ, k; ρ, 0; x) are obtained under various assumptions on the behavior of the arguments k and x, for both positive and negative values of the parameter ρ. We also provide some integral representations and structural properties involving the 'reduced' Wright function ₀Ψ₁(−; ρ, 0; x) with ρ ∈ (−1, 0) ∪ (0, ∞), which might be useful for the derivation of new properties of members of the power-variance family of distributions. Some of these imply a reflection principle that connects the functions ₀Ψ₁(−; ±ρ, 0; ·) and certain Bessel functions. Several asymptotic relationships for both particular cases of this function are also given. A few of these follow, under additional constraints, from probability theory results which, although previously available, were unknown to analysts.
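A direct numerical check of the series definition is straightforward with arbitrary-precision arithmetic. The sketch below (illustrative only, truncation parameters chosen ad hoc) uses the reciprocal gamma function, and starts the sum at n = 1 because the n = 0 term is Γ(k)/Γ(0) = 0, since 1/Γ(0) = 0.

```python
from mpmath import mp, gamma, rgamma, factorial

mp.dps = 30  # working precision in digits

def wright_1psi1(rho, k, x, terms=200):
    """Truncated series for 1Ψ1(ρ,k; ρ,0; x) = Σ Γ(k+ρn)/Γ(ρn) · x^n/n!.

    Intended for ρ ∈ (−1, 0) ∪ (0, ∞) and integer k ≥ 0; for ρ < 0 the
    factor Γ(k + ρn) hits poles at nonpositive integers, so arguments
    would need checking in real use.
    """
    return sum(gamma(k + rho * n) * rgamma(rho * n) * mp.mpf(x) ** n / factorial(n)
               for n in range(1, terms))

# sanity check: for k = 1, Γ(1 + ρn)/Γ(ρn) = ρn, so the sum is ρ·x·e^x;
# wright_1psi1(0.5, 1, 1.0) should therefore be close to 0.5 * e ≈ 1.3591.
print(wright_1psi1(0.5, 1, 1.0))
```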

Relevance:

10.00%

Publisher:

Abstract:

Cremona developed a reduction theory for binary forms of degree 3 and 4 with integer coefficients, the motivation in the case of quartics being to improve 2-descent algorithms for elliptic curves over Q. In this paper we extend some of these results to forms of higher degree. One application of this is to the study of hyperelliptic curves.
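For readers unfamiliar with the quartic case, the reduction theory is phrased in terms of the classical invariants I and J of a binary quartic; to the best of my recollection of Cremona's normalization (treat the exact scalings as an assumption), they are as follows.

```python
def quartic_invariants(a, b, c, d, e):
    # classical invariants of a*x^4 + b*x^3*y + c*x^2*y^2 + d*x*y^3 + e*y^4
    I = 12 * a * e - 3 * b * d + c * c
    J = 72 * a * c * e + 9 * b * c * d - 27 * a * d * d - 27 * e * b * b - 2 * c ** 3
    return I, J

def quartic_discriminant(a, b, c, d, e):
    # the discriminant is (4 I^3 - J^2) / 27, exact for integer coefficients
    I, J = quartic_invariants(a, b, c, d, e)
    return (4 * I ** 3 - J ** 2) // 27

# sanity check against x^4 - y^4, whose discriminant is -256:
print(quartic_discriminant(1, 0, 0, 0, -1))
```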

Relevance:

10.00%

Publisher:

Abstract:

We present efficient algorithms for solving Legendre equations over Q (equivalently, for finding rational points on rational conics) and parametrizing all solutions. Unlike existing algorithms, no integer factorization is required, provided that the prime factors of the discriminant are known.
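The equation in question is a·x² + b·y² + c·z² = 0, to be solved in integers not all zero. A naive search (nothing like the factorization-free algorithm of the paper, just a sanity-check baseline with an invented bound) looks like this; since only squares enter, it suffices to search nonnegative coordinates when the signs of a, b, c are mixed.

```python
from itertools import product

def legendre_bruteforce(a, b, c, bound=50):
    # Find a nontrivial integer solution of a*x^2 + b*y^2 + c*z^2 = 0,
    # if one exists with all coordinates in [0, bound).
    for x, y, z in product(range(bound), repeat=3):
        if (x, y, z) != (0, 0, 0) and a * x * x + b * y * y + c * z * z == 0:
            return x, y, z
    return None

# e.g. x^2 + y^2 - 2 z^2 = 0 has the solution (1, 1, 1)
print(legendre_bruteforce(1, 1, -2))
```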

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present a new numerical method to solve fractional differential equations (FDEs). Given a fractional derivative of arbitrary real order, we present an approximation formula for the fractional operator that involves integer-order derivatives only. With this, we can rewrite an FDE as a classical differential equation and then apply any known technique. Several examples illustrate the accuracy of the method.
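As context, the most common discrete approximation of a fractional derivative is the Grünwald–Letnikov sum over history values of the function; the paper's contribution is precisely to avoid this kind of formula in favor of one involving only integer-order derivatives. A minimal sketch of that standard baseline (not the authors' expansion):

```python
def gl_fractional_derivative(f, alpha, t, h=1e-3):
    # Truncated Grünwald–Letnikov sum: D^alpha f(t) ≈ h^(-alpha) *
    # sum_{j=0}^{t/h} (-1)^j C(alpha, j) f(t - j*h), with binomial weights
    # computed by the recurrence w_j = w_{j-1} * (1 - (alpha + 1)/j).
    n = int(t / h)
    w, s = 1.0, f(t)  # w_0 = 1
    for j in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / j
        s += w * f(t - j * h)
    return s / h ** alpha

# sanity check: for f(t) = t, the order-1/2 derivative is 2*sqrt(t/pi)
from math import sqrt, pi
print(gl_fractional_derivative(lambda t: t, 0.5, 1.0), 2 * sqrt(1.0 / pi))
```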

Relevance:

10.00%

Publisher:

Abstract:

We estimate the monthly volatility of the US economy from 1968 to 2006 by extending the coincident-index model of Stock and Watson (1991). Our volatility index, which we call VOLINX, has four applications. First, it sheds light on the Great Moderation. VOLINX captures the decrease in volatility in the mid-80s as well as the different episodes of stress over the sample period. In the 70s and early 80s the stagflation and the two oil crises marked the pace of volatility, whereas 9/11 is the most relevant shock after the moderation. Second, it helps to identify the economic indicators that cause volatility. While the main determinant of the coincident index is industrial production, VOLINX is mainly affected by employment and income. Third, it adapts the confidence bands of the forecasts. In- and out-of-sample evaluations show that the confidence bands may differ by up to 50% with respect to a model with constant variance. Last, the methodology we use permits us to estimate monthly GDP, whose conditional volatility is partly explained by VOLINX. These applications can be used by policymakers for monitoring and surveillance of the stress of the economy.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this research is twofold: Firstly, to model and solve a complex nurse scheduling problem with an integer programming formulation and evolutionary algorithms. Secondly, to detail a novel statistical method of comparing and hence building better scheduling algorithms by identifying successful algorithm modifications. The comparison method captures the results of algorithms in a single figure that can then be compared using traditional statistical techniques. Thus, the proposed method of comparing algorithms is an objective procedure designed to assist in the process of improving an algorithm. This is achieved even when some results are non-numeric or missing due to infeasibility. The final algorithm outperforms all previous evolutionary algorithms, which relied on human expertise for modification.
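To make the integer-programming side concrete, here is a hypothetical toy formulation in the same spirit (every name and number below is invented for illustration; the actual problem in the paper is far larger and more constrained), using the PuLP modelling library: each nurse is assigned exactly one shift pattern, and every shift must meet a coverage demand at minimum total cost.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

nurses = ["A", "B", "C"]
patterns = {"early": (1, 0, 1), "late": (0, 1, 1), "split": (1, 1, 0)}  # shift indicators
demand = (1, 1, 2)  # nurses required on each of the 3 shifts
cost = {(n, p): c
        for n, row in zip(nurses, ((1, 3, 2), (2, 1, 3), (3, 2, 1)))
        for p, c in zip(patterns, row)}

prob = LpProblem("nurse_scheduling", LpMinimize)
x = {(n, p): LpVariable(f"x_{n}_{p}", cat=LpBinary) for n in nurses for p in patterns}
prob += lpSum(cost[n, p] * x[n, p] for n in nurses for p in patterns)  # total cost
for n in nurses:  # each nurse works exactly one pattern
    prob += lpSum(x[n, p] for p in patterns) == 1
for s in range(3):  # coverage constraint for each shift
    prob += lpSum(patterns[p][s] * x[n, p] for n in nurses for p in patterns) >= demand[s]

prob.solve(PULP_CBC_CMD(msg=False))
for (n, p), var in x.items():
    if var.value() == 1:
        print(n, "->", p)
```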

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents approximation algorithms for some NP-hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: an approximation algorithm for an NP-hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible.

The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u, v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below.

We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications.

Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink.

We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u, v, the goal is to find a low-cost network which, for each pair uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results.

In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + ε)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and for other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
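The connectivity notions above are easy to experiment with. A small illustration (standard library calls, not the thesis algorithms) using networkx, where by Menger's theorem the computed values equal the maximum numbers of edge-disjoint and internally vertex-disjoint u–v paths:

```python
import networkx as nx

G = nx.cycle_graph(6)                 # a 6-cycle: every pair is 2-connected
lam = nx.edge_connectivity(G, 0, 3)   # min edges whose removal separates 0 and 3
kap = nx.node_connectivity(G, 0, 3)   # min other vertices whose removal does so
print(lam, kap)                       # both 2: the two arcs of the cycle
```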

Relevance:

10.00%

Publisher:

Abstract:

We present the NumbersWithNames program, which performs data-mining on the Encyclopedia of Integer Sequences to find interesting conjectures in number theory. The program forms conjectures by finding empirical relationships between a sequence chosen by the user and those in the Encyclopedia. Furthermore, it transforms the chosen sequence into another set of sequences about which conjectures can also be formed. Finally, the program prunes and sorts the conjectures so that the most plausible ones are presented first. We describe here the many improvements to the previous Prolog implementation which have enabled us to provide NumbersWithNames as an online program. We also present some new results from using NumbersWithNames, including details of an automated proof plan of a conjecture NumbersWithNames helped to discover.
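A hedged sketch of the flavor of this empirical conjecture-making (illustrative only; the actual system works against the full Encyclopedia with a much richer set of transformations, relations, and plausibility measures):

```python
from itertools import accumulate

# toy "encyclopedia"; the real program queries the Encyclopedia of Integer Sequences
encyclopedia = {
    "squares":    [0, 1, 4, 9, 16, 25, 36, 49, 64],
    "triangular": [0, 1, 3, 6, 10, 15, 21, 28, 36],
    "odds":       [1, 3, 5, 7, 9, 11, 13, 15, 17],
}

def transformations(seq):
    # a few derived sequences about which conjectures can also be formed
    yield "the sequence itself", seq
    yield "its first differences", [b - a for a, b in zip(seq, seq[1:])]
    yield "its partial sums", list(accumulate(seq))

def conjectures(user_seq, min_overlap=4):
    found = []
    for label, t in transformations(user_seq):
        for name, ref in encyclopedia.items():
            m = min(len(t), len(ref))
            if m >= min_overlap and t[:m] == ref[:m]:
                found.append((m, f"{label} agrees with '{name}' on {m} terms"))
    # crude plausibility ordering: longer agreement first
    return [msg for m, msg in sorted(found, reverse=True)]

print(conjectures([0, 1, 4, 9, 16, 25]))
# reports a match with 'squares' and that the first differences match 'odds'
```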

Relevance:

10.00%

Publisher:

Abstract:

Self-assembled materials produced in the reaction between alkanethiol and Ag are characterized and compared. It is revealed that the size of the Ag substrate plays a significant role in the self-assembly process and determines the reaction products. On continuous planar Ag thin films, alkanethiol adsorbs on the surface and forms only self-assembled monolayers (SAMs), while the reaction between alkanethiol and Ag clusters on inert surfaces is more aggressive and generates a significantly larger amount of alkanethiolate. Two dissimilar products are obtained depending on the size of the clusters: small Ag clusters are more likely to be converted into multilayer silver-alkanethiolate (AgSR, R = CnH2n+1) crystals, while larger Ag clusters form monolayer-protected clusters (MPCs). The AgSR crystals are initially small and can ripen into large lamellae during thermal annealing. The crystals have facets and flat terraces of extended area, and show a strong preferred orientation parallel to the substrate surface. The MPCs move laterally upon annealing and reorganize into a single-layer network with a separation distance approximately equal to the length of an extended alkyl chain. AgSR lamellar crystals grown on inert surfaces provide an excellent platform to study the melting characteristics of crystalline lamellae of polymeric materials with nanometer-scale thickness. This system is also unique in that each crystal has an integer number of layers, i.e., a magic-number size (thickness). The size of the crystals is controlled by adjusting the amount of Ag and the annealing temperature. X-ray diffraction (XRD) and atomic force microscopy (AFM) are combined to accurately determine the size (number of layers) of the lamellar crystals. The melting characteristics are measured with nanocalorimetry and show discrete melting transitions, which are attributed to the magic-number sizes of the lamellar crystals. The discrete melting temperatures are intrinsic properties of the crystals with particular sizes. Smaller lamellar crystals with fewer layers melt at lower temperatures. The melting-point depression is inversely proportional to the total thickness of the lamellae, i.e., the product of the number of layers and the layer thickness.

Relevance:

10.00%

Publisher:

Abstract:

Homomorphic encryption is a type of encryption that enables computing over encrypted data. This has a wide range of real-world ramifications, such as being able to blindly compute a search result on a remote server without revealing the query's content. In the first part of this thesis, we discuss how database search queries can be made secure using a homomorphic encryption scheme based on the ideas of Gahi et al. Gahi's method is built on the integer-based fully homomorphic encryption scheme proposed by van Dijk et al. We propose a new database search scheme, called the Homomorphic Query Processing Scheme, which can be used with the ring-based fully homomorphic encryption scheme proposed by Brakerski. In the second part of this thesis, we discuss the cybersecurity of the smart electric grid. Specifically, we use the Homomorphic Query Processing scheme to construct a keyword search technique for the smart grid. Our work is based on the Public Key Encryption with Keyword Search (PEKS) method introduced by Boneh et al. and a Multi-Key Homomorphic Encryption scheme proposed by López-Alt et al. A summary of the results of this thesis (specifically the Homomorphic Query Processing Scheme) was published at the 14th Canadian Workshop on Information Theory (CWIT).
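To illustrate why computing over integer-based ciphertexts is possible at all, here is a toy symmetric version of the van Dijk et al. scheme (parameters far too small to be secure, and the real scheme adds a public key and bootstrapping; treat every constant below as an assumption): a bit m is hidden as p·q + 2r + m, so reducing mod the secret p and then mod 2 recovers it, and ciphertext addition and multiplication track XOR and AND of the plaintext bits while the noise stays small.

```python
import secrets

def keygen(bits=64):
    # odd secret key, forced large so the noise stays well below p
    return secrets.randbits(bits) | (1 << (bits - 1)) | 1

def encrypt(p, m, noise_bits=8, q_bits=32):
    q = secrets.randbits(q_bits)
    r = secrets.randbits(noise_bits)
    return p * q + 2 * r + (m & 1)   # ciphertext of the bit m

def decrypt(p, c):
    return (c % p) % 2               # valid while the accumulated noise < p

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
assert decrypt(p, c0 + c1) == 1      # ciphertext addition ~ XOR of bits
assert decrypt(p, c0 * c1) == 0      # ciphertext multiplication ~ AND
```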

Relevance:

10.00%

Publisher:

Abstract:

This thesis is concerned with the question of when the double branched cover of an alternating knot can arise by Dehn surgery on a knot in S^3. We approach this problem using a surgery obstruction, first developed by Greene, which combines Donaldson's Diagonalization Theorem with the d-invariants of Ozsváth and Szabó's Heegaard Floer homology. This obstruction shows that if the double branched cover of an alternating knot or link L arises by surgery on a knot in S^3, then for any alternating diagram of L the lattice associated to the Goeritz matrix takes the form of a changemaker lattice. By analyzing the structure of changemaker lattices, we show that the double branched cover of L arises by non-integer surgery on a knot in S^3 if and only if L has an alternating diagram which can be obtained by rational tangle replacement on an almost-alternating diagram of the unknot. When one considers half-integer surgery, the resulting tangle replacement is simply a crossing change. This allows us to show that an alternating knot has unknotting number one if and only if it has an unknotting crossing in every alternating diagram. These techniques also produce several other interesting results: they have applications to characterizing slopes of torus knots; they produce a new proof of a theorem of Tsukamoto on the structure of almost-alternating diagrams of the unknot; and they provide several bounds on surgeries producing the double branched covers of alternating knots which are direct generalizations of results previously known for lens space surgeries. Here, a rational number p/q is said to be a characterizing slope for a knot K in S^3 if the oriented homeomorphism type of the manifold obtained by p/q-surgery on K determines K uniquely. The thesis begins with an exposition of the changemaker surgery obstruction, giving an amalgamation of results due to Gibbons, Greene and the author. It then gives background material on alternating knots and changemaker lattices. The latter part of the thesis is taken up with the applications of this theory.
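For a sense of the combinatorial condition involved: a changemaker vector is, up to reordering, a non-decreasing tuple of positive integers in which each entry exceeds the sum of its predecessors by at most one; equivalently, subsets of the entries realize every integer from 0 up to the total sum. A small hedged checker (my paraphrase of Greene's condition, not code from the thesis):

```python
def is_changemaker(entries):
    # Check s_i <= 1 + s_1 + ... + s_{i-1} over the sorted entries, which is
    # equivalent to being able to "make change" for every amount up to the sum.
    total = 0
    for s in sorted(entries):
        if s > total + 1:
            return False
        total += s
    return True

print(is_changemaker([1, 1, 2, 5]))  # True: every amount 0..9 is attainable
print(is_changemaker([1, 3]))        # False: 2 cannot be made
```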
