413 results for PROOFS
Abstract:
Frequent episode discovery is a popular framework for pattern discovery from sequential data. It has found many applications in domains like alarm management in telecommunication networks, fault analysis in manufacturing plants, and predicting user behavior in web click streams. In this paper, we address the discovery of serial episodes. In the episode framework, there are multiple ways to quantify the frequency of an episode. Most of the current algorithms for episode discovery under the various frequencies are apriori-based level-wise methods, which essentially perform a breadth-first search of the pattern space. However, there are currently no depth-first methods for pattern discovery in the frequent episode framework under many of these frequency definitions. In this paper, we bridge this gap. We provide new depth-first algorithms for serial episode discovery under the non-overlapped and total frequencies. Under non-overlapped frequency, we present algorithms that can handle span and gap constraints on episode occurrences. Under total frequency, we present an algorithm that can handle a span constraint. We provide proofs of correctness for the proposed algorithms and demonstrate their effectiveness through extensive simulations. We also give detailed run-time comparisons with the existing apriori-based methods and illustrate scenarios under which the proposed pattern-growth algorithms perform better than their apriori counterparts. (C) 2013 Elsevier B.V. All rights reserved.
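As a rough illustration of the pattern-growth idea (a minimal sketch of our own, not the paper's algorithms; the event sequence, alphabet and threshold are hypothetical, and the paper's span and gap constraints are omitted), the non-overlapped frequency of a serial episode can be counted in one greedy left-to-right scan, and candidate episodes can be grown depth-first, pruning with the anti-monotonicity of this frequency:

    def non_overlapped_count(seq, episode):
        # Greedy left-to-right scan; for a serial episode this yields the
        # maximum number of non-overlapped occurrences.
        count, pos = 0, 0
        for ev in seq:
            if ev == episode[pos]:
                pos += 1
                if pos == len(episode):
                    count, pos = count + 1, 0
        return count

    def dfs_mine(seq, alphabet, prefix, min_freq, out):
        # Depth-first (pattern-growth) extension of a frequent prefix episode.
        for a in alphabet:
            cand = prefix + (a,)
            f = non_overlapped_count(seq, cand)
            if f >= min_freq:      # non-overlapped frequency is anti-monotone
                out[cand] = f
                dfs_mine(seq, alphabet, cand, min_freq, out)

    found = {}
    dfs_mine("ABCABCXABC", "ABCX", (), min_freq=2, out=found)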
Abstract:
Construction of high-rate Space-Time Block Codes (STBCs) with low decoding complexity has been studied widely using techniques such as sphere decoding and non-Maximum-Likelihood (ML) decoders such as the QR decomposition decoder with M paths (QRDM decoder). Recently, Ren et al. presented a new class of STBCs known as block orthogonal STBCs (BOSTBCs), which can be exploited by QRDM decoders to achieve significant decoding complexity reduction without performance loss. The block orthogonal property of the codes constructed was, however, only shown via simulations. In this paper, we give analytical proofs of the block orthogonal structure of various existing codes in the literature, including the codes constructed by Ren et al. We show that codes formed as the sum of Clifford Unitary Weight Designs (CUWDs) or Coordinate Interleaved Orthogonal Designs (CIODs) exhibit block orthogonal structure. We also provide new constructions of block orthogonal codes from Cyclic Division Algebras (CDAs) and Crossed-Product Algebras (CPAs). In addition, we show how the block orthogonal property of the STBCs can be exploited to reduce the decoding complexity of a sphere decoder using a depth-first search approach. Simulation results of the decoding complexity show a 30% reduction in the number of floating point operations (FLOPS) for BOSTBCs as compared to STBCs without the block orthogonal structure.
Abstract:
The rainbow connection number, rc(G), of a connected graph G is the minimum number of colors needed to color its edges so that every pair of vertices is connected by at least one path in which no two edges are colored the same (note that the coloring need not be proper). In this paper we study the rainbow connection number with respect to three important graph product operations (namely the Cartesian product, the lexicographic product and the strong product) and the operation of taking the power of a graph. In this direction, we show that if G is a graph obtained by applying any of the operations mentioned above on non-trivial graphs, then rc(G) ≤ 2r(G) + c, where r(G) denotes the radius of G and c is an additive constant. In general the rainbow connection number of a bridgeless graph can be as high as the square of its radius [1]. This is an attempt to identify some graph classes which have rainbow connection number very close to the obvious lower bound of diameter (and thus the radius). The bounds reported are tight up to additive constants. The proofs are constructive and hence yield polynomial-time constant-factor approximation algorithms.
Abstract:
We present a new Hessian estimator based on the simultaneous perturbation procedure, which requires three system simulations regardless of the parameter dimension. We then present two Newton-based simulation optimization algorithms that incorporate this Hessian estimator. The two algorithms differ primarily in the manner in which the Hessian estimate is used. Neither algorithm computes the inverse Hessian explicitly, thereby saving computational effort. While our first algorithm directly obtains the product of the inverse Hessian with the gradient of the objective, our second algorithm makes use of the Sherman-Morrison matrix inversion lemma to recursively estimate the inverse Hessian. We provide proofs of convergence for both algorithms. Next, we consider an interesting application of our algorithms to a problem of road traffic control. Our algorithms exhibit better performance than two Newton algorithms from recent prior work.
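For concreteness, the matrix inversion lemma that the second algorithm relies on can be sketched as follows. This is a generic numpy illustration of a rank-one inverse update, not the authors' recursive estimator; the function name and test matrices are ours:

    import numpy as np

    def sherman_morrison(A_inv, u, v):
        # Inverse of (A + u v^T), given A_inv, without re-inverting:
        # (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)
        Au = A_inv @ u
        vA = v @ A_inv
        return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

    # sanity check against direct inversion
    A = np.eye(4)
    u = 0.5 * np.ones(4)
    v = 0.5 * np.ones(4)
    assert np.allclose(sherman_morrison(np.linalg.inv(A), u, v),
                       np.linalg.inv(A + np.outer(u, v)))

Updating the inverse this way costs O(d^2) per rank-one correction instead of the O(d^3) of a fresh inversion, which is what makes a recursive inverse-Hessian estimate attractive.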
Abstract:
We define two general classes of nonabelian sandpile models on directed trees (or arborescences), as models of nonequilibrium statistical physics. Unlike usual applications of the well-known abelian sandpile model, these models have the property that sand grains can enter only through specified reservoirs. In the Trickle-down sandpile model, sand grains are allowed to move one at a time. For this model, we show that the stationary distribution is of product form. In the Landslide sandpile model, all the grains at a vertex topple at once, and here we prove formulas for all eigenvalues, their multiplicities, and the rate of convergence to stationarity. The proofs use wreath products and the representation theory of monoids.
Abstract:
In this article, we survey several kinds of trace formulas that one encounters in the theory of single- and multi-variable operators. We give sketches of the proofs, often based on the principle of finite-dimensional approximations to the objects at hand in the formulas.
Abstract:
In this paper, we study a degenerate parabolic equation with Cauchy-Dirichlet boundary conditions. The problem is considered in little Hölder spaces. The optimal regularity of the solution v is obtained and is specified in terms of that of the right-hand side, when certain conditions on the Hölder exponent with respect to the degeneracy are satisfied. The proofs mainly use the theory of sums of linear operators, with or without dense domains, and the smoothness results obtained in the study of some abstract linear differential equations of elliptic type.
Abstract:
The tension between the legislature's right of investigation and the constitutional guarantees of those under investigation, during the process of producing evidence involving the lifting of bank secrecy, called for research that would identify the origin of this prerogative of the CPIs (parliamentary commissions of inquiry) and describe its main characteristics, as well as the uses made of the information on the bank transactions of those investigated. To this end, the requests for the lifting of bank secrecy and the reports of the CPI do Narcotráfico and of the commissions constituted during the 52nd legislature were analyzed. It was found that this prerogative was conceived within the Câmara dos Deputados in 1964, and that the principal use of such information was to substantiate the indictments suggested to the Ministério Público. In some of these, the lifting of secrecy occurred indirectly and incidentally, insofar as the banking information of another person under investigation revealed their involvement. Certain indictments, however, dispensed with any analysis of the banking information of those investigated. It was also found that the final report remained silent as to the use of the banking information of certain investigated persons, making no mention of them whatsoever.
Abstract:
In order to guarantee the use of a renewable resource such as wood, at a time of forest decline and scarcity of materials, the inhabitants of the province of Guipúzcoa, given the smallness of their territory, devised a system that made it possible to combine the needs and demands of activities as disparate as livestock raising, domestic consumption, ironmaking and shipbuilding. This article analyzes the origin, development and disappearance of guided pollards (trasmochos guiados) and describes the technique in the territory of Guipúzcoa. In the absence of further evidence, the technique of guided pollarding appears to have begun in the late Middle Ages, although no documentary evidence of its use in Guipúzcoa exists before the first decades of the sixteenth century. Its spread across the whole territory does not seem to have been complete until the end of the seventeenth century, although by then it had been applied along the coast and in the eastern sector of the province. Despite the demands of the royal and territorial authorities, the obligation to leave 'horca y pendón' (guide branches) met with the opposition of the charcoal burners and ironmasters, who pollarded the trees without guiding them, thereby harming the authorities and the interests of the Royal Navy. It was precisely this failure to comply with the ordinances that gave rise to two models with differentiated uses: unguided pollards and guided pollards. Over the course of the nineteenth century the technique was gradually lost, coinciding with the progressive disappearance of wooden shipbuilding.
Abstract:
Quantitative proof of the occurrence in the Bornholm Sea of juvenile cod that had been spawned in Kiel Bay or Mecklenburg Bay was the starting point for developing a method that makes quantitative estimates possible. From these analyses it could be estimated that 20 to 50% of the cod in the area of the eastern Baltic stock had been spawned in the western Baltic Sea. This eastward expansion was determined partly by passive transport of pelagic cod stages; the main factor, however, was active migration of the cod in their first year of life. These analyses suggest the need to reassess the current model of the relations between the Baltic cod stocks.
Abstract:
I. Existence and Structure of Bifurcation Branches
The problem of bifurcation is formulated as an operator equation in a Banach space, depending on relevant control parameters, say of the form G(u,λ) = 0. If dim N(G_u(u_0,λ_0)) = m, the method of Lyapunov-Schmidt reduces the problem to the solution of m algebraic equations. The possible structure of these equations and the various types of solution behaviour are discussed. The equations are normally derived under the assumption that G^0_λ ∈ R(G^0_u). It is shown, however, that if G^0_λ ∉ R(G^0_u) then bifurcation still may occur, and the local structure of such branches is determined. A new and compact proof of the existence of multiple bifurcation is derived. The linearized stability near simple bifurcation and "normal" limit points is then indicated.
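In outline, the reduction runs as follows (a standard recap in our notation, assuming G^0_u = G_u(u_0,λ_0) is Fredholm of index zero; this sketch is ours, not quoted from the thesis). Split X = N(G^0_u) ⊕ X_1 and Y = R(G^0_u) ⊕ Y_1, and let Q be the projection of Y onto R(G^0_u):

    u = u_0 + \varphi + w, \qquad \varphi \in N(G^0_u), \; w \in X_1,
    Q\,G(u_0 + \varphi + w, \lambda) = 0 \;\Longrightarrow\; w = w(\varphi,\lambda) \quad \text{(implicit function theorem)},
    (I - Q)\,G\bigl(u_0 + \varphi + w(\varphi,\lambda), \lambda\bigr) = 0.

The last line takes values in Y_1 with dim Y_1 = m under the index-zero assumption, so it is the system of m algebraic (bifurcation) equations referred to above.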
II. Constructive Techniques for the Generation of Solution Branches
A method is described in which the dependence of the solution arc on a naturally occurring parameter is replaced by the dependence on a form of pseudo-arclength. This results in continuation procedures through regular and "normal" limit points. In the neighborhood of bifurcation points, however, the associated linear operator is nearly singular causing difficulty in the convergence of continuation methods. A study of the approach to singularity of this operator yields convergence proofs for an iterative method for determining the solution arc in the neighborhood of a simple bifurcation point. As a result of these considerations, a new constructive proof of bifurcation is determined.
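To make the continuation idea concrete, here is a minimal numerical sketch of pseudo-arclength continuation on a toy scalar problem (our illustration, not the thesis's formulation; the circle equation, step size, and starting point are arbitrary choices). The solution set of G(u,λ) = u² + λ² − 1 = 0 has "normal" limit points at λ = ±1, where continuation in λ itself would fail:

    import numpy as np

    def G(u, lam):                 # toy problem: the unit circle
        return u**2 + lam**2 - 1.0

    def G_grad(u, lam):            # (G_u, G_lam)
        return np.array([2.0 * u, 2.0 * lam])

    def arc_step(x, t, ds, iters=10):
        # Predict along the unit tangent t, then correct with Newton on the
        # augmented system [G(x) = 0, t.(x - x_pred) = 0] (pseudo-arclength).
        x_pred = x + ds * t
        x = x_pred.copy()
        for _ in range(iters):
            J = np.vstack([G_grad(*x), t])         # 2x2 augmented Jacobian
            r = np.array([G(*x), t @ (x - x_pred)])
            x = x - np.linalg.solve(J, r)
        g = G_grad(*x)                             # new tangent: normal to grad G,
        t_new = np.array([-g[1], g[0]])            # oriented like the old tangent
        t_new /= np.linalg.norm(t_new)
        return x, t_new if t_new @ t > 0 else -t_new

    x, t = np.array([0.0, -1.0]), np.array([1.0, 0.0])   # start at a limit point
    for _ in range(12):
        x, t = arc_step(x, t, ds=0.3)                    # traverses both folds

Because the tangent t is orthogonal to the gradient of G on the solution arc, the augmented Jacobian stays nonsingular at the folds, which is exactly why the arclength parameterization succeeds where the natural parameter fails.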
Abstract:
Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits have led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.
This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.
One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.
One might think that because a system is Turing-complete, capable of computing “anything,” it can perform any arbitrary task. But while such a system can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.
Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.
As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.
Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.
We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages strictly stronger than the regular languages and, at most, as strong as the context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.
We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. Spectrofluorimetry and gel electrophoresis experiments show that monomer molecules are converted into the polymer in logarithmic time. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism, showing the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4, and experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
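As a toy bookkeeping model of why insertion gives logarithmic-time growth (our idealized illustration with synchronous rounds, not a model of the actual reaction kinetics): if every insertion exposes two new insertion sites, the number of monomers added per round doubles, so the polymer reaches length N after roughly log2(N) rounds.

    def insertion_growth(rounds):
        # Toy model: each insertion exposes two new insertion sites, so the
        # number of open sites (and monomers added per round) doubles.
        length, sites = 2, 1          # a dimer with a single insertion site
        for r in range(1, rounds + 1):
            length += sites           # one monomer inserted at each open site
            sites *= 2                # every insertion creates two new sites
            print(r, length)          # length grows like 2**r
        return length

    insertion_growth(10)              # reaches length 1025 after 10 rounds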
In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.
Abstract:
Let l be any odd prime, and ζ a primitive l-th root of unity. Let C_l be the l-Sylow subgroup of the ideal class group of Q(ζ). The Teichmüller character w : Z_l → Z^*_l is given by w(x) ≡ x (mod l), where w(x) is an (l-1)-st root of unity and x ∈ Z_l. Under the action of this character, C_l decomposes as a direct sum of the C^((i))_l, where C^((i))_l is the eigenspace corresponding to w^i. Let the order of C^((3))_l be l^(h_3). The main result of this thesis is the following: for every n ≥ max(1, h_3), the equation x^(l^n) + y^(l^n) + z^(l^n) = 0 has no integral solutions (x, y, z) with l ∤ xyz. The same result is also proven with n ≥ max(1, h_5), under the assumption that C^((5))_l is a cyclic group of order l^(h_5). Applications of the methods used to prove the above results to the second case of Fermat's Last Theorem and to a Fermat-like equation in four variables are given.
The proof uses a series of ideas of H.S. Vandiver ([V1], [V2]) along with a theorem of M. Kurihara [Ku] and some consequences of the proof of Iwasawa's main conjecture for cyclotomic fields by B. Mazur and A. Wiles [MW]. In [V1] Vandiver claimed that the first case of Fermat's Last Theorem held for l if l did not divide the class number h^+ of the maximal real subfield of Q(e^(2πi/l)). The crucial gap in Vandiver's attempted proof, which has been known to experts, is explained, and complete proofs of all the results used from his papers are given.
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; well-known examples include the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
Recent work that seeks to characterize the space of all such rules shows, by exhibiting a specific 'worst-case' welfare function, that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are the generalized weighted Shapley values (GWSVs). Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) that guarantee equilibrium existence for any specific local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function nor the restriction to budget-balance that limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose: they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can trade off budget-balance against computational tractability in deciding which rule to implement.
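For reference, the unweighted special cases of these two rule families are standard (our recap, not the thesis's generalized weighted forms). With S the set of agents at a resource and W its local welfare function:

    f^{\mathrm{SV}}_i(S, W) = \sum_{T \subseteq S \setminus \{i\}}
        \frac{|T|!\,(|S|-|T|-1)!}{|S|!}\,\bigl(W(T \cup \{i\}) - W(T)\bigr),
    \qquad
    f^{\mathrm{MC}}_i(S, W) = W(S) - W(S \setminus \{i\}).

The Shapley shares sum to W(S) exactly (budget balance), whereas the marginal contribution shares in general do not.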
Abstract:
The simplest multiplicative systems in which arithmetical ideas can be defined are semigroups. For such systems irreducible (prime) elements can be introduced and conditions under which the fundamental theorem of arithmetic holds have been investigated (Clifford (3)). After identifying associates, the elements of the semigroup form a partially ordered set with respect to the ordinary division relation. This suggests the possibility of an analogous arithmetical result for abstract partially ordered sets. Although nothing corresponding to product exists in a partially ordered set, there is a notion similar to g.c.d. This is the meet operation, defined as greatest lower bound. Thus irreducible elements, namely those elements not expressible as meets of proper divisors, can be introduced. The assumption of the ascending chain condition then implies that each element is representable as a reduced meet of irreducibles. The central problem of this thesis is to determine conditions on the structure of the partially ordered set in order that each element have a unique such representation.
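As a small worked illustration of these notions (our example, not drawn from the thesis): take the divisors of 12 ordered by divisibility, with meet given by the g.c.d. The meet-irreducible elements are 3, 4, 6 and 12, while

    2 = 4 \wedge 6, \qquad 1 = 3 \wedge 4

are the unique reduced representations of the remaining elements as meets of irreducibles. The ascending chain condition holds trivially here, and the uniqueness reflects the fact that this particular ordered set is a distributive lattice; the thesis asks what structural conditions force such uniqueness in a general partially ordered set.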
Part I contains preliminary results and introduces the principal tools of the investigation. In the second part, basic properties of the lattice of ideals and the connection between its structure and the irreducible decompositions of elements are developed. The proofs of these results are identical with the corresponding ones for the lattice case (Dilworth (2)). The last part contains those results whose proofs are peculiar to partially ordered sets and also contains the proof of the main theorem.