899 results for Combinatorial Algorithms
Abstract:
This thesis studies properties and applications of different generalized Appell polynomials in the framework of Clifford analysis. As an example of 3D quasi-conformal mappings realized by generalized Appell polynomials, an analogue of the complex Joukowski transformation of order two is introduced. The consideration of a Pascal n-simplex with hypercomplex entries allows stressing the combinatorial relevance of hypercomplex Appell polynomials. The concept of totally regular variables and its relation to generalized Appell polynomials leads to the construction of new bases for the space of homogeneous holomorphic polynomials whose elements are all isomorphic to the integer powers of the complex variable. For this reason, such polynomials are called pseudo-complex powers (PCP). Different variants of them are the subject of a detailed investigation. Special attention is paid to the numerical aspects of PCP. An efficient algorithm based on complex arithmetic is proposed for their implementation. In this context a brief survey of numerical methods for inverting Vandermonde matrices is presented, and a modified algorithm is proposed that illustrates the advantages of a special type of PCP. Finally, combinatorial applications of generalized Appell polynomials are emphasized. The explicit expression of the coefficients of a particular type of Appell polynomials and their relation to a Pascal simplex with hypercomplex entries are derived. The comparison of two types of 3D Appell polynomials leads to the detection of new trigonometric summation formulas and combinatorial identities of Riordan-Sofo type characterized by their expression in terms of central binomial coefficients.
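For context, solving a Vandermonde system does not require forming the inverse explicitly: the classical Björck-Pereyra scheme exploits the connection to polynomial interpolation and runs in O(n^2) operations. The Python sketch below (NumPy assumed available) shows only that standard scheme; it is not the modified algorithm, nor the complex-arithmetic PCP implementation, proposed in the thesis.

    import numpy as np

    def vandermonde_solve(x, f):
        """Solve the primal Vandermonde system V a = f with V[i, j] = x[i]**j,
        i.e. find the coefficients of the polynomial interpolating (x[i], f[i]).
        Classical Bjorck-Pereyra scheme: a divided-difference pass followed by
        a Newton-to-monomial pass, O(n^2) operations, V never formed."""
        a = np.array(f, dtype=float)
        n = len(x) - 1
        for k in range(n):                      # divided differences
            for j in range(n, k, -1):
                a[j] = (a[j] - a[j - 1]) / (x[j] - x[j - k - 1])
        for k in range(n - 1, -1, -1):          # Newton form -> monomial basis
            for j in range(k, n):
                a[j] -= x[k] * a[j + 1]
        return a

    # Quick check against a dense solve.
    x = np.array([0.0, 1.0, 2.0, 3.0])
    f = np.array([1.0, 3.0, 2.0, 5.0])
    V = np.vander(x, increasing=True)
    assert np.allclose(vandermonde_solve(x, f), np.linalg.solve(V, f))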
Abstract:
This thesis addresses several formulations and different methods to solve the Weight-constrained Minimum Spanning Tree Problem (WMST). This problem, with applications in the design of communication and telecommunication networks, is an NP-hard combinatorial optimization problem. The WMST Problem consists in determining, in a network with costs and weights associated with the edges, a minimum cost spanning tree whose total weight does not exceed a given specified limit. Several formulations for the problem are presented and compared. One of them is used to develop a cutting-plane procedure based on separation, which proved quite useful in obtaining solutions to the problem. With the purpose of strengthening the presented formulations, new classes of valid inequalities are introduced, adapted from the well-known cover inequalities, extended cover inequalities and lifted cover inequalities. The new inequalities incorporate information from two solution sets: the set of spanning trees and the knapsack set. Several heuristic separation algorithms are presented, which allow us to use the proposed valid inequalities efficiently. Based on Lagrangian decomposition, simple but efficient algorithms that can be used to compute lower and upper bounds on the optimal value of the WMST are presented and compared. Among them are two new algorithms: one based on the convexity of the Lagrangian function and another that makes use of the inclusion of valid inequalities. In order to obtain approximate solutions to the WMST Problem, heuristic methods are used to find a feasible integer solution. The heuristic methods presented are based on the Feasibility Pump and Local Branching strategies. Computational results are reported for all the methods presented. The results show that the different methods are quite efficient in finding solutions to the WMST Problem.
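As a rough illustration of the Lagrangian bounding idea described above (not the thesis's own algorithms), one can dualize the weight constraint with a multiplier lambda >= 0 and solve an ordinary minimum spanning tree problem with modified edge costs c_e + lambda*w_e: every such tree T yields a lower bound c(T) + lambda*(w(T) - W), and any tree whose weight respects the limit W yields an upper bound. The Python sketch below assumes the graph is given as an edge list and scans a fixed grid of multipliers; in practice lambda would be chosen by bisection or a subgradient method.

    from typing import Callable, List, Tuple

    Edge = Tuple[int, int, float, float]          # (u, v, cost, weight)

    def kruskal(n: int, edges: List[Edge], key: Callable[[Edge], float]) -> List[Edge]:
        """Plain Kruskal's algorithm; `key` gives the value minimised per edge."""
        parent = list(range(n))
        def find(a: int) -> int:
            while parent[a] != a:
                parent[a] = parent[parent[a]]     # path halving
                a = parent[a]
            return a
        tree = []
        for e in sorted(edges, key=key):
            ru, rv = find(e[0]), find(e[1])
            if ru != rv:
                parent[ru] = rv
                tree.append(e)
        return tree

    def lagrangian_bounds(n, edges, W, lambdas):
        """For each lam >= 0, the MST under costs c + lam*w gives the lower bound
        c(T) + lam*(w(T) - W); any tree with w(T) <= W gives an upper bound."""
        best_lb, best_ub = float("-inf"), float("inf")
        for lam in lambdas:
            tree = kruskal(n, edges, key=lambda e: e[2] + lam * e[3])
            cost = sum(e[2] for e in tree)
            weight = sum(e[3] for e in tree)
            best_lb = max(best_lb, cost + lam * (weight - W))
            if weight <= W:
                best_ub = min(best_ub, cost)
        return best_lb, best_ub

    # Toy usage on a 4-node graph with weight budget W = 6.
    edges = [(0, 1, 3.0, 2.0), (1, 2, 1.0, 4.0), (0, 2, 2.0, 1.0),
             (1, 3, 4.0, 1.0), (2, 3, 5.0, 3.0)]
    lb, ub = lagrangian_bounds(4, edges, W=6.0, lambdas=[0.0, 0.5, 1.0, 2.0])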
Optimised search heuristics: combining metaheuristics and exact methods to solve scheduling problems
Abstract:
Doctoral thesis in Mathematics (Operational Research), Universidade do Algarve, 2009
Abstract:
Complete supervised training algorithms for B-spline neural networks and fuzzy rule-based systems are discussed. By introducing the relationship between B-spline neural networks and certain types of fuzzy models, training algorithms developed initially for neural networks can be adapted to fuzzy systems.
Abstract:
Rapid developments in microelectronics and computer science continue to fuel new opportunities for real-time control engineers. Ever-increasing system complexity and sophistication, environmental legislation, economic competition, and safety and reliability constitute some of the driving forces behind the research themes presented at the IFAC Workshop on Algorithms and Architectures for Real-Time Control (AARTC'2000). The Spanish Society for Automatic Control hosted AARTC'2000, which was held at Palma de Mallorca, Spain, from 15 to 17 May 2000. This workshop was the sixth in the series.
Abstract:
The problem of the adequacy of radial basis function neural networks to model the inside air temperature as a function of the outside air temperature and solar radiation, and the inside relative humidity, in a hydroponic greenhouse is addressed.
Abstract:
The introduction of parallel processing architectures allowed the real-time implementation of more sophisticated control algorithms with tighter specifications in terms of sampling time. However, to take advantage of the processing power of these architectures, the control engineer, due to the lack of appropriate tools, must spend a considerable amount of time on the parallelization of the control algorithm.
Abstract:
The Algorithms and Architectures for Real-Time Control Workshop had the objective of investigating the state of the art and presenting new research and application results in software and hardware for real-time control, as well as bringing together engineers and computer scientists who are researchers, developers and practitioners, from both the academic and the industrial worlds.
Abstract:
The normal design process for neural networks or fuzzy systems involves two different phases: the determination of the best topology, which can be seen as a system identification problem, and the determination of its parameters, which can be envisaged as a parameter estimation problem. The latter issue, the determination of the model parameters (linear weights and interior knots), is the simpler task and is usually solved using gradient or hybrid schemes. The former issue, the topology determination, is an extremely complex task, especially when dealing with real-world problems.
Abstract:
One of the crucial problems of fuzzy rule modeling is how to find an optimal, or at least a quasi-optimal, rule base for a certain system. In most applications there is no human expert available, or the result of a human expert's decision is too subjective and not reproducible, so some automatic method to determine the fuzzy rule base must be deployed.
Abstract:
All systems found in nature exhibit, to different degrees, nonlinear behavior. To emulate this behavior, classical system identification techniques typically use linear models, for mathematical simplicity. Models inspired by biological principles (artificial neural networks) and linguistically motivated models (fuzzy systems), due to their universal approximation property, are becoming alternatives to classical mathematical models. In system identification, the design of this type of model is an iterative process, requiring, among other steps, the identification of the model structure as well as the estimation of the model parameters. This thesis addresses the applicability of gradient-based algorithms for the parameter estimation phase, and the use of evolutionary algorithms for model structure selection, for the design of neuro-fuzzy systems, i.e., models that offer the transparency property found in fuzzy systems but use, for their design, algorithms introduced in the context of neural networks. A new methodology, based on the minimization of the integral of the error and exploiting the parameter separability property typically found in neuro-fuzzy systems, is proposed for parameter estimation. A recent evolutionary technique (bacterial algorithms), based on the natural phenomenon of microbial evolution, is combined with genetic programming, and the resulting algorithm, bacterial programming, is advocated for structure determination. Different versions of this evolutionary technique are combined with gradient-based algorithms, solving problems found in fuzzy and neuro-fuzzy design, namely the incorporation of a priori knowledge, the initialization of gradient algorithms, and model complexity reduction.
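The parameter-separability idea mentioned above can be stated concretely: with the nonlinear parameters (e.g. membership-function centres and widths) held fixed, the model output is linear in the remaining weights, so those weights have a closed-form least-squares solution and only the nonlinear parameters need gradient or evolutionary search. The sketch below is a generic illustration with Gaussian basis functions and made-up parameter names; it does not reproduce the integral-of-error criterion or the bacterial programming algorithm developed in the thesis.

    import numpy as np

    def basis_matrix(x, centres, widths):
        """Gaussian basis activations: Phi[i, j] = exp(-((x[i]-centres[j])/widths[j])**2)."""
        d = (x[:, None] - centres[None, :]) / widths[None, :]
        return np.exp(-d ** 2)

    def optimal_linear_weights(x, y, centres, widths):
        """With (centres, widths) fixed, y ~ Phi @ w is linear in w, so the
        best w in the least-squares sense is computed directly."""
        Phi = basis_matrix(x, centres, widths)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return w

    # Toy usage: an outer loop (gradient-based or evolutionary) would adjust
    # centres and widths, re-solving for w at every step.
    x = np.linspace(0.0, 1.0, 50)
    y = np.sin(2 * np.pi * x)
    centres = np.linspace(0.0, 1.0, 5)
    widths = np.full(5, 0.2)
    w = optimal_linear_weights(x, y, centres, widths)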
Abstract:
In this paper, performance and convergence-time comparisons of various low-complexity LMS algorithms used for the coefficient update of an adaptive I/Q corrector for quadrature receivers are presented. We choose the optimum LMS algorithm suited to low complexity, high performance and high-order QAM and PSK constellations. Moreover, the influence of finite bit precision on the VLSI implementation of such algorithms is explored through extensive simulations, and optimum wordlengths are established.
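For reference, the coefficient-update rule this family of methods builds on is the textbook complex LMS recursion: with filter output y(n) = w^H x(n) and error e(n) = d(n) - y(n), the taps are updated as w(n+1) = w(n) + mu * x(n) * conj(e(n)). The Python sketch below shows that generic recursion only; the specific low-complexity I/Q-corrector variants and fixed-point wordlength issues compared in the paper are not reproduced here.

    import numpy as np

    def complex_lms(x, d, order, mu):
        """Textbook complex LMS adaptive filter.
        x: complex input samples, d: desired samples, order: number of taps,
        mu: step size.  Returns the error sequence and the final tap vector."""
        w = np.zeros(order, dtype=complex)
        e = np.zeros(len(x), dtype=complex)
        buf = np.zeros(order, dtype=complex)      # most recent inputs, newest first
        for n in range(len(x)):
            buf = np.roll(buf, 1)
            buf[0] = x[n]
            y = np.vdot(w, buf)                   # w^H x(n)
            e[n] = d[n] - y
            w = w + mu * buf * np.conj(e[n])      # w <- w + mu * x * conj(e)
        return e, w

    # Toy usage: identify an unknown 2-tap channel h from its input/output.
    rng = np.random.default_rng(0)
    x = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
    h = np.array([1.0 + 0.5j, 0.3 - 0.2j])
    d = np.convolve(x, h)[: len(x)]
    e, w = complex_lms(x, d, order=2, mu=0.05)    # np.conj(w) approaches h as |e| decays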
Abstract:
Thesis (Master's)--University of Washington, 2016-03
Abstract:
In this article we provide brief descriptions of three classes of schedulers: Operating Systems Process Schedulers, Cluster Systems Job Schedulers, and Big Data Schedulers. We describe their evolution from early adoptions to modern implementations, considering both the use and the features of the algorithms. We then discuss the differences between the presented classes of schedulers and their chronological development. In conclusion, we highlight similarities in the focus of scheduling-strategy design, applicable to both local and distributed systems.