959 results for efficient algorithms
Abstract:
The Vehicle Routing Problem (VRP) is a key to managing logistics systems efficiently, which can lead to improved customer satisfaction by serving more customers in a shorter time. In general terms, it involves planning the routes of a fleet of vehicles of given capacity based at one or more depots. The goal is to deliver or collect a certain quantity of goods to a set of geographically dispersed customers while respecting the capacity constraints of the vehicles. The VRP, as a class of discrete optimization problems of high complexity, has been studied by many researchers over the past decades. Given its practical importance, researchers in computer science, operations research, and industrial engineering have developed highly efficient algorithms, both exact and heuristic, to deal with the different types of VRP. However, the approaches proposed for the VRP have often been criticized for concentrating too much on simplistic versions of the vehicle routing problems encountered in real applications. Consequently, researchers have recently turned to variants of the VRP that were previously considered too difficult to solve. These variants include the attributes and complex constraints observed in real cases and provide solutions that can actually be executed in practice. These extensions of the VRP are called Multi-Attribute Vehicle Routing Problems (MAVRP). The main goal of this thesis is to study the different practical aspects of three types of multi-attribute vehicle routing problems, which are modeled in this work. Moreover, since for the VRP, as for most NP-complete problems, it is difficult to solve large instances optimally within a reasonable running time, we turn to approximate, heuristic-based methods.
Abstract:
Duplication is one of the most important evolutionary events, as it can lead to the creation of new gene functions. During their evolution, genomes are also affected by inversions, translocations (including chromosome fusions and fissions), transpositions, and deletions. Studying genome evolution is important, notably for better understanding the biological mechanisms involved, which types of events are the most frequent, and what the gene contents of ancestral species were. To analyze these different aspects of genome evolution, efficient algorithms must be designed to infer ancestral genomes, evolutionary histories, and homology relationships, and to compute distances between genomes. In this thesis, four projects related to the study and analysis of genome evolution are presented: 1) We propose two algorithms for problems related to whole-genome duplication: one that generalizes the genome halving problem to gene losses and one that computes the double distance with losses. 2) We present a new method for inferring the evolutionary histories of orthologous groups of tandemly repeated genes. 3) We propose a new graph-theoretic approach for inferring in-paralogous genes that simultaneously considers information from different species in order to make better predictions. 4) We present a study of the evolutionary history of transfer RNA genes in 50 strains of Bacillus.
Abstract:
This thesis presents a statistical framework for object recognition. The framework is motivated by the pictorial structure models introduced by Fischler and Elschlager nearly 30 years ago. The basic idea is to model an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. The problem of detecting an object in an image and the problem of learning an object model using training examples are naturally formulated under a statistical approach. We present efficient algorithms to solve these problems in our framework. We demonstrate our techniques by training models to represent faces and human bodies. The models are then used to locate the corresponding objects in novel images.
Abstract:
This paper introduces a probability model, the mixture of trees, that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors.
Abstract:
This paper introduces a probability model, the mixture of trees, that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors. We also show that the single tree classifier acts like an implicit feature selector, thus making the classification performance insensitive to irrelevant attributes. Experimental results demonstrate the excellent performance of the new model both in density estimation and in classification.
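The tree-fitting step this abstract refers to is a Chow-Liu fit inside each EM iteration: pairwise mutual information is computed from the responsibility-weighted data and a maximum-weight spanning tree is extracted. A minimal sketch of that step for binary variables, in Python (the interface and names are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_edges(X, w):
    """X: (n_samples, n_vars) binary data; w: EM responsibilities for one mixture component."""
    n, d = X.shape
    w = np.asarray(w, float)
    w = w / w.sum()
    p1 = X.T @ w                               # weighted P(x_i = 1)
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            p11 = w @ (X[:, i] * X[:, j])      # weighted P(x_i = 1, x_j = 1)
            joint = np.array([[1 - p1[i] - p1[j] + p11, p1[j] - p11],
                              [p1[i] - p11, p11]])
            marg = np.outer([1 - p1[i], p1[i]], [1 - p1[j], p1[j]])
            nz = joint > 0
            mi[i, j] = np.sum(joint[nz] * np.log(joint[nz] / marg[nz]))
    # maximum-weight spanning tree over mutual information = MST of negated weights
    tree = minimum_spanning_tree(-(mi + mi.T + 1e-12)).tocoo()
    return list(zip(tree.row, tree.col))       # edges of the fitted tree
```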
Abstract:
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell-Hestenes-Rockafellar scheme. A combined algorithm that uses either the quasi-Newton formula or a truncated-Newton procedure, depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the Cute collection are presented.
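As a rough illustration of the Powell-Hestenes-Rockafellar scheme mentioned above: inequality constraints g(x) <= 0 enter a smooth shifted-penalty term, the box-constrained subproblem is solved in an inner loop, and the multipliers and penalty parameter are updated in an outer loop. The sketch below is a simplified stand-in, not the paper's algorithm (it uses L-BFGS-B for the inner solve rather than the quasi-Newton/truncated-Newton machinery of the paper):

```python
import numpy as np
from scipy.optimize import minimize

def phr_augmented_lagrangian(f, g, x0, bounds, rho=10.0, n_outer=20, tol=1e-6):
    """Minimize f(x) subject to g(x) <= 0 and box constraints (simplified sketch)."""
    x = np.asarray(x0, float)
    lam = np.zeros(len(g(x)))
    for _ in range(n_outer):
        def L(x):  # PHR augmented Lagrangian for inequality constraints
            shifted = np.maximum(0.0, lam + rho * g(x))
            return f(x) + (np.sum(shifted**2) - np.sum(lam**2)) / (2.0 * rho)
        x = minimize(L, x, method="L-BFGS-B", bounds=bounds).x   # inner box-constrained solve
        lam = np.maximum(0.0, lam + rho * g(x))                  # first-order multiplier update
        if np.max(np.maximum(0.0, g(x))) > tol:                  # still infeasible: tighten penalty
            rho *= 10.0
    return x, lam

# toy usage: minimize (x0-1)^2 + (x1-2)^2  subject to  x0 + x1 <= 2,  0 <= x <= 5
x_opt, lam_opt = phr_augmented_lagrangian(
    lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2,
    lambda x: np.array([x[0] + x[1] - 2.0]),
    x0=[0.0, 0.0], bounds=[(0.0, 5.0), (0.0, 5.0)])
```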
Abstract:
This dissertation aims to study and improve controller design methods for power systems. The work deals with the dynamic stability of power systems and, therefore, with the design of controllers for damping electromechanical oscillations in these systems. The choice of the methods studied here was guided by the requirements a power system stabilizer (PSS) must satisfy, namely robustness, decentralization, and coordination; some of the methods had their characteristics improved to meet these requirements. The methods studied were restricted to time-domain analysis, since the time-domain approach facilitates the modeling of parametric uncertainties, which addresses the robustness requirement, and also allows decentralized control to be formulated in a simple way. In addition, the time-domain approach allows the design problem to be formulated using linear matrix inequalities (LMIs), whose advantage is that the solution set is always convex and that efficient algorithms exist to compute a solution. Indeed, several computational packages are available for solving linear matrix inequality problems. For this reason, the output-feedback controller design methods always seek to cast the problem in LMI form, since this form guarantees that a solution can be obtained whenever one exists.
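For readers unfamiliar with the LMI formulation mentioned above, a tiny illustrative example (not taken from the dissertation) is the Lyapunov stability LMI: find P = P' > 0 with A'P + PA < 0, a convex feasibility problem that off-the-shelf semidefinite-programming packages solve efficiently. A possible sketch using cvxpy:

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # hypothetical state matrix of dx/dt = A x
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),                  # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]   # Lyapunov LMI: A'P + PA < 0
problem = cp.Problem(cp.Minimize(0), constraints)     # pure feasibility problem
problem.solve()
print(problem.status, P.value)   # a feasible P certifies stability of the system
```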
Abstract:
In this work we present an exposition of the mathematical theory of compactly supported orthogonal wavelets in the context of multiresolution analysis. These wavelets are particularly attractive because they lead to a stable and very efficient algorithm, the Fast Wavelet Transform (FWT). One of our objectives is to develop efficient algorithms for computing the wavelet coefficients (the FWT) through Mallat's pyramid algorithm and to discuss its connection with filter banks. We also study the concept of multiresolution analysis, which is the setting in which wavelets can be understood and constructed naturally, an important step in moving from the mathematical universe (the continuous domain) to the universe of representation (the discrete domain).
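As an illustration of the pyramid algorithm discussed above, one level of the fast wavelet transform is just a two-channel filter bank: convolve with the low-pass and high-pass filters and downsample by two, then repeat on the approximation. A minimal sketch with the Daubechies D4 filters (my own, not the thesis code; boundary handling is deliberately naive):

```python
import numpy as np

s3 = np.sqrt(3.0)
# Daubechies D4 low-pass filter and its quadrature-mirror high-pass counterpart
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
g = h[::-1] * np.array([1, -1, 1, -1])

def dwt_level(x):
    """One pyramid step: filter with h and g, keep every second sample."""
    x = np.asarray(x, float)
    approx = np.convolve(x, h[::-1], mode="full")[1::2]
    detail = np.convolve(x, g[::-1], mode="full")[1::2]
    return approx, detail

# the fast wavelet transform repeats the step on the approximation coefficients
x = np.random.default_rng(0).normal(size=16)
a, details = x, []
while len(a) >= len(h):
    a, d = dwt_level(a)
    details.append(d)
# 'a' now holds the coarsest approximation, 'details' the detail coefficients per level
```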
Abstract:
In spatial electric load forecasting, determining future land use is one of the most important tasks, and one of the most difficult, because of the stochastic nature of city growth. This paper proposes a fast and efficient algorithm to determine the future land use of the vacant land in the utility service area, using ideas from knowledge extraction and evolutionary algorithms. The methodology was implemented in a full simulation package for spatial electric load forecasting, showing a high rate of success when the results are compared with information gathered from specialists. The importance of this methodology lies in the reduced set of data needed to perform the task and its simplicity of implementation, which is a great plus for the many electric utilities without specialized tools for this planning activity. © 2008 IEEE.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
[EN] We propose four algorithms for computing the inverse optical flow between two images. We assume that the forward optical flow has already been obtained and we need to estimate the flow in the backward direction. The forward and backward flows can be related through a warping formula, which allows us to propose very efficient algorithms. These are presented in increasing order of complexity. The proposed methods provide high accuracy with low memory requirements and low running times. In general, the processing reduces to one or two image passes. Typically, when objects move in a sequence, some regions may appear or disappear. Finding the inverse flows in these situations is difficult and, in some cases, it is not possible to obtain a correct solution. Our algorithms deal with occlusions very easily and reliably. On the other hand, disocclusions have to be handled in a post-processing step. We propose three approaches for filling disocclusions. In the experimental results, we use standard synthetic sequences to study the performance of the proposed methods and show that they yield very accurate solutions. We also analyze the performance of the filling strategies.
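The warping relation described above can be illustrated with a very small sketch: each pixel x carries the value -u(x) of the forward flow to its target position x + u(x), which fills the backward flow in a single image pass; target pixels that receive nothing are disocclusions and would be handled by the filling strategies the paper proposes. This is an illustrative nearest-neighbour version, not the paper's four algorithms:

```python
import numpy as np

def invert_flow(fwd):
    """fwd: (H, W, 2) forward flow (dx, dy). Returns the backward flow and a validity mask."""
    H, W = fwd.shape[:2]
    bwd = np.zeros_like(fwd, dtype=float)
    filled = np.zeros((H, W), dtype=bool)
    ys, xs = np.mgrid[0:H, 0:W]
    xt = np.rint(xs + fwd[..., 0]).astype(int)   # nearest target column of each pixel
    yt = np.rint(ys + fwd[..., 1]).astype(int)   # nearest target row of each pixel
    inside = (xt >= 0) & (xt < W) & (yt >= 0) & (yt < H)
    # push -fwd to the target positions; when several pixels collide (occlusion),
    # the last write simply wins here, whereas the paper resolves this more carefully
    bwd[yt[inside], xt[inside]] = -fwd[inside]
    filled[yt[inside], xt[inside]] = True
    return bwd, filled                           # ~filled marks disoccluded pixels to be filled
```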
Abstract:
Being basic ingredients of numerous daily-life products of significant industrial importance, as well as basic building blocks for biomaterials, charged hydrogels continue to pose a series of unanswered challenges for scientists even after decades of practical applications and intensive research efforts. Despite a rather simple internal structure, it is mainly the unique combination of short- and long-range forces that renders scientific investigations of their characteristic properties quite difficult. Hence, early on, computer simulations were used to link analytical theory and empirical experiments, bridging the gap between the simplifying assumptions of the models and the complexity of real-world measurements. Due to the immense numerical effort, even for high-performance supercomputers, system sizes and time scales were rather restricted until recently, and only now has it become possible to also simulate a network of charged macromolecules. This is the topic of the present thesis, which investigates one of the fundamental and at the same time highly fascinating phenomena of polymer research: the swelling behaviour of polyelectrolyte networks. For this purpose, an extensible simulation package for research on soft matter systems, ESPResSo for short, was created, which puts particular emphasis on mesoscopic bead-spring models of complex systems. Highly efficient algorithms and consistent parallelization reduced the computation time needed to solve the equations of motion even in the case of long-ranged electrostatics and large numbers of particles, making it possible to tackle even expensive calculations and applications. Nevertheless, the program has a modular and simple structure, enabling a continuous process of adding new potentials, interactions, degrees of freedom, ensembles, and integrators, while staying easily accessible for newcomers thanks to a Tcl scripting level that controls the C-implemented simulation core. Numerous analysis routines provide means to investigate system properties and observables on the fly. Even though analytical theories have agreed on the modeling of networks in past years, our numerical MD simulations show that even for simple model systems the fundamental theoretical assumptions hold only in a small parameter regime, preventing correct predictions of observables. Applying a "microscopic" analysis of the isolated contributions of individual system components, one of the particular strengths of computer simulations, it was then possible to describe the behaviour of charged polymer networks at swelling equilibrium in good solvent and close to the Theta point by introducing appropriate model modifications. This became possible by enhancing known simple scaling arguments with the components found to be crucial in our detailed study, from which a generalized model could be constructed. With this model, agreement between the predicted final system volume of swollen polyelectrolyte gels and the results of computer simulations could be demonstrated over the entire investigated range of parameters, for different network sizes, charge fractions, and interaction strengths. In addition, the "cell under tension" was presented as a self-regulating approach for predicting the amount of swelling based only on the system parameters used. Without the need for measured observables as input, minimizing the free energy alone already determines the equilibrium behaviour.
In poor solvent the shape of the network chains changes considerably, as their hydrophobicity now counteracts the repulsion of the like-charged monomers and drives the polyelectrolytes to collapse. Depending on the chosen parameters a fragile balance emerges, giving rise to fascinating geometrical structures such as the so-called pearl necklaces. This behaviour, known from single-chain polyelectrolytes under similar environmental conditions and also predicted theoretically, could be detected for the first time for networks as well. An analysis of the total structure factors confirmed the first evidence for the existence of such structures found in experimental results.
Abstract:
Various inference procedures for linear regression models with censored failure times have been studied extensively. Recent developments on efficient algorithms to implement these procedures enhance the practical usage of such models in survival analysis. In this article, we present robust inferences for certain covariate effects on the failure time in the presence of "nuisance" confounders under a semiparametric, partial linear regression setting. Specifically, the estimation procedures for the regression coefficients of interest are derived from a working linear model and are valid even when the function of the confounders in the model is not correctly specified. The new proposals are illustrated with two examples and their validity for cases with practical sample sizes is demonstrated via a simulation study.
Abstract:
This book will serve as a foundation for a variety of useful applications of graph theory to computer vision, pattern recognition, and related areas. It covers a representative set of novel graph-theoretic methods for complex computer vision and pattern recognition tasks. The first part of the book presents the application of graph theory to low-level processing of digital images, such as a new method for partitioning a given image into a hierarchy of homogeneous areas using graph pyramids, and a study of the relationship between graph theory and digital topology. Part II presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, including a survey of graph-based methodologies for pattern recognition and computer vision, a presentation of a series of computationally efficient algorithms for testing graph isomorphism and related graph matching tasks in pattern recognition, and a new graph distance measure to be used for solving graph matching problems. Finally, Part III provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks. It includes a critical review of the main graph-based and structural methods for fingerprint classification, a new method to visualize time series of graphs, and potential applications in computer network monitoring and abnormal event detection.
Abstract:
We study a problem about shortest paths in Delaunay triangulations. Given two nodes s, t in the Delaunay triangulation of a point set P, we look for a new point p that can be added such that the shortest path from s to t in the Delaunay triangulation of P ∪ {p} improves as much as possible. We study properties of the problem and give efficient algorithms to find such a point both when the graph distance used is the Euclidean distance and for the link distance. Several other variations of the problem are also discussed.
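As a baseline for the quantity being optimized in this abstract, the following sketch (not from the paper) computes the Euclidean-weighted shortest s-t path in the Delaunay triangulation of P using scipy; evaluating a candidate point p would repeat the computation on P ∪ {p}:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def delaunay_shortest_path(P, s, t):
    """Length of the shortest s-t path in the Delaunay triangulation of P (Euclidean weights)."""
    P = np.asarray(P, float)
    tri = Delaunay(P)
    n = len(P)
    W = lil_matrix((n, n))
    for simplex in tri.simplices:                # the three edges of every triangle
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            W[a, b] = W[b, a] = np.linalg.norm(P[a] - P[b])
    dist = dijkstra(W.tocsr(), indices=s)
    return dist[t]

pts = np.random.default_rng(1).random((20, 2))   # hypothetical point set P
print(delaunay_shortest_path(pts, s=0, t=1))
```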