14 results for graph matching algorithms

in Archivo Digital para la Docencia y la Investigación - Institutional Repository of the University of the Basque Country


Relevance: 80.00%

Abstract:

This report is an introduction to the concept of treewidth, a property of graphs that has important implications in algorithms. Some basic concepts of graph theory are presented in the first chapter for readers who are not familiar with the notation. In Chapter 2, the definition of treewidth and several different ways of characterizing it are explained. The last two chapters focus on the algorithmic implications of treewidth, which are very relevant in Computer Science. An algorithm to compute the treewidth of a graph is presented, and its result can later be applied to many other problems in graph theory, such as those introduced in the last chapter.
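
As a hedged illustration of the algorithmic side (the example is not taken from the report), the following Python sketch uses the min-degree heuristic shipped with NetworkX, which returns an upper bound on the treewidth together with a tree decomposition:

import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

G = nx.petersen_graph()                          # example graph
width, decomposition = treewidth_min_degree(G)   # heuristic upper bound

print("treewidth upper bound:", width)           # the exact treewidth of the Petersen graph is 4
for bag in decomposition.nodes:                  # each node of the decomposition is a bag of vertices
    print(sorted(bag))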

Relevance: 20.00%

Abstract:

Ventricular fibrillation (VF) is the first rhythm recorded in 40% of sudden deaths from out-of-hospital cardiac arrest (OHCA). The only effective treatment for VF is defibrillation by means of an electrical shock. Outside the hospital, the shock is delivered by an automated external defibrillator (AED), which first analyses the patient's electrocardiogram (ECG) and checks whether it presents a shockable rhythm. Survival in an OHCA case depends fundamentally on two factors: early defibrillation and early cardiopulmonary resuscitation (CPR), which prolongs VF and therefore the opportunity for defibrillation. A correct analysis of the cardiac rhythm requires interrupting CPR because, due to the chest compressions, CPR introduces artifacts into the ECG. Unfortunately, interrupting CPR negatively affects the success of defibrillation. In 2003 the use of the AED was approved for patients between 1 and 8 years of age. AEDs, which were originally designed for adult patients, must discriminate pediatric arrhythmias accurately for their use in children to be safe. Several AEDs have been adapted for pediatric use, either by demonstrating the accuracy of the adult algorithms on pediatric arrhythmias or by means of algorithms specific to pediatric arrhythmias. This thesis presents a new AED algorithm designed jointly for adult and pediatric patients. The algorithm has been tested exhaustively on databases compliant with the requirements of the American Heart Association (AHA), and on resuscitation recordings with and without CPR artifact. The work began with a long experimental phase in which a total of 1090 pediatric rhythms were retrospectively collected and classified. In addition, an adult arrhythmia database was reviewed and 928 new adult rhythms were added. The final database contains 2782 records, of which 1270 were used to design the algorithm and 1512 to validate it. Next, a new AED algorithm composed of four subalgorithms was designed. These subalgorithms are based on a set of new parameters for arrhythmia detection, computed in several signal domains such as time, frequency, slope, or the autocorrelation function. The algorithm meets the AHA requirements for the detection of shockable and non-shockable rhythms in both adult and pediatric patients. The work concluded with an analysis of the behavior of the algorithm on real resuscitation episodes. For rhythms without CPR artifact, the AHA requirements were met. The accuracy of the algorithm during chest compressions was then studied, before and after filtering the CPR artifact. A new method developed over the course of the thesis was used to suppress the artifact. Shockable rhythms were detected accurately after filtering; non-shockable rhythms, however, were not.
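
The thesis does not publish its code and the feature names below are hypothetical; the following Python sketch only illustrates the kind of parameters an AED algorithm can compute from an ECG segment in the time, slope, frequency and autocorrelation domains before classifying the rhythm:

import numpy as np

def ecg_features(ecg, fs):
    # ecg: 1-D ECG segment, fs: sampling rate in Hz. Hypothetical features.
    slope = np.diff(ecg) * fs                        # slope-domain source signal
    mean_abs_slope = np.mean(np.abs(slope))

    spectrum = np.abs(np.fft.rfft(ecg - ecg.mean()))
    freqs = np.fft.rfftfreq(ecg.size, d=1.0 / fs)
    dominant_freq = freqs[np.argmax(spectrum)]       # frequency-domain feature

    centered = ecg - ecg.mean()
    ac = np.correlate(centered, centered, mode="full")
    ac = ac[ac.size // 2:] / ac[ac.size // 2]        # normalised autocorrelation
    return {"mean_abs_slope": mean_abs_slope,
            "dominant_freq_hz": dominant_freq,
            "autocorr_lag1": ac[1]}

# Example: 4 s of synthetic data sampled at 250 Hz
print(ecg_features(np.random.randn(1000), fs=250))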

Relevance: 20.00%

Abstract:

373 p. : ill., graphs, photographs, tables

Relevance: 20.00%

Abstract:

This study developed a framework for the shape optimization of aerodynamic profiles using computational fluid dynamics (CFD) and genetic algorithms. A genetic algorithm code and a commercial CFD code were integrated to develop a CFD shape optimization tool. The results obtained demonstrated the effectiveness of the developed tool. The shape optimization of airfoils was studied using different strategies to demonstrate the capability of the tool under different GA parameter combinations.
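
A minimal sketch of the genetic-algorithm side of such a framework follows (the parameterization and the fitness function are hypothetical; in the actual tool fitness would come from a CFD evaluation of the candidate airfoil shape):

import random

def fitness(shape):                     # placeholder for a CFD run, e.g. lift-to-drag ratio
    return -sum((x - 0.5) ** 2 for x in shape)

def evolve(pop_size=20, genes=6, generations=50, mut_rate=0.1):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)                # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.05) if random.random() < mut_rate else g
                     for g in child]                        # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best shape parameters:", [round(g, 3) for g in best])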

Relevance: 20.00%

Abstract:

Computer vision algorithms that use color information require color-constant images to operate correctly. Color constancy of the images is usually achieved in two steps: first the illuminant is detected, and then the image is transformed with a chromatic adaptation transform (CAT). Existing CAT methods use a single transformation matrix for all the colors of the input image. The method proposed in this paper requires multiple corresponding color pairs between source and target illuminants, given by patches of the Macbeth color checker. It uses Delaunay triangulation to divide the color gamut of the input image into small triangles. Each color of the input image is associated with the triangle containing its color point and transformed with the full linear model associated with that triangle. A full linear model is used because diagonal models are known to be inaccurate when the channel color matching functions do not have narrow peaks. Objective evaluation showed that the proposed method outperforms existing CAT methods by more than 21%; that is, it performs statistically significantly better than the existing methods.
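
A hedged Python sketch of the core idea (not the authors' code) follows: triangulate the known color pairs in a chromaticity plane, then map each color with the full linear model of the triangle it falls in. The color pairs below are made up; in the paper they come from Macbeth color checker patches.

import numpy as np
from scipy.spatial import Delaunay

# Toy corresponding colors (rows are RGB) under source and target illuminants
src_rgb = np.array([[0.9, 0.2, 0.2], [0.2, 0.9, 0.2], [0.2, 0.2, 0.9],
                    [0.8, 0.8, 0.2], [0.6, 0.3, 0.7]])
dst_rgb = src_rgb * np.array([0.95, 1.00, 1.10])     # fake illuminant shift

# Triangulate in a 2-D chromaticity plane (r, g) = (R, G) / (R + G + B)
chrom = src_rgb[:, :2] / src_rgb.sum(axis=1, keepdims=True)
tri = Delaunay(chrom)

def adapt(rgb):
    # Map one RGB color with the full linear model of its containing triangle.
    c = rgb[:2] / rgb.sum()
    t = tri.find_simplex(c)
    if t == -1:                                      # outside the gamut hull
        return rgb
    v = tri.simplices[t]                             # indices of the 3 patch pairs
    # Full 3x3 linear model M with M @ src_i = dst_i for the triangle's patches
    M = dst_rgb[v].T @ np.linalg.inv(src_rgb[v].T)
    return M @ rgb

print(adapt(np.array([0.7, 0.4, 0.3])))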

Relevance: 20.00%

Abstract:

This paper describes Mateda-2.0, a MATLAB package for estimation of distribution algorithms (EDAs). This package can be used to solve single and multi-objective discrete and continuous optimization problems using EDAs based on undirected and directed probabilistic graphical models. The implementation contains several methods commonly employed by EDAs. It is also conceived as an open package to allow users to incorporate different combinations of selection, learning, sampling, and local search procedures. Additionally, it includes methods to extract, process and visualize the structures learned by the probabilistic models. This way, it can unveil previously unknown information about the optimization problem domain. Mateda-2.0 also incorporates a module for creating and validating function models based on the probabilistic models learned by EDAs.
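
Mateda-2.0 itself is a MATLAB package; as a language-neutral illustration of the estimate-sample-select loop that EDAs implement (not Mateda's API), a minimal univariate binary EDA (UMDA) on the OneMax toy problem can be sketched in Python as follows:

import numpy as np

def umda(n_vars=30, pop_size=100, n_sel=50, generations=40, rng=None):
    rng = rng or np.random.default_rng(0)
    probs = np.full(n_vars, 0.5)                        # univariate model p(x_i = 1)
    for _ in range(generations):
        pop = (rng.random((pop_size, n_vars)) < probs).astype(int)   # sampling
        fitness = pop.sum(axis=1)                       # OneMax objective
        selected = pop[np.argsort(fitness)[-n_sel:]]    # truncation selection
        probs = selected.mean(axis=0).clip(0.05, 0.95)  # model learning
    return probs, fitness.max()

probs, best = umda()
print("best OneMax value:", best)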

Relevance: 20.00%

Abstract:

The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. Over the last decades, several algorithms have been proposed to learn probability distributions based on decomposable models, due to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a worst-case computational complexity of O(k · n^2 log n), where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one with higher likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree, which can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they have shown competitive behavior on the maximum likelihood problem. Due to their low computational complexity, they are especially recommended for high-dimensional domains.
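
For context, the k = 2 base case that the fractal tree algorithms generalise is Chow and Liu's algorithm; a hedged Python sketch of it (not the authors' implementation) estimates pairwise mutual information from data and keeps a maximum-weight spanning tree:

import numpy as np
import networkx as nx

def mutual_information(x, y):
    # Empirical mutual information between two discrete samples.
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu(data):
    # data: (n_samples, n_vars) array of discrete values.
    n_vars = data.shape[1]
    G = nx.Graph()
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            G.add_edge(i, j, weight=mutual_information(data[:, i], data[:, j]))
    return nx.maximum_spanning_tree(G)

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 500)
x1 = (x0 + (rng.random(500) < 0.1)) % 2           # x1 is a noisy copy of x0
x2 = rng.integers(0, 2, 500)                      # independent variable
tree = chow_liu(np.column_stack([x0, x1, x2]))
print(sorted(tree.edges()))                       # the edge (0, 1) should appear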

Relevance: 20.00%

Abstract:

Recently, probability models on rankings have been proposed in the field of estimation of distribution algorithms in order to solve permutation-based combinatorial optimisation problems. In particular, distance-based ranking models, such as the Mallows and Generalized Mallows models under the Kendall's-τ distance, have demonstrated their validity when solving this type of problem. Nevertheless, there are still many directions that deserve further study. In this paper, we extend the use of distance-based ranking models in the framework of EDAs by introducing new distance metrics, namely Cayley and Ulam. In order to analyse the performance of the Mallows and Generalized Mallows EDAs under the Kendall, Cayley and Ulam distances, we ran them on a benchmark of 120 instances from four well-known permutation problems. The experiments showed that no single metric performs best on all the problems. However, the statistical tests pointed out that the Mallows-Ulam EDA is the most stable algorithm among the studied proposals.
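
As a hedged illustration (not the authors' code) of the three metrics compared in the paper, the following Python snippet computes the Kendall's-τ, Cayley and Ulam distances between a permutation and the identity:

from bisect import bisect_left

def kendall_tau(perm):
    # Number of pairwise inversions with respect to the identity.
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

def cayley(perm):
    # Minimum number of transpositions: n minus the number of cycles.
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return len(perm) - cycles

def ulam(perm):
    # n minus the length of the longest increasing subsequence.
    tails = []
    for v in perm:
        pos = bisect_left(tails, v)
        if pos == len(tails):
            tails.append(v)
        else:
            tails[pos] = v
    return len(perm) - len(tails)

sigma = [2, 0, 1, 4, 3]                  # permutation of {0, ..., 4}
print(kendall_tau(sigma), cayley(sigma), ulam(sigma))   # 3 3 2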

Relevance: 20.00%

Abstract:

This work is aimed at optimizing the wind turbine rotor speed setpoint algorithm. Several intelligent adjustment strategies were investigated in order to improve a reward function that takes into account the power captured from the wind and the turbine speed error. After trying different approaches, including Reinforcement Learning, the best results were obtained using a Particle Swarm Optimization (PSO)-based wind turbine speed setpoint algorithm. A reward improvement of up to 10.67% was achieved using PSO compared to a constant approach, and of 0.48% compared to a conventional approach. We conclude that the pitch angle is the most suitable input variable for the turbine speed setpoint algorithm, compared to alternatives such as rotor speed or rotor angular acceleration.
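
A minimal PSO sketch follows (the reward function below is a hypothetical placeholder; in the paper the reward combines captured wind power and turbine speed error, evaluated on the turbine model):

import numpy as np

def reward(params):                      # placeholder for the simulated reward
    return -np.sum((params - 0.3) ** 2)

def pso(dim=2, n_particles=15, iters=100, w=0.7, c1=1.5, c2=1.5, rng=None):
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pbest_val = x.copy(), np.array([reward(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([reward(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

print(pso())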
