898 results for "Algoritmos Evolutivos"
Abstract:
A challenge that remains in robotics is how to make a robot react in real time to visual stimuli. Traditional computer vision algorithms used to address this problem are computationally expensive, taking too long on common processors; even simple operations such as image filtering or mathematical morphology can be prohibitively slow. Researchers have implemented image processing algorithms on highly parallel hardware devices in order to cut down processing time, with good results. Building on hardware-implemented image processing techniques and a platform-oriented system based on the Nios II processor, we propose an approach that combines hardware processing with event-based programming to simplify vision-based systems while accelerating parts of the algorithms involved.
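To make the cost concrete, here is a minimal sketch of the kind of morphological operation the abstract refers to: a 3×3 binary erosion, written in Python with NumPy purely for illustration. On a sequential processor it touches every pixel nine times; in hardware the same operation reduces to one 9-input AND per pixel, evaluated in parallel. The function name and array layout are our own assumptions, not part of the original system.

```python
import numpy as np

def erode3x3(img):
    """Binary erosion with a 3x3 structuring element (illustrative).

    Each output pixel is 1 only if its whole 3x3 neighbourhood is 1.
    In hardware this becomes a 9-input AND gate evaluated for every
    pixel at once, which is what makes FPGA implementations fast.
    """
    padded = np.pad(img, 1, mode="constant")       # zero border
    out = np.ones_like(img)
    for dy in (0, 1, 2):                           # accumulate the AND
        for dx in (0, 1, 2):                       # over all 9 offsets
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# Example: erode a random 8x8 binary image.
mask = (np.random.rand(8, 8) > 0.5).astype(np.uint8)
print(erode3x3(mask))
```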
Abstract:
The modeling of industrial processes has aided production and cost minimization, enabling the prediction of future system behavior, process supervision, and controller design. Given the benefits provided by modeling, the first objective of this dissertation is to present a methodology for identifying nonlinear models with NARX structure, through the implementation of combined algorithms for structure detection and parameter estimation. Initially, the importance of system identification in the optimization of industrial processes is highlighted, specifically the choice of a model that adequately represents the system dynamics. Next, a brief review of the stages that make up system identification is presented. The fundamental methods for model structure detection (Modified Gram-Schmidt) and parameter estimation (Least Squares and Extended Least Squares) are then described. The implemented algorithms are also used to identify two distinct industrial processes: a didactic level plant, which allows level and flow control, and a simulated primary petroleum processing plant, which represents the primary treatment of petroleum carried out on offshore platforms. The dissertation concludes with an evaluation of the performance of the obtained models when compared with the real systems; from this evaluation, it is possible to observe whether the identified models are capable of representing the static and dynamic characteristics of the systems presented in this dissertation.
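As a minimal sketch of the parameter estimation step described above, the following fits a linear ARX model (a special case of the NARX family) by ordinary least squares. The orders `na` and `nb` and the function name are illustrative assumptions; structure detection (e.g. Modified Gram-Schmidt) would decide which regressor columns actually enter the model.

```python
import numpy as np

def estimate_arx(y, u, na=2, nb=2):
    """Least-squares fit of y(k) = sum_i a_i*y(k-i) + sum_j b_j*u(k-j).

    Builds a regressor matrix from delayed outputs and inputs and
    solves it in the least-squares sense. Structure detection would
    choose which of these columns to keep in the final model.
    """
    k0 = max(na, nb)
    rows = [[y[k - i] for i in range(1, na + 1)] +
            [u[k - j] for j in range(1, nb + 1)]
            for k in range(k0, len(y))]
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, np.asarray(y)[k0:], rcond=None)
    return theta  # [a_1..a_na, b_1..b_nb]
```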
Abstract:
In this work, we study and compare two percolation algorithms, one developed by Elias and the other by Newman and Ziff, using theoretical tools of algorithm complexity analysis and an additional algorithm that performs an experimental comparison. The work is divided into three chapters. The first covers the definitions and theorems necessary for a more formal mathematical study of percolation. The second presents the techniques used to estimate algorithm complexity: worst-case, best-case, and average-case analysis; we use worst-case analysis to estimate the complexity of both algorithms so that they can be compared. The last chapter discusses several characteristics of each algorithm and, through the theoretical complexity estimates and a comparison of the execution times of the most important part of each, compares these two important percolation simulation algorithms.
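The Newman-Ziff algorithm mentioned above owes its near-linear complexity to union-find cluster merging: sites are occupied one at a time and clusters joined in nearly constant amortized time. The sketch below is a simplified site-percolation version on an L×L square lattice (our own simplification, not the authors' code), showing where that bound comes from.

```python
import random

def newman_ziff(L):
    """Occupy sites of an L x L square lattice in random order, merging
    clusters with union-find; returns the largest cluster size after
    each addition (a quantity typically tracked for percolation)."""
    n = L * L
    parent = list(range(n))
    size = [1] * n
    occupied = [False] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    largest, history = 0, []
    for s in random.sample(range(n), n):
        occupied[s] = True
        r, c = divmod(s, L)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < L and 0 <= nc < L and occupied[nr * L + nc]:
                a, b = find(s), find(nr * L + nc)
                if a != b:                   # union by size
                    if size[a] < size[b]:
                        a, b = b, a
                    parent[b] = a
                    size[a] += size[b]
        largest = max(largest, size[find(s)])
        history.append(largest)
    return history

print(newman_ziff(32)[-1])  # whole lattice occupied at the end
```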
Abstract:
Clustering data is a very important task in data mining, image processing, and pattern recognition problems. One of the most popular clustering algorithms is Fuzzy C-Means (FCM). This thesis proposes a new way of calculating the cluster centers in the FCM procedure, called ckMeans, and applies it to some FCM variants, in particular those that use other distance measures. The goal of this change is to reduce the number of iterations and the processing time of these algorithms without affecting the quality of the partition, or even to improve the number of correct classifications in some cases. We also developed an algorithm based on ckMeans to handle interval data with interval membership degrees. This algorithm allows the representation of the data without converting interval data into point data, as happens in other extensions of FCM that deal with interval data. To validate the proposed methodologies, a comparison was made among the ckMeans, K-Means, and FCM algorithms (since the proposed center calculation is similar to that of K-Means), considering three different distances and several well-known databases. The results of interval ckMeans were then compared with those of other clustering algorithms on an interval database of minimum and maximum monthly temperatures for a given year, covering 37 cities distributed across the continents.
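For reference, here is a minimal sketch of the classic FCM loop that ckMeans modifies: it alternates the fuzzy-weighted center update with the standard membership update. The center step shown is the textbook rule, not the proposed ckMeans rule, and all names are illustrative.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=None):
    """Classic Fuzzy C-Means: alternate the center and membership
    updates until the membership matrix stops changing."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # columns sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)  # fuzzy means
        d = np.linalg.norm(X[None] - centers[:, None], axis=2)
        d = np.fmax(d, 1e-12)               # guard against zero distance
        inv = d ** (-2.0 / (m - 1))
        U_new = inv / inv.sum(axis=0)       # standard FCM membership rule
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```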
Abstract:
The use of clustering methods for the discovery of cancer subtypes has drawn a great deal of attention in the scientific community. While bioinformaticians have proposed new clustering methods that take advantage of characteristics of gene expression data, the medical community has a preference for using classic clustering methods. No study thus far has performed a large-scale evaluation of different clustering methods in this context. This work presents the first large-scale analysis of seven clustering methods and four proximity measures on 35 cancer gene expression data sets. The results reveal that the finite mixture of Gaussians, followed closely by k-means, exhibited the best performance in terms of recovering the true structure of the data sets. These methods also exhibited, on average, the smallest difference between the actual number of classes in the data sets and the best number of clusters as indicated by our validation criteria. Furthermore, hierarchical methods, which have been widely used by the medical community, exhibited poorer recovery performance than the other methods evaluated. Moreover, as a stable basis for the assessment and comparison of different clustering methods for cancer gene expression data, this study provides a common group of data sets (benchmark data sets) to be shared among researchers and used for comparisons with new methods.
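A small sketch of the kind of comparison reported above, using scikit-learn on synthetic data (a stand-in for the 35 gene expression data sets, which are not reproduced here): both k-means and a finite mixture of Gaussians are scored against the known classes with the adjusted Rand index.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for a gene expression data set with known classes.
X, y_true = make_blobs(n_samples=300, centers=4, random_state=0)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
gm = GaussianMixture(n_components=4, random_state=0).fit_predict(X)

# Corrected-for-chance agreement with the true labels (1.0 = perfect).
print("k-means ARI:", adjusted_rand_score(y_true, km))
print("mixture ARI:", adjusted_rand_score(y_true, gm))
```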
Abstract:
A 3D binary image is considered well-composed if, and only if, the union of the faces shared by the foreground and background voxels of the image is a surface in R³. Well-composed images have desirable topological properties that allow us to simplify and optimize algorithms widely used in computer graphics, computer vision, and image processing. These advantages have fostered the development of algorithms for repairing two-dimensional (2D) and three-dimensional (3D) images that are not well-composed, known as repairing algorithms. In this dissertation, we propose two repairing algorithms, one randomized and one deterministic. Both are capable of making topological repairs in 3D binary images, producing well-composed images similar to the originals. The key idea behind both algorithms is to iteratively change the assigned color of some points in the input image from 0 (background) to 1 (foreground) until the image becomes well-composed. The points whose colors are changed are chosen according to their values in the fuzzy connectivity map resulting from the image segmentation process; the use of this map ensures that the subset of points chosen at any given iteration is the one with the least affinity with the background among all possible choices.
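The 2D analogue of the well-composedness test is easy to state and check: an image fails exactly where some 2×2 block holds a diagonal "checkerboard" (a critical configuration). The sketch below handles 2D images only; the 3D condition in the dissertation additionally forbids critical configurations in 2×2×2 blocks, which we do not implement here.

```python
import numpy as np

def is_well_composed_2d(img):
    """True iff a 2D binary image has no 'critical configuration':
    a 2x2 block with two diagonal 1s and two diagonal 0s."""
    a = img[:-1, :-1]; b = img[:-1, 1:]    # the four corners of
    c = img[1:, :-1];  d = img[1:, 1:]     # every 2x2 block at once
    diag1 = (a == 1) & (d == 1) & (b == 0) & (c == 0)
    diag2 = (b == 1) & (c == 1) & (a == 0) & (d == 0)
    return not (diag1 | diag2).any()

print(is_well_composed_2d(np.array([[1, 0], [0, 1]])))  # False
print(is_well_composed_2d(np.array([[1, 1], [0, 1]])))  # True
```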
Abstract:
Classifier ensembles are systems composed of a set of individual classifiers and a combination module responsible for providing the final output of the system. In the design of these systems, diversity is considered one of the main aspects to take into account, since there is no gain in combining identical classification methods. The ideal situation is a set of individual classifiers with uncorrelated errors; in other words, the individual classifiers should be diverse among themselves. One way of increasing diversity is to provide different data sets (patterns and/or attributes) to the individual classifiers: diversity increases because the classifiers perform the same task (classification of the same input patterns) but are built using different subsets of patterns and/or attributes. Most papers using feature selection for ensembles address homogeneous ensemble structures, i.e., ensembles composed of a single type of classifier. In this investigation, two genetic algorithm approaches (single-objective and multi-objective) are used to guide the distribution of features among the classifiers in the context of homogeneous and heterogeneous ensembles. The experiments are divided into two phases that use a filter approach to feature selection guided by the genetic algorithm.
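As a sketch of how a single-objective GA can distribute features among ensemble members, consider bitmask chromosomes, one mask per classifier, evolved against a user-supplied fitness (a filter criterion in the dissertation; an arbitrary callable here). The names and the truncation-selection scheme are our own simplifications.

```python
import random

def ga_feature_masks(n_features, n_classifiers, fitness, gens=50,
                     pop_size=20, p_mut=0.05):
    """Single-objective GA over bitmask chromosomes.

    A chromosome is one feature mask per ensemble member; `fitness`
    scores the whole ensemble (e.g. a filter criterion per subset).
    """
    def random_ind():
        return [[random.randint(0, 1) for _ in range(n_features)]
                for _ in range(n_classifiers)]

    def mutate(ind):
        return [[bit ^ (random.random() < p_mut) for bit in mask]
                for mask in ind]

    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```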
Abstract:
Algorithms and Programming courses prove to be a real obstacle for many students in computing programs. Students who are unfamiliar with the new ways of thinking these courses require, or who lack certain skills they demand, encounter difficulties that often result in failure and dropout. Faced with this problem, a survey on the difficulties experienced by students was conducted as a way to understand the problem and to guide solutions aimed at resolving or easing those difficulties. This paper describes a classroom methodology based on David Ausubel's theory of Meaningful Learning. In addition to this theory, a tool developed at UFRN, named Takkou, was used to better motivate students in algorithms classes and to exercise logical reasoning. Finally, a comparative evaluation of the suggested methodology against the traditional one was carried out, and the results are discussed.
Abstract:
In the development of synthetic agents for education, questions remain about what behavior can be considered truly plausible for this type of agent, what makes the agent effective in transmitting knowledge, and what role emotions play in this process. This work is investigative in nature, attempting to discover which aspects are important for consistent behavior, and it covers the practical development of a chatterbot acting as a virtual tutor in the context of learning algorithms. We explain the basics of agents, Intelligent Tutoring Systems, bots, and chatterbots, and how these systems need to convey credibility through their behavior. Models of emotion, personality, and humor for computational agents are also covered, as well as previous studies by other researchers in the area. Finally, the prototype is detailed, along with the research conducted, a summary of the results achieved, the architectural model of the system, the computing vision, and a macro view of the implemented features.
An experimental analysis of exact algorithms applied to the multiobjective spanning tree problem
Abstract:
The Multiobjective Spanning Tree Problem is NP-hard and models applications in several areas. This research presents an experimental analysis of different strategies used in the literature to develop exact algorithms for the problem. Initially, the algorithms are classified according to the approaches used to solve the problem; features of two or more approaches can be found in some of them. The approaches investigated here are: the two-stage method, branch-and-bound, k-best, and the preference-based approach. The main contribution of this research lies in the fact that no work to date has reported a systematic experimental analysis of exact algorithms for the Multiobjective Spanning Tree Problem; this work can therefore serve as a basis for other studies dealing with the same problem. The computational experiments compare the performance of the algorithms regarding processing time, efficiency as a function of the number of objectives, and the number of solutions found within a controlled time interval. The analysis was performed on known instances of the problem, as well as on instances obtained from a generator commonly used in the literature.
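One subroutine that all of the approaches above share is the Pareto dominance test used to keep only non-dominated cost vectors. A minimal sketch, assuming minimization and our own naming:

```python
def pareto_filter(vectors):
    """Keep only non-dominated cost vectors (minimization).

    v dominates w when v <= w componentwise and v != w; every exact
    approach compared here relies on some form of this test.
    """
    def dominates(v, w):
        return all(a <= b for a, b in zip(v, w)) and v != w

    return [v for v in vectors
            if not any(dominates(w, v) for w in vectors)]

# Example: (4, 6) is dominated by (3, 5) and is filtered out.
print(pareto_filter([(3, 5), (4, 4), (5, 3), (4, 6)]))
```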
Abstract:
Nonogram is a logical puzzle whose associated decision problem is NP-complete. It has applications in pattern recognition and data compression, among other areas. The puzzle consists of determining an assignment of colors to the pixels of an N × M matrix that satisfies line and column constraints: a Nonogram is encoded by a vector whose elements specify the number of consecutive pixels in each row and column of a figure without specifying their coordinates. This work presents exact and heuristic approaches to solving Nonograms. Depth-first search was one of the chosen exact approaches because it is a typical example of a brute-force search algorithm that is easy to implement. Another exact approach was based on the Las Vegas algorithm, with which we investigate whether the randomness introduced by the Las Vegas-based algorithm is an advantage over depth-first search. The Nonogram is also transformed into a Constraint Satisfaction Problem. Three heuristic approaches are proposed: a Tabu Search and two memetic algorithms, along with a new way of calculating the objective function. The approaches are applied to 234 instances, with sizes ranging from 5 × 5 to 100 × 100, including both logical and random Nonograms.
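To fix the notation, here is a bare sketch of the brute-force depth-first approach described above: enumerate every placement of a row's blocks, extend the grid row by row, and prune whenever a partial column can no longer match its clue. This illustrates the exact method only, under our own naming, and omits the Las Vegas and heuristic variants.

```python
def row_options(clue, width):
    """All 0/1 rows of length `width` whose runs of 1s equal `clue`."""
    if not clue:
        return [[0] * width]
    head, rest = clue[0], clue[1:]
    tail_min = sum(rest) + len(rest)          # space the rest still needs
    options = []
    for start in range(width - head - tail_min + 1):
        prefix = [0] * start + [1] * head
        if rest:
            for tail in row_options(rest, width - len(prefix) - 1):
                options.append(prefix + [0] + tail)
        else:
            options.append(prefix + [0] * (width - len(prefix)))
    return options

def runs(cells):
    """Lengths of maximal runs of 1s in a 0/1 sequence."""
    out, n = [], 0
    for cell in cells:
        if cell:
            n += 1
        elif n:
            out.append(n); n = 0
    if n:
        out.append(n)
    return out

def solve(row_clues, col_clues, grid=()):
    """Row-by-row depth-first search with weak column pruning."""
    if len(grid) == len(row_clues):           # complete: exact column check
        return grid if all(runs(col) == list(cl)
                           for col, cl in zip(zip(*grid), col_clues)) else None
    for row in row_options(row_clues[len(grid)], len(col_clues)):
        g = grid + (tuple(row),)
        # prune: no partial column may already exceed its clue
        if all(len(runs(col)) <= len(cl) and
               all(a <= b for a, b in zip(runs(col), cl))
               for col, cl in zip(zip(*g), col_clues)):
            result = solve(row_clues, col_clues, g)
            if result:
                return result
    return None

# A plus sign: unique solution ((0,1,0), (1,1,1), (0,1,0)).
print(solve([(1,), (3,), (1,)], [(1,), (3,), (1,)]))
```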
Abstract:
Scientific Algorithms are a new metaheuristic inspired by the scientific research process. The new method introduces the idea of a theme to search the solution space of hard problems. The inspiration for this class of algorithms comes from the act of doing research, which comprises thinking, knowledge sharing, and disclosing new ideas. The ideas of the new method are illustrated on the Traveling Salesman Problem. A computational experiment applies the proposed approach to a new variant of the Traveling Salesman Problem named the Car Renter Salesman Problem. The results are compared to state-of-the-art algorithms for the latter problem.
Abstract:
Symbolic Data Analysis (SDA) mainly aims to provide tools for reducing large databases in order to extract knowledge, and techniques to describe such data in complex units such as intervals or histograms. The objective of this work is to extend classical clustering methods for symbolic interval data based on interval-based distances. The main advantage of using an interval-based distance for interval data lies in the fact that it preserves the underlying imprecision of the intervals, which is usually lost when real-valued distances are applied. This work also includes an approach that allows existing validation indices to be adapted to the interval context. The proposed methods with interval-based distances are compared with the point-based distances existing in the literature through experiments with simulated and real interval data.
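One natural choice of interval-based distance is the Hausdorff distance between intervals, which the sketch below uses; whether this is the exact distance adopted in the work is an assumption on our part. Unlike a distance computed on interval midpoints, it does not discard the interval width (the imprecision).

```python
def interval_hausdorff(x, y):
    """Hausdorff distance between interval vectors x and y, where each
    component is a (lower, upper) pair. Unlike a midpoint distance,
    it keeps the width (imprecision) of each interval."""
    return sum(max(abs(a_lo - b_lo), abs(a_hi - b_hi))
               for (a_lo, a_hi), (b_lo, b_hi) in zip(x, y))

# A midpoint distance would call these intervals equal; Hausdorff does not:
print(interval_hausdorff([(0, 10)], [(4, 6)]))  # -> 4
```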
Abstract:
This work presents a hybrid transgenetic algorithm for solving a Natural Gas Distribution Network Configuration Problem. The configuration of such networks requires defining a layout along which the pipes are placed to serve the customers. This work studies a way of connecting the customers in a network with a tree-shaped architecture. The objective is to minimize the construction cost of the network, even if some customers who do not yield profit are left unserved. This problem can be formulated computationally as the Prize-Collecting Steiner Tree Problem, a combinatorial optimization problem in the NP-hard class. This work presents a heuristic algorithm for solving the problem. The approach used is called Transgenetic Algorithms, which fall into the category of evolutionary algorithms. A primal-dual algorithm is used to generate initial solutions, and path-relinking is used as an intensification procedure.
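A minimal sketch of the objective being minimized, under our own naming conventions: the pipe-laying cost of the chosen tree plus the prizes forfeited for customers left out of the network, which is the Prize-Collecting Steiner trade-off the abstract describes.

```python
def pcst_objective(tree_edges, connected, prizes, cost):
    """Prize-Collecting Steiner objective: edge costs of the chosen
    tree plus the prizes lost on customers left unconnected."""
    laying = sum(cost[e] for e in tree_edges)
    forfeited = sum(p for v, p in prizes.items() if v not in connected)
    return laying + forfeited

# Toy network: serving only customer "a" forfeits b's prize of 3.
cost = {("s", "a"): 5}
prizes = {"a": 10, "b": 3}
print(pcst_objective([("s", "a")], {"s", "a"}, prizes, cost))  # 5 + 3 = 8
```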