80 results for "Eficácia de algoritmos" (algorithm effectiveness)
Abstract:
Although some individual supervised Machine Learning (ML) techniques, also known as classifiers or classification algorithms, provide solutions that are usually considered efficient, experimental results obtained on large pattern sets, and/or on sets containing a significant amount of irrelevant data or incomplete features, show a decrease in the accuracy of these techniques. In other words, such techniques cannot perform pattern recognition efficiently in complex problems. In order to improve the performance and efficiency of these ML techniques, the idea arose of making several types of ML algorithm work together, giving origin to the term Multi-Classifier System (MCS). An MCS has different ML algorithms as components, called base classifiers, and combines the results obtained by these algorithms to reach the final result. For an MCS to perform better than its base classifiers, the results obtained by each base classifier must present a certain diversity, that is, a difference between the results produced by each classifier that composes the system; there is no point in building an MCS whose base classifiers give identical answers to the same patterns. Although MCSs present better results than individual systems, there is a constant search for further improvement of this type of system. Aiming at this improvement, at more consistent results, and at greater diversity among the classifiers of an MCS, methodologies have recently been investigated whose characteristic is the use of weights, or confidence values. These weights can describe the importance that a given classifier had when assigning each pattern to a given class, and they are combined with the outputs of the classifiers during the recognition (use) phase of the MCS. There are different ways of computing these weights, which can be divided into two categories: static weights and dynamic weights. The first category is characterized by values that do not change during the classification process; in the second category, the values are modified during classification. In this work, an analysis is carried out to verify whether the use of weights, both static and dynamic, can increase the performance of MCSs in comparison with individual systems. Moreover, the diversity obtained by the MCSs is analyzed, in order to verify whether there is a relation between the use of weights in MCSs and different levels of diversity.
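To make the static/dynamic distinction concrete, here is a minimal sketch (illustrative only, not the dissertation's method; confidence-based dynamic weighting is one common textbook choice) of weighted combination of base-classifier outputs:

```python
# Minimal sketch (illustrative only, not the dissertation's method): one
# common way to combine base-classifier outputs with static weights, fixed
# in advance, versus dynamic weights recomputed for each input pattern.
import numpy as np

def combine_static(probas, weights):
    """probas: (n_classifiers, n_classes) class-probability outputs;
    weights: fixed per classifier (e.g., validation accuracy)."""
    probas = np.asarray(probas)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize the static weights
    return int(np.argmax(w @ probas))      # class with highest weighted score

def combine_dynamic(probas):
    """Weights recomputed per pattern: here, each classifier is weighted by
    its own confidence (maximum posterior) for the current pattern."""
    probas = np.asarray(probas)
    w = probas.max(axis=1)                 # per-pattern confidence
    w = w / w.sum()
    return int(np.argmax(w @ probas))

# three base classifiers, three classes, one input pattern
outputs = [[0.7, 0.2, 0.1], [0.4, 0.5, 0.1], [0.3, 0.3, 0.4]]
print(combine_static(outputs, weights=[0.9, 0.8, 0.6]))
print(combine_dynamic(outputs))
```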
Abstract:
The use of clustering methods for the discovery of cancer subtypes has drawn a great deal of attention in the scientific community. While bioinformaticians have proposed new clustering methods that take advantage of characteristics of the gene expression data, the medical community has a preference for using classic clustering methods. There have been no studies thus far performing a large-scale evaluation of different clustering methods in this context. This work presents the first large-scale analysis of seven different clustering methods and four proximity measures for the analysis of 35 cancer gene expression data sets. Results reveal that the finite mixture of Gaussians, followed closely by k-means, exhibited the best performance in terms of recovering the true structure of the data sets. These methods also exhibited, on average, the smallest difference between the actual number of classes in the data sets and the best number of clusters as indicated by our validation criteria. Furthermore, hierarchical methods, which have been widely used by the medical community, exhibited a poorer recovery performance than that of the other methods evaluated. Moreover, as a stable basis for the assessment and comparison of different clustering methods for cancer gene expression data, this study provides a common group of data sets (benchmark data sets) to be shared among researchers and used for comparisons with new methods.
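As a hedged illustration of this kind of comparison (the study's actual protocol and data are not reproduced here; the snippet uses synthetic stand-in data and scikit-learn's implementations), one can score how well a finite mixture of Gaussians and k-means recover known classes:

```python
# Minimal sketch (illustrative; synthetic stand-in data, scikit-learn
# implementations): scoring how well a finite mixture of Gaussians and
# k-means recover known classes, via the adjusted Rand index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# 60 samples x 10 features drawn from three shifted groups (toy "subtypes")
X = np.vstack([rng.normal(loc=m, size=(20, 10)) for m in (0.0, 2.0, 4.0)])
y_true = np.repeat([0, 1, 2], 20)

for k in range(2, 6):                      # scan candidate cluster numbers
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k,
          adjusted_rand_score(y_true, gmm.predict(X)),
          adjusted_rand_score(y_true, km.labels_))
```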
Abstract:
A 3D binary image is considered well-composed if, and only if, the union of the faces shared by the foreground and background voxels of the image is a surface in R^3. Well-composed images have some desirable topological properties, which allow us to simplify and optimize algorithms that are widely used in computer graphics, computer vision and image processing. These advantages have fostered the development of algorithms to repair two-dimensional (2D) and three-dimensional (3D) images that are not well-composed. These algorithms are known as repairing algorithms. In this dissertation, we propose two repairing algorithms, one randomized and one deterministic. Both algorithms are capable of making topological repairs in 3D binary images, producing well-composed images similar to the original images. The key idea behind both algorithms is to iteratively change the assigned color of some points in the input image from 0 (background) to 1 (foreground) until the image becomes well-composed. The points whose colors are changed by the algorithms are chosen according to their values in the fuzzy connectivity map resulting from the image segmentation process. The use of the fuzzy connectivity map ensures that the subset of points chosen by the algorithm at any given iteration is the one with the least affinity with the background among all possible choices.
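A rough skeleton of this iterative idea follows (illustrative; find_critical_voxels is a hypothetical stand-in for the detection of configurations that violate well-composedness, which is the technical core of the dissertation and is not reproduced here):

```python
# Rough skeleton (illustrative; 'find_critical_voxels' is a hypothetical
# stand-in, not the dissertation's code).
import numpy as np

def repair(image, fuzzy_map, find_critical_voxels):
    """image: 3D array over {0,1}; fuzzy_map: affinity of each voxel to the
    background (lower = safer to flip); find_critical_voxels(img) -> list of
    voxel coordinates involved in ill-composed configurations."""
    img = image.copy()
    while True:
        candidates = find_critical_voxels(img)
        if not candidates:
            return img                     # no violations left: well-composed
        # flip the background voxel with the least affinity to the background
        p = min(candidates, key=lambda v: fuzzy_map[tuple(v)])
        img[tuple(p)] = 1                  # 0 (background) -> 1 (foreground)
```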
Abstract:
Classifier ensembles are systems composed of a set of individual classifiers and a combination module, which is responsible for providing the final output of the system. In the design of these systems, diversity is considered one of the main aspects to be taken into account, since there is no gain in combining identical classification methods. The ideal situation is a set of individual classifiers with uncorrelated errors; in other words, the individual classifiers should be diverse among themselves. One way of increasing diversity is to provide different datasets (patterns and/or attributes) to the individual classifiers. Diversity is increased because the individual classifiers perform the same task (classification of the same input patterns) but are built using different subsets of patterns and/or attributes. The majority of the papers using feature selection for ensembles address homogeneous ensemble structures, i.e., ensembles composed of only one type of classifier. In this investigation, two genetic algorithm approaches (single- and multi-objective) are used to guide the distribution of the features among the classifiers, in the context of both homogeneous and heterogeneous ensembles. The experiments are divided into two phases that use a filter approach to feature selection guided by a genetic algorithm.
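A minimal single-objective sketch of this idea (not the thesis code; the fitness below is a toy filter criterion based on mutual information, and all parameters are illustrative) could evolve one binary feature mask per ensemble member as follows:

```python
# Toy single-objective GA sketch (illustrative, not the thesis code): evolve
# a binary feature mask, scored by a filter criterion computed once upfront.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # two informative features
mi = mutual_info_classif(X, y, random_state=0) # filter scores, computed once

def fitness(mask):
    # reward informative features, lightly penalize mask size (toy criterion)
    return mi[mask].sum() - 0.01 * mask.sum()

pop = rng.random((40, X.shape[1])) < 0.5        # population of feature masks
for _ in range(50):                             # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-20:]]     # truncation selection
    cuts = rng.integers(1, X.shape[1], size=20) # one-point crossover
    children = np.array([np.concatenate([a[:c], b[c:]])
                         for a, b, c in zip(parents,
                                            np.roll(parents, 1, 0), cuts)])
    children ^= rng.random(children.shape) < 0.02  # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]   # mask for one classifier
print(best.nonzero()[0])                           # selected feature indices
```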
Abstract:
The course of Algorithms and Programming proves to be a real obstacle for many students of computing degrees. Students who are not familiar with the new ways of thinking the course requires, and who lack certain skills it demands, encounter difficulties that sometimes result in failure and dropout. Faced with this problem, a survey on the difficulties experienced by students was conducted as a way to understand the problem and to guide solutions that try to solve or mitigate those difficulties. This work describes a methodology, to be applied in the classroom, based on the concepts of Meaningful Learning of David Ausubel. In addition to this theory, a tool developed at UFRN, named Takkou, was used with the intent of better motivating students in algorithms classes and of exercising logical reasoning. Finally, a comparative evaluation of the suggested methodology and the traditional methodology was carried out, and the results were discussed.
Abstract:
In the development of synthetic agents for education, doubt still remains about what behavior could be considered truly plausible for this type of agent, what can be considered effective in the transmission of knowledge by the agent, and what the role of emotions is in this process. This work has an investigative nature, attempting to discover which aspects are important for such consistent behavior, and covers the practical development of a chatterbot acting as a virtual tutor within the context of learning algorithms. In this study, we explain the basics of agents, Intelligent Tutoring Systems, bots and chatterbots, and how these systems need to display credible behavior. Models of emotion, personality and humor for computational agents are also covered, as well as previous studies by other researchers in the area. After that, the prototype is detailed, along with the research conducted, a summary of the results achieved, the architectural model of the system, the computational vision, and a macro view of the features implemented.
An experimental analysis of exact algorithms applied to the multiobjective spanning tree problem
Abstract:
The Multiobjective Spanning Tree Problem is NP-hard and models applications in several areas. This research presents an experimental analysis of different strategies used in the literature to develop exact algorithms to solve the problem. Initially, the algorithms are classified according to the approaches used to solve the problem; features of two or more approaches can be found in some of those algorithms. The approaches investigated here are: the two-stage method, branch-and-bound, k-best and the preference-based approach. The main contribution of this research lies in the fact that no work to date has reported a systematic experimental analysis of exact algorithms for the Multiobjective Spanning Tree Problem. Therefore, this work can serve as a basis for other research that deals with the same problem. The computational experiments compare the performance of the algorithms regarding processing time, efficiency as a function of the number of objectives, and the number of solutions found within a controlled time interval. The analysis was performed on known instances of the problem, as well as on instances obtained from a generator commonly used in the literature.
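Common to all of these exact approaches is the comparison of candidate trees by Pareto dominance over their objective vectors; a minimal helper (illustrative, not from the thesis, assuming all objectives are minimized) looks like this:

```python
# Illustrative helpers (not from the thesis): Pareto-dominance test and
# non-dominated filtering over objective vectors of candidate spanning
# trees, assuming every objective is to be minimized.
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(vectors):
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u is not v)]

# e.g., (weight, delay) cost vectors of five candidate spanning trees
print(pareto_front([(4, 9), (5, 5), (7, 3), (6, 6), (7, 7)]))
# -> [(4, 9), (5, 5), (7, 3)]
```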
Abstract:
Nonogram is a logical puzzle whose associated decision problem is NP-complete. It has applications in pattern recognition and data compression, among others. The puzzle consists of determining an assignment of colors to pixels distributed in an N × M matrix that satisfies row and column constraints. A Nonogram is encoded by a vector whose elements specify the number of pixels in each row and column of a figure without specifying their coordinates. This work presents exact and heuristic approaches to solve Nonograms. Depth-first search was one of the chosen exact approaches because it is a typical example of a brute-force algorithm that is easy to implement. Another exact approach implemented was based on the Las Vegas algorithm, with which we intend to investigate whether the randomness introduced by the Las Vegas-based algorithm is an advantage over depth-first search. The Nonogram is also transformed into a Constraint Satisfaction Problem. Three heuristic approaches are proposed: a Tabu Search and two memetic algorithms. A new way of computing the objective function is also proposed. The approaches are applied to 234 instances, with sizes ranging from 5 × 5 to 100 × 100, including both logical and random Nonograms.
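A small illustration of the row/column constraints (not the thesis implementation): checking whether a fully assigned line matches its clue by comparing the runs of filled pixels with the declared block lengths:

```python
# Illustrative check (not the thesis implementation): does a fully assigned
# Nonogram line satisfy its clue of block lengths?
def line_matches(cells, clue):
    """cells: sequence of 0/1 pixels; clue: block lengths, e.g. [3, 1]."""
    runs, count = [], 0
    for c in cells:
        if c:
            count += 1                 # extend the current run of filled pixels
        elif count:
            runs.append(count)         # a run just ended
            count = 0
    if count:
        runs.append(count)             # close a run ending at the border
    return runs == list(clue)

print(line_matches([1, 1, 1, 0, 1], [3, 1]))  # True
print(line_matches([1, 1, 0, 1, 1], [3, 1]))  # False
```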
Abstract:
The Scientific Algorithms are a new metaheuristic inspired by the scientific research process. The new method introduces the idea of a theme to search the solution space of hard problems. The inspiration for this class of algorithms comes from the act of researching, which comprises thinking, knowledge sharing and disclosing new ideas. The ideas of the new method are illustrated on the Traveling Salesman Problem. A computational experiment applies the proposed approach to a new variant of the Traveling Salesman Problem named the Car Renter Salesman Problem. The results are compared to state-of-the-art algorithms for the latter problem.
Abstract:
Symbolic Data Analysis (SDA) aims mainly to provide tools for summarizing large databases in order to extract knowledge, and to provide techniques for describing the units of such data as complex types, such as intervals or histograms. The objective of this work is to extend classical clustering methods for symbolic interval data based on interval-based distances. The main advantage of using an interval-based distance for interval-based data lies in the fact that it preserves the underlying imprecision of the intervals, which is usually lost when real-valued distances are applied. This work also includes an approach that allows existing indices to be adapted to the interval context. The proposed methods with interval-based distances are compared with punctual (point-based) distances from the literature, through experiments with both simulated and real interval data.
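One concrete example of such a measure (illustrative; the dissertation's exact distances are not reproduced here) is the Hausdorff distance between intervals, which keeps width information that a midpoint-based punctual distance discards:

```python
# Illustrative sketch (not necessarily the dissertation's measure): the
# Hausdorff distance between two intervals preserves differences in their
# bounds, while a punctual midpoint distance collapses each interval to a
# single point and loses that imprecision.
def hausdorff(i1, i2):
    (a1, b1), (a2, b2) = i1, i2
    return max(abs(a1 - a2), abs(b1 - b2))

def midpoint_dist(i1, i2):                 # the "punctual" alternative
    return abs(sum(i1) / 2 - sum(i2) / 2)

x, y = (0.0, 10.0), (4.0, 6.0)             # same midpoint, different widths
print(midpoint_dist(x, y))                 # 0.0 - imprecision lost
print(hausdorff(x, y))                     # 4.0 - width difference preserved
```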
Abstract:
This work presents a hybrid transgenetic algorithm for solving a Natural Gas Distribution Network Configuration Problem. Configuring such networks requires defining a layout along which the pipes must be placed in order to serve the customers. This work studies a way of connecting the customers through a network with a tree-shaped architecture. The objective is to minimize the construction cost of the network, even if, to achieve this, some customers who do not yield profit are left unserved. This problem can be formulated computationally as the Prize-Collecting Steiner Tree Problem, a combinatorial optimization problem in the NP-hard class. This work presents a heuristic algorithm for solving the problem. The approach used is called Transgenetic Algorithms, which fall into the category of evolutionary algorithms. A primal-dual algorithm is used to generate initial solutions, and path-relinking is used as an intensifier.
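For reference, the standard Prize-Collecting Steiner Tree objective balances construction cost against the prizes lost by skipping customers; a tiny sketch (illustrative toy instance, not the thesis algorithm):

```python
# Illustrative only (not the thesis algorithm): the standard Prize-Collecting
# Steiner Tree objective - cost of the chosen edges plus the penalties
# (lost prizes) of customers left unserved.
def pcst_cost(tree_edges, edge_cost, penalties, served):
    construction = sum(edge_cost[e] for e in tree_edges)
    lost_prizes = sum(p for v, p in penalties.items() if v not in served)
    return construction + lost_prizes

# toy instance: serve customers "a" and "b", leave unprofitable "c" out
edge_cost = {("root", "a"): 4, ("a", "b"): 2, ("b", "c"): 9}
penalties = {"a": 10, "b": 6, "c": 3}
print(pcst_cost([("root", "a"), ("a", "b")], edge_cost, penalties, {"a", "b"}))
# -> 9  (construction 6 + lost prize 3)
```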
Abstract:
The use of Multiple Input Multiple Output (MIMO) systems has enabled the recent evolution of wireless communication standards. The Spatial Multiplexing MIMO technique, in particular, provides a linear gain in transmission capacity equal to the minimum of the numbers of transmit and receive antennas. To obtain near-capacity performance in SM-MIMO systems, a soft-decision Maximum A Posteriori Probability (MAP) MIMO detector is necessary. However, such a detector is too complex for practical solutions. Hence, the goal of a MIMO detector algorithm intended for implementation is to achieve a good approximation of the ideal detector while keeping an acceptable complexity. Moreover, the algorithm needs to be mapped to a VLSI architecture with small area and high data rate. Since Spatial Multiplexing is a recent technique, it is argued that there is still much room for the development of related algorithms and architectures. Therefore, this thesis focuses on the study of suboptimal algorithms and VLSI architectures for broadband MIMO detectors with soft decision. As a result, novel algorithms have been developed, starting from proposed optimizations of already established algorithms. Based on these results, new MIMO detector architectures with configurable modulation and competitive area, performance and data rate parameters are proposed here. The developed algorithms have been extensively simulated and the architectures were synthesized, so that the results can serve as a reference for other works in the area.
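For intuition about the ideal detector that such architectures approximate, the following toy sketch (illustrative only: a real-valued 2x2 channel with BPSK; practical detectors avoid this exhaustive candidate search) computes max-log soft outputs:

```python
# Toy sketch (illustrative only): exhaustive max-log soft-output detection
# for a small spatial-multiplexing system with BPSK (-1 = bit 0, +1 = bit 1),
# the ideal that practical suboptimal detectors approximate more cheaply.
import itertools
import numpy as np

def max_log_llrs(y, H, sigma2):
    """LLR of each transmitted BPSK bit, via exhaustive candidate search."""
    nt = H.shape[1]
    cands = np.array(list(itertools.product([-1.0, 1.0], repeat=nt)))
    metrics = np.array([np.sum(np.abs(y - H @ s) ** 2)
                        for s in cands]) / sigma2
    llrs = []
    for k in range(nt):
        m0 = metrics[cands[:, k] < 0].min()   # best candidate with bit 0
        m1 = metrics[cands[:, k] > 0].min()   # best candidate with bit 1
        llrs.append(m0 - m1)                  # max-log approximation
    return np.array(llrs)

rng = np.random.default_rng(1)
H = rng.normal(size=(2, 2))                    # 2x2 real channel (toy)
s = np.array([1.0, -1.0])                      # transmitted BPSK symbols
y = H @ s + 0.1 * rng.normal(size=2)           # received vector
print(max_log_llrs(y, H, sigma2=0.01))         # LLR signs should match s
```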
Abstract:
Opuntia ficus-indica (L.) Mill is a cactus present in the Caatinga ecosystem whose chemical composition includes flavonoids, galacturonic acid and sugars. Different hydroglycolic (EHG001 and EHG002) and hydroethanolic, subsequently lyophilized (EHE001 and EHE002), extracts were developed. The preliminary phytochemical composition of EHE002 was investigated by thin layer chromatography (TLC), and we observed a predominance of flavonoids. Different formulations were prepared as emulsions with Sodium Polyacrylate (and) Hydrogenated Polydecene (and) Trideceth-6 (Rapithix® A60), and Polyacrylamide (and) C13-14 Isoparaffin (and) Laureth-7 (Sepigel® 305), and as a gel with Sodium Polyacrylate (Rapithix® A100). The sensory evaluation was conducted by the check-all-that-apply method. There were no significant differences between the scores assigned to the formulations; however, we noted a preference for those formulated with 1.5% of Rapithix® A100 and 3.0% of Sepigel® 305. These and the formulation with 3% Rapithix® A60 were tested for preliminary and accelerated stability. In the accelerated stability study, samples were stored at different temperatures for 90 days. Organoleptic characteristics, pH values and rheological behavior were assessed. The emulsions formulated with 3.0% of Sepigel® 305 and 1.5% of Rapithix® A60 were stable, with pseudoplastic and thixotropic behavior. The moisturizing clinical efficacy of the emulsions containing 3.0% of Sepigel® 305 with 1 and 3% of EHG001 was evaluated using the capacitance method (Corneometer®) and transepidermal water loss (TEWL) evaluation (Tewameter®). The results showed that the formulation with 3% of EHG001 increased skin moisturization relative to the vehicle and the extraction solvent formulation after five hours. The formulations containing 1 and 3% of EHG001 increased the skin barrier effect by reducing transepidermal water loss up to four hours after application.
Abstract:
Optimization techniques known as metaheuristics have managed to solve well-known problems satisfactorily, but the development of metaheuristics is characterized by the choice of parameters for their execution, and the appropriate choice of these parameter values is essential. Parameter tuning typically consists of testing parameters until viable results are obtained, and is usually carried out by the developer implementing the metaheuristic. The quality of the results for one test instance does not transfer to other instances to be tested, and the feedback may require a slow process of trial and error in which the algorithm has to be adjusted for a specific application. In this context, Reactive Search emerged, advocating the integration of machine learning into heuristic search for solving complex optimization problems. From the integration that Reactive Search proposes between machine learning and metaheuristics came the idea of using Reinforcement Learning, more specifically the Q-learning algorithm, in a reactive way, to select which local search is the most suitable at a given moment of the search, to succeed another local search that can no longer improve the current solution in the VNS metaheuristic. Thus, in this work we propose a reactive implementation, using reinforcement learning for the auto-tuning of the implemented algorithm, applied to the symmetric traveling salesman problem and to the problem of scheduling workover rigs for the maintenance of oil wells.
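A hedged sketch of this reactive selection (hypothetical names throughout; a one-state, bandit-style simplification of the Q-learning idea, not the thesis implementation):

```python
# Hedged sketch (hypothetical names; a one-state, bandit-style simplification
# of the Q-learning idea, not the thesis implementation): learning which
# local search to apply next inside the VNS loop.
import random

SEARCHES = ["two_opt", "swap", "or_opt"]        # candidate local searches
Q = {s: 0.0 for s in SEARCHES}                  # estimated value per search
ALPHA, EPSILON = 0.3, 0.1                       # learning rate, exploration

def pick_search():
    if random.random() < EPSILON:               # explore occasionally
        return random.choice(SEARCHES)
    return max(Q, key=Q.get)                    # otherwise exploit best-known

def update(search, improvement):
    # move the estimate toward the observed cost improvement (the reward)
    Q[search] += ALPHA * (improvement - Q[search])

# Inside the VNS loop one would do, e.g. (apply_local_search is hypothetical):
#   s = pick_search()
#   improvement = apply_local_search(s, current_solution)
#   update(s, improvement)
```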
Abstract:
This anthropological research has as its main goal to grasp the meanings and perceptions - modes of subjectivity - of crack users in relation to the proposals of Therapeutic Communities (TC) of religious character. The work emphasizes the analysis of Therapeutic Communities in the state of Rio Grande do Norte, studying a particular organization called Anzóis da Dor. I intend to analyze qualitative data, focusing on an analysis of the discursive content of speeches and on the observation of social interaction, which results in an ethnographic text characterized by dense description. Regarding the dissertation's specific goals, we seek to present a general overview of the emergence and development of Therapeutic Communities - encompassing general and local considerations - and to point out the dynamics of the religious healing systems of these institutions, as well as the principles that guide them. In methodological terms, I conducted a partial mapping of the therapeutic communities located in the state of Rio Grande do Norte; interviews with Therapeutic Community coordinators; visits; participant observation in one of these institutions; and interviews with crack users and with people close to these social agents, concerning the Therapeutic Communities and the treatment they offer.