1000 results for reliability algorithms
Abstract:
The document of this thesis by compendium of publications is divided into two parts: the synthesis, which summarizes the foundations, results and conclusions of the thesis, and the publications themselves in their original format, included as appendices. Since confidentiality agreements exist (see "Rights" below) that prevent their open, public publication in electronic form (such as the UA repository), and in accordance with point 6 of article 14 of RD 99/2011, of 28 January, these appendices are not included in the electronic document submitted on CD; their complete references are given, however, and they are included in full in the bound copy. Should CEDIP and RUA so decide at a later date, this electronic document could be modified to include links to the original articles.
Abstract:
The growth in computing power today comes from parallel processing, given the characteristics of the new hardware architectures. Using this hardware properly accelerates the algorithms being executed (programs). However, converting an algorithm into an adequate parallel form is complex, and that form is, in turn, specific to each type of parallel hardware. The most common general-purpose processors today are multicore parallel processors, also known as Symmetric Multi-Processors (SMP). Nowadays it is hard to find a desktop processor that does not offer some form of SMP-style parallelism, and the development trend is towards processors with an ever larger number of available cores. Video processing devices (Graphics Processing Units, GPU), in turn, have increased their computing power by incorporating multiple processing units into their electronic design, to the point that it is now not difficult to find GPU boards with capacity for 200 to 400 parallel processing threads. These processors are very fast and specialized in the task for which they were designed, mainly video processing. However, since this kind of processing has many points in common with scientific computing, these devices have been reoriented under the name General-Purpose Graphics Processing Unit (GPGPU). Unlike the SMP processors mentioned above, GPGPUs are not general purpose, and their general use is complicated by the limited amount of memory available on each board and by the kind of parallel processing required for their use to be productive. Programmable logic devices, FPGAs, can perform large numbers of operations in parallel, so they can be used to implement specific algorithms that exploit the parallelism they offer. Their drawback is the complexity of programming and testing the algorithm instantiated on the device. Given this diversity of parallel processors, the goal of our work is to analyze the specific characteristics of each of them and their impact on the structure of the algorithms, so that their use can deliver processing performance commensurate with the number of resources used, and to combine them so that they complement each other beneficially. Specifically, starting from the hardware characteristics, we determine the properties a parallel algorithm must have in order to be accelerated. The characteristics of the parallel algorithms in turn determine which of these new kinds of hardware are the most suitable for their instantiation. In particular, we consider the degree of data dependency, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
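The data-dependency point above can be illustrated with a minimal sketch (not taken from the thesis): an embarrassingly parallel loop with no dependencies between iterations and no synchronization, which is the kind of structure that maps well onto SMP (multicore) hardware.

```python
# Illustrative sketch only: independent per-element work split across cores.
from multiprocessing import Pool

def f(x):
    # No shared state and no inter-iteration dependency, so no synchronization is needed.
    return x * x + 1

if __name__ == "__main__":
    data = list(range(1_000_000))
    with Pool() as pool:              # one worker per available core by default
        results = pool.map(f, data)   # the work is distributed across the cores
    print(sum(results))
```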
Abstract:
Optimization techniques known as metaheuristics have been able to solve well-known problems satisfactorily, but the development of a metaheuristic is characterized by the choice of the parameters for its execution, in which the appropriate setting of these parameter values is essential. In parameter tuning, the parameters are tested until viable results are obtained, a task usually carried out by the developer implementing the metaheuristic. The quality of the results obtained for one test instance does not transfer to other instances to be tested, and this feedback may require a slow process of "trial and error" in which the algorithm has to be tuned for a specific application. In this context, Reactive Search emerged, advocating the integration of machine learning into heuristic search to solve complex optimization problems. From the integration between machine learning and metaheuristics proposed by Reactive Search came the idea of applying Reinforcement Learning, more specifically the Q-learning algorithm, in a reactive way to select which local search is the most suitable at a given moment of the search, to succeed another local search that can no longer improve the current solution in the VNS metaheuristic. Thus, in this work we propose a reactive implementation that uses reinforcement learning for the auto-tuning of the implemented algorithm, applied to the symmetric traveling salesman problem and to the problem of scheduling workover rigs for well maintenance.
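A minimal sketch of the idea (for illustration only, not the thesis implementation): epsilon-greedy Q-learning choosing which local search to apply next when the current one stalls. The operator names, learning rate and reward definition below are hypothetical.

```python
# Illustrative sketch: a single-state Q-table over local search operators,
# updated with the improvement obtained by each operator as the reward.
import random

operators = ["two_opt", "or_opt", "swap"]   # hypothetical local searches
Q = {op: 0.0 for op in operators}           # Q-value per operator
alpha, epsilon = 0.1, 0.2                   # learning rate, exploration rate

def select_operator():
    if random.random() < epsilon:           # explore
        return random.choice(operators)
    return max(Q, key=Q.get)                # exploit the best-known operator

def update(op, reward):
    # reward could be, e.g., the reduction in tour length achieved by `op`.
    Q[op] += alpha * (reward - Q[op])
```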
Abstract:
This work presents a new model for the Heterogeneous p-Median Problem (HPM), proposed to recover the hidden category structures present in the data provided by a sorting task procedure, a popular approach to understanding heterogeneous individuals' perceptions of products and brands. The new model is named the Penalty-Free Heterogeneous p-Median Problem (PFHPM), a single-objective version of the original problem, the HPM. It also eliminates the main parameter of the HPM, the penalty factor, which is responsible for weighting the terms of the objective function; adjusting this parameter controls the way the model recovers the hidden category structures present in the data and requires broad knowledge of the problem. Additionally, two complementary formulations for the PFHPM are presented, both mixed integer linear programming problems. From these additional formulations, lower bounds were obtained for the PFHPM. These values were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. This algorithm provided good quality solutions for the PFHPM, solving artificially generated instances from a Monte Carlo simulation as well as real data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm, the PFHPM, can recover the original category structures related to heterogeneous individuals' perceptions more accurately than the original model and algorithm, the HPM. Finally, an illustrative application of the PFHPM is presented, together with some insights about new possibilities for it, extending the new model to fuzzy environments.
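For context, a generic Variable Neighborhood Search skeleton is sketched below (a textbook form, not the specialized VNS of the thesis); the neighborhood, local search and cost functions are assumed to be supplied by the user.

```python
# Textbook VNS skeleton: shaking in neighborhood N_k, local search,
# and neighborhood change (restart at k = 0 on improvement).
import random

def vns(initial, neighborhoods, local_search, cost, max_iter=1000):
    """neighborhoods: list of functions, each returning a list of neighbors of a solution."""
    best = initial
    for _ in range(max_iter):
        k = 0
        while k < len(neighborhoods):
            candidate = random.choice(neighborhoods[k](best))  # shaking in N_k
            candidate = local_search(candidate)                # local improvement
            if cost(candidate) < cost(best):
                best, k = candidate, 0                         # move and restart neighborhoods
            else:
                k += 1                                         # try the next neighborhood
    return best
```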
Abstract:
We present indefinite integration algorithms for rational functions over subfields of the complex numbers, through an algebraic approach. We study the local algorithm of Bernoulli and rational algorithms for the class of functions concerned, namely the algorithms of Hermite, Horowitz-Ostrogradsky, Rothstein-Trager and Lazard-Rioboo-Trager. We also study Rioboo's algorithm for converting logarithms involving complex extensions into real arctangent functions, when these logarithms arise from the integration of rational functions with real coefficients. We conclude by presenting pseudocode and code for implementation in the software Maxima for the algorithms studied in this work, as well as for auxiliary algorithms such as polynomial gcd computation, partial fraction decomposition, squarefree factorization and subresultant computation. We also present the Zeilberger-Almkvist algorithm for the integration of hyperexponential functions, along with its pseudocode and code for Maxima. As an alternative to the algorithms of Rothstein-Trager and Lazard-Rioboo-Trager, we also present code for Bernoulli's algorithm for squarefree denominators, and another for Czichowski's algorithm, although the latter is not studied in detail in the present work, since the theoretical basis necessary to understand it is beyond this work's scope. Several examples are provided to illustrate the working of the integration algorithms in this text.
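As a reminder of the kind of reduction involved, the standard Horowitz-Ostrogradsky decomposition is stated below in textbook form; the notation is generic and not taken from the thesis.

```latex
% Horowitz-Ostrogradsky reduction (standard statement).
% For a rational function A/D with \deg A < \deg D and \gcd(A, D) = 1, let
%   D^{-} = \gcd(D, D') and D^{*} = D / D^{-}. Then
\int \frac{A}{D}\,dx \;=\; \frac{B}{D^{-}} \;+\; \int \frac{C}{D^{*}}\,dx,
\qquad \deg B < \deg D^{-},\quad \deg C < \deg D^{*},
% where B and C are found by differentiating both sides and solving the
% resulting linear system; the remaining integrand has a squarefree denominator.
```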
Abstract:
An important problem faced by the oil industry is distributing multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes) and terminals (demand nodes) interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers approach this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product sending in pipeline networks. However, costs incurred due to losses at interfaces cannot be disregarded. The cost also depends on pumping expenses, which are mostly due to the cost of electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has a different cost. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces, and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. Hybridizations are mainly focused on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA2 and SPEA2. Three architectures named MOTA/D, NSTA and SPETA are applied to the problem. An experimental study compares the algorithms on thirty test cases. To analyse the results obtained with the algorithms, Pareto-compliant quality indicators are used, and the significance of the results is evaluated with non-parametric statistical tests.
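The comparison of solutions in this kind of tri-objective setting rests on Pareto dominance; a minimal sketch (not the thesis code) of the dominance test is given below, with an invented numeric example.

```python
# Illustrative sketch: Pareto dominance over objective vectors, all objectives minimized
# (here: delivery time, interface losses, electricity cost).
def dominates(a, b):
    """True if vector a dominates b: no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical example: (10.0, 3.2, 500.0) dominates (12.0, 3.2, 510.0).
print(dominates((10.0, 3.2, 500.0), (12.0, 3.2, 510.0)))  # True
```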
Abstract:
The performance of algorithms for fault location in transmission lines is directly related to the accuracy of their input data. Thus, factors such as errors in the line parameters, failures in the synchronization of oscillographic records and errors in measurements of voltage and current can significantly influence the accuracy of algorithms that use bad data to indicate the fault location. This work presents a new methodology for fault location in transmission lines based on the theory of state estimation, in order to determine the location of faults more accurately by considering realistic systematic errors that may be present in measurements of voltage and current. The methodology was implemented in two stages: pre-fault and post-fault. In the first step, assuming non-synchronized data, the synchronization angle and positive-sequence line parameters are estimated, and in the second, the fault distance is estimated. Besides calculating the most likely fault distance obtained from measurement errors, the variance associated with the distance found is also determined, using error theory. This is one of the main contributions of this work, since, with the proposed algorithm, it is possible to determine a most likely zone of fault incidence with approximately 95.45% confidence. Tests for evaluation and validation of the proposed algorithm were carried out using actual fault records and simulations of fictitious transmission systems built with the ATP software. The obtained results show that the proposed estimation approach works even when adopting realistic variances, compatible with real equipment errors.
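The 95.45% figure corresponds to a two-standard-deviation interval under a Gaussian error model, a standard result stated below for context; the symbols are generic and not taken from the thesis.

```latex
% Two-sigma confidence interval under a Gaussian assumption:
% \hat{d} is the estimated fault distance and \sigma_d^2 its estimated variance.
P\!\left(\hat{d} - 2\sigma_d \;\le\; d \;\le\; \hat{d} + 2\sigma_d\right) \approx 0.9545
```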
Abstract:
The great amount of data generated as a result of automation and process supervision in industry creates two problems: a large demand for disk storage and the difficulty of streaming this data through a telecommunications link. Lossy data compression algorithms emerged in the 1990s with the goal of solving these problems and, as a consequence, industries started to use those algorithms in industrial supervision systems to compress data in real time. These algorithms were designed to eliminate redundant and undesired information in an efficient and simple way. However, their parameters must be set for each process variable, which makes it impracticable to configure them in systems that monitor thousands of variables. In this context, this work proposes the Adaptive Swinging Door Trending algorithm, an adaptation of the Swinging Door Trending in which the main parameters are adjusted dynamically by analysing the signal trends in real time. A comparative performance analysis of lossy data compression algorithms applied to time series of process variables and to dynamometer cards is also proposed. The algorithms used for comparison were piecewise linear methods and transform-based methods.
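For reference, a compact sketch of the classic Swinging Door Trending compressor is given below in its textbook form; it is illustration only, and the adaptive on-line tuning of the deviation proposed in the work is not shown.

```python
# Classic SDT sketch: archive a point only when the "doors" (upper/lower slope
# envelopes around the last archived point) open past parallel.
def sdt_compress(points, dev):
    """points: list of (t, y) with strictly increasing t; dev: compression deviation."""
    archived = [points[0]]
    t0, y0 = points[0]
    slope_up, slope_low = float("-inf"), float("inf")
    last = points[0]
    for t, y in points[1:]:
        slope_up = max(slope_up, (y - (y0 + dev)) / (t - t0))    # upper door
        slope_low = min(slope_low, (y - (y0 - dev)) / (t - t0))  # lower door
        if slope_up > slope_low:          # doors opened past parallel: archive previous point
            archived.append(last)
            t0, y0 = last
            slope_up = (y - (y0 + dev)) / (t - t0)   # restart doors from the new pivot
            slope_low = (y - (y0 - dev)) / (t - t0)
        last = (t, y)
    archived.append(last)
    return archived
```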
Abstract:
The oil industry's need to ensure the safety of its facilities, its employees and the environment, together with the search for maximum efficiency of its installations, drives it to pursue a high level of excellence at all stages of its production processes in order to obtain the required quality of the final product. Knowing the reliability of equipment, and what it represents for a system, is of fundamental importance for ensuring operational safety. Reliability analysis has been increasingly applied in the oil industry as a tool for predicting faults and undesirable events that can affect business continuity. It is an applied scientific methodology that combines engineering and statistical knowledge to assess and analyze the performance of components, equipment and systems, in order to ensure that they perform their function without failure, for a period of time and under a specified condition. The results of reliability analyses help in making decisions about the best maintenance strategy for petrochemical plants. Reliability analysis was applied to a piece of equipment (a motor-driven centrifugal fan) over the period 2010-2014 at the Petrobras Guamaré Industrial Hub, situated in the rural area of the municipality of Guamaré, in the state of Rio Grande do Norte, where field data were collected, the equipment history was analyzed, and the behavior of faults and their impacts was observed. The data were processed in the commercial reliability software ReliaSoft BlockSim 9. The results were compared with a study conducted by experts in the field in order to obtain the best maintenance strategy for the studied system. With the results obtained from the reliability analysis tools, it was possible to determine the availability of the motor-driven centrifugal fan and its impact on the safety of the process units should it fail. A new maintenance strategy was established to improve reliability, availability and maintainability and to decrease the likelihood of failure of the motor-driven centrifugal fan: a series of actions promoting increased system reliability and, consequently, an increase in the life cycle of the asset. Thus, this strategy sets out preventive measures to reduce the probability of failure and mitigating measures aimed at minimizing its consequences.
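Two standard quantities typically computed in this kind of study are sketched below as generic textbook formulas; the parameter values in the example are invented and are not results from the thesis.

```python
# Generic reliability quantities (illustration only).
import math

def weibull_reliability(t, beta, eta):
    """R(t) for a Weibull(beta, eta) time-to-failure model."""
    return math.exp(-((t / eta) ** beta))

def steady_state_availability(mtbf, mttr):
    """A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

print(weibull_reliability(t=8760, beta=1.5, eta=20000))   # hypothetical one-year reliability
print(steady_state_availability(mtbf=4000, mttr=24))      # hypothetical availability
```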
Abstract:
Wireless communication is a trend in today's industrial environment, and within this trend we can highlight the WirelessHART technology. In this context, it is natural to search for improvements to the technology, and such improvements can be related directly to the routing and scheduling algorithms. In the present thesis, we present a literature review of the main specific solutions for routing and scheduling in WirelessHART. The thesis also proposes a new scheduling algorithm, called Flow Scheduling, that aims to improve superframe utilization and flexibility. For validation purposes, we develop a simulation module for Network Simulator 3 (NS-3) that models aspects such as positioning, signal attenuation and energy consumption, and provides per-link error configuration. The module also allows the creation of the scheduling superframe using the Flow and Han algorithms. In order to validate the new algorithm, we execute a series of comparative tests and evaluate the algorithms' performance in terms of link allocation, delay and superframe occupation. In order to validate the physical layer of the simulation module, we statically configure the routing and scheduling aspects and perform reliability and energy consumption tests using various topologies and error probabilities from the literature.
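To make the scheduling notion concrete, a highly simplified sketch of TDMA slot allocation for multi-hop flows is shown below; it is a hypothetical illustration of the kind of structure such schedulers build, not the Flow Scheduling or Han algorithm from the thesis.

```python
# Naive slot allocator: each hop of each flow gets its own slot in the superframe.
def allocate_slots(flows, superframe_len):
    """flows: list of hop lists, e.g. [["A->B", "B->GW"], ...]. Returns {slot: link}."""
    schedule = {}
    slot = 0
    for hops in flows:
        for link in hops:
            if slot >= superframe_len:
                raise ValueError("superframe too short for the requested flows")
            schedule[slot] = link
            slot += 1
    return schedule

print(allocate_slots([["A->B", "B->GW"], ["C->B", "B->GW"]], superframe_len=10))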
Abstract:
The Quadratic Minimum Spanning Tree (QMST) problem is a generalization of the Minimum Spanning Tree problem in which, in addition to linear costs associated with each edge, quadratic costs associated with each pair of edges must be considered. The quadratic costs are due to interaction costs between the edges. When interactions occur between adjacent edges only, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree (AQMST). Both QMST and AQMST are NP-hard and model a number of real-world applications involving infrastructure network design. Linear and quadratic costs are summed in the mono-objective versions of the problems. However, real-world applications often deal with conflicting objectives. In those cases, considering linear and quadratic costs separately is more appropriate, and multi-objective optimization provides a more realistic modelling. Exact and heuristic algorithms are investigated in this work for the Bi-objective Adjacent Only Quadratic Spanning Tree problem. The following techniques are proposed: backtracking, branch-and-bound, Pareto Local Search, Greedy Randomized Adaptive Search Procedure, Simulated Annealing, NSGA-II, Transgenetic Algorithm, Particle Swarm Optimization and a hybridization of the Transgenetic Algorithm with the MOEA/D technique. Pareto-compliant quality indicators are used to compare the algorithms on a set of benchmark instances proposed in the literature.
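A minimal sketch of how the two objectives of a bi-objective AQMST solution can be evaluated is given below (illustration only, not the thesis code); it assumes the interaction-cost dictionary is keyed consistently with the edge order used in the tree.

```python
# Evaluate a spanning tree on two objectives: total linear edge cost and total
# interaction cost between pairs of adjacent edges (edges sharing an endpoint).
from itertools import combinations

def evaluate(tree_edges, linear_cost, interaction_cost):
    """tree_edges: list of (u, v); linear_cost[(u, v)]: float;
    interaction_cost[(e1, e2)]: float for adjacent pairs (0 if absent)."""
    linear = sum(linear_cost[e] for e in tree_edges)
    quadratic = 0.0
    for e1, e2 in combinations(tree_edges, 2):
        if set(e1) & set(e2):                        # adjacent edges interact
            quadratic += interaction_cost.get((e1, e2), 0.0)
    return linear, quadratic                          # two objectives, both minimized
```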
Abstract:
Cryptography is the main means of achieving security in any network. Even in networks with severe restrictions on energy consumption, processing and memory, such as Wireless Sensor Networks (WSN), this is no different. Aiming to improve cryptographic performance, security and the lifetime of these networks, we propose a new cryptographic algorithm developed through Genetic Programming (GP) techniques. To establish the fitness criteria for the cryptographic algorithm evolved by the GP, nine cryptographic algorithms were tested: AES, Blowfish, DES, RC6, Skipjack, Twofish, T-DES, XTEA and XXTEA. Based on these tests, fitness functions were built taking into account execution time, occupied memory space, maximum deviation, irregular deviation and correlation coefficient. After running the GP, CRYSEED and CRYSEED2 were created: algorithms for 8-bit devices, optimized for WSNs, i.e., with low complexity, low memory consumption and good security for sensing and instrumentation applications.
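A hypothetical sketch of a fitness function in the spirit described above is shown below; the metrics are those named in the abstract, but the weights, normalization and signs are invented for illustration and are not the thesis formulation.

```python
# Hypothetical GP fitness: reward strong diffusion (high deviations) and penalize
# execution time, memory use and plaintext/ciphertext correlation.
def fitness(exec_time, memory, max_deviation, irregular_deviation, correlation):
    w = {"time": 0.25, "mem": 0.25, "maxdev": 0.2, "irrdev": 0.2, "corr": 0.1}
    return (w["maxdev"] * max_deviation
            + w["irrdev"] * irregular_deviation
            - w["time"] * exec_time
            - w["mem"] * memory
            - w["corr"] * abs(correlation))   # low correlation is desirable
```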