985 results for "Analisi limite muratura algoritmo collasso sismica"
Abstract:
This work proposes a hybrid metaheuristic for the Heterogeneous Fleet Vehicle Routing Problem (HVRP), an NP-hard combinatorial optimization problem characterized by the use of a limited fleet composed of vehicles with different capacities. The hybrid method combines a memetic algorithm with the Vocabulary Building optimizer component. The resulting hybrid metaheuristic was implemented in C++, and computational experiments yielded good results compared with the metaheuristic applied in isolation, demonstrating the efficiency of the proposed method.
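As a hedged illustration of the general scheme named above (the actual method is a C++ memetic algorithm coupled with the Vocabulary Building component, which is not reproduced here), the sketch below shows the skeleton of a memetic loop: a genetic algorithm whose offspring are refined by local search. The permutation encoding, the `cost` function, and the 2-opt move are illustrative assumptions.

```python
import random

def two_opt(route, cost):
    # Local search step: keep reversing segments while the cost improves.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 1):
            for j in range(i + 1, len(route)):
                cand = route[:i] + route[i:j][::-1] + route[j:]
                if cost(cand) < cost(route):
                    route, improved = cand, True
    return route

def memetic(pop, cost, generations=100):
    # pop: list of permutations (candidate routes); cost: route -> float.
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: len(pop) // 2]              # truncation selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + [g for g in b if g not in a[:cut]]  # order crossover
            children.append(two_opt(child, cost))   # memetic step: educate each child
        pop = parents + children
    return min(pop, key=cost)
```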
Abstract:
Combinatorial optimization is a fundamental area for companies seeking competitive advantages in various productive sectors. The Asymmetric Travelling Salesman Problem (ATSP), one of the most important problems in this area because it is NP-hard and has many practical applications, has attracted increasing interest from researchers in the development of ever more efficient metaheuristics, such as memetic algorithms: evolutionary algorithms that combine genetic operators with a local search procedure. This work explores the viral infection technique in a memetic algorithm, where the infection replaces the mutation operator to obtain fast evolution or extinction of species (KANOH et al., 1996), providing a way to accelerate and improve the solution. To this end, four variants of viral infection were developed and applied in the memetic algorithm for solving the ATSP, where the agent and the virus undergo a symbiosis process, yielding a hybrid evolutionary algorithm that is computationally viable.
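As a hedged sketch of the viral-infection idea the abstract invokes (KANOH et al., 1996), the operator below replaces mutation: a "virus" is a short sub-tour harvested from a good individual and transcribed into a host tour. This is one illustrative reading; the four variants actually developed in the work are not reproduced.

```python
import random

def harvest_virus(good_tour, length=4):
    # A virus is a short sub-tour extracted from a good individual.
    start = random.randrange(len(good_tour) - length)
    return tuple(good_tour[start:start + length])

def infect(host, virus):
    # Transcription: remove the virus cities from the host, then splice the
    # virus sub-tour back in at a random position (agent-virus symbiosis).
    rest = [c for c in host if c not in virus]
    pos = random.randrange(len(rest) + 1)
    return rest[:pos] + list(virus) + rest[pos:]
```

In a memetic loop, `infect` would be called where the mutation operator normally is; viruses whose hosts improve would survive and reinfect, while harmful ones die out.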
Abstract:
Most state estimation algorithms based on the classical model are only adequate for transmission networks. Few algorithms have been developed specifically for distribution systems, probably because of the small amount of data available in real time. Most overhead feeders have only current and voltage measurements at the medium-voltage bus-bar of the substation, so classical algorithms are difficult to implement, even considering off-line acquired data as pseudo-measurements. However, the need to automate the operation of distribution networks, mainly with regard to the selectivity of protection systems and to enabling load transfer maneuvers, is changing network planning policy. Accordingly, equipment incorporating telemetry and command modules has been installed to improve operational features, increasing the amount of measurement data available in real time at the System Operation Center (SOC). This encourages the development of a state estimator model involving real-time information and load pseudo-measurements built from typical power factors and utilization factors (demand factors) of distribution transformers. This work reports the development of a new state estimation method specific to radial distribution systems. The main algorithm of the method is based on the power summation load flow. The estimation is carried out piecewise, section by section of the feeder, from the substation to the terminal nodes. For each section, a measurement model is built, resulting in a nonlinear overdetermined set of equations whose solution is obtained through the Gaussian normal equation. The estimated variables of one section are used as pseudo-measurements for the next. In general, the measurement set for a generic section consists of pseudo-measurements of power flows and nodal voltages obtained from the previous section (or real-time measurements, if they exist), besides pseudo-measurements of injected powers for the power summations, whose functions are the load flow equations, assuming the network can be represented by its single-phase equivalent. The great advantage of the algorithm is its simplicity and low computational effort; moreover, it is very efficient with regard to the accuracy of the estimated values. Besides the power summation state estimator, this work shows how other algorithms could be adapted to provide state estimation of medium-voltage substations and networks, namely Schweppe's method and an algorithm based on current proportionality that is usually adopted for network planning tasks. Both estimators were implemented not only as alternatives to the proposed method, but also to obtain results that support its validation. Since in most cases no power measurement is performed at the beginning of the feeder, and this is required by the power summation estimation method, a new algorithm for estimating the network variables at the medium-voltage bus-bar was also developed.
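The core numerical step the abstract describes, a nonlinear overdetermined measurement model solved through the Gaussian normal equation, can be sketched as a Gauss-Newton iteration. Here `h`, `jac`, and the weight matrix `W` are placeholders; in the method they come from the power summation load-flow equations of each feeder section.

```python
import numpy as np

def gauss_newton(h, jac, z, W, x0, tol=1e-6, max_iter=20):
    # Solve the overdetermined system h(x) = z in the weighted least-squares
    # sense via the normal equation (H^T W H) dx = H^T W (z - h(x)).
    x = x0.astype(float)
    for _ in range(max_iter):
        r = z - h(x)                    # measurement residuals
        H = jac(x)                      # Jacobian of h at the current state
        dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x                            # estimated state of the section
```

The state estimated for one section would then be passed along as pseudo-measurements for the next section, from the substation toward the terminal nodes.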
Abstract:
Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that they cannot be solved in polynomial time. Initially, these solutions focused on heuristics; currently, metaheuristics are preferred for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of an "Operon" heuristic for building the information chains needed to implement transgenetic (evolutionary) algorithms, mainly using statistical methodology (Cluster Analysis and Principal Component Analysis); and the use of statistical analyses adequate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains that promote an "intelligent" search of the solution space. The applications target the Traveling Salesman Problem (TSP) and are based on a transgenetic algorithm known as ProtoG. A strategy is also proposed for renewing part of the chromosome population, triggered by a minimum limit on the coefficient of variation of the fitness function of the individuals, calculated over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms, and a Simulated Annealing algorithm. Three performance analyses are proposed. The first uses Logistic Regression, based on the probability that the algorithm under test finds an optimal solution for a TSP instance. The second uses Survival Analysis, based on the distribution of the observed execution time until an optimal solution is reached. The third uses non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES): the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one Euclidean TSP instances with up to 1,655 cities. The first two experiments deal with the adjustment of four parameters of the ProtoG algorithm in an attempt to improve its performance; the last four evaluate the performance of ProtoG against the three other algorithms. For these sixty-one instances, statistical tests provide evidence that ProtoG outperforms the three algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three trials, in which performance was evaluated through PES, the average PES obtained with ProtoG was less than 1% in almost half of the instances, the largest average being 3.52% for an instance with 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare to find average PES values greater than 10% reported in the literature for instances of this size.
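The PES metric used in the third analysis follows directly from its definition in the text; a one-line sketch (`best_known` being the best solution value reported in the literature):

```python
def pes(found_cost, best_known):
    # Percentage by which the found solution exceeds the best known solution.
    return 100.0 * (found_cost - best_known) / best_known
```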
Abstract:
This work presents a comparative analysis between a fuzzy controller coupled to a neural PID tuned by a genetic algorithm (GA) and several traditional control techniques, all applied to a tank system (a nonlinear second-order model). To enable the techniques involved in the comparative analysis and to validate the controllers to be compared, simulations of several control schemes were carried out: a conventional PID tuned by GA, a neural PID (PIDN) tuned by GA, a fuzzy PI, two fuzzy controllers attached to a neural PID tuned by GA, and a MISO fuzzy controller with three inputs attached to a PIDN tuned by GA. After all the tests, some control structures were selected from those evaluated in the simulation stage (the conventional PID tuned by GA, the fuzzy PI, the two fuzzy controllers attached to a PIDN tuned by GA, and the three-input MISO fuzzy controller attached to a PIDN tuned by GA) to be implemented on the real tank system. These two modes of operation, simulated and real, were essential to build a solid basis for the comparisons and for the validation suggested by the results.
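A hedged sketch of the common thread in the controllers above, GA-based tuning of PID gains against a simulated plant. The toy tank model (Torricelli outflow) and the IAE fitness are illustrative assumptions, not the dissertation's actual second-order rig.

```python
import math
import random

def simulate(gains, setpoint=0.5, dt=0.05, steps=400):
    # Integral of absolute error (IAE) of a PID loop on a toy nonlinear tank.
    kp, ki, kd = gains
    level, integ, prev_err, iae = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - level
        integ += err * dt
        u = max(0.0, kp * err + ki * integ + kd * (err - prev_err) / dt)
        prev_err = err
        level += dt * (0.1 * u - 0.05 * math.sqrt(max(level, 0.0)))  # Torricelli outflow
        iae += abs(err) * dt
    return iae                                    # lower is better

def ga_tune(pop_size=30, generations=60):
    # Simple elitist GA over (kp, ki, kd) with Gaussian mutation.
    pop = [[random.uniform(0.0, 20.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate)
        elite = pop[: pop_size // 2]              # keep the best half
        pop = elite + [[max(0.0, g + random.gauss(0.0, 0.5))
                        for g in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=simulate)                 # best gains found
```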
Abstract:
The main objective of this work is to present the particularities of the Three-Phase Power Summation Method, used for load flow calculation, with respect to the influence of the magnetic coupling between phases, as well as the losses of all the transformers present in the feeder under analysis. In addition, its application to the study of short circuits occurring through high impedance values is detailed; such faults are difficult to detect and, consequently, to clear with common protection devices, because the short-circuit current is generally of the same order of magnitude as the load currents. Results of simulations carried out in several situations are presented, aiming at a complete analysis of the behavior of the proposed method for several types of short circuits. Comparisons of the results obtained by the method with results from other works are presented to verify its effectiveness.
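A single-phase sketch of the power summation sweep that underlies the method (the work's version is three-phase and includes the mutual coupling between phases and the transformer losses, which are omitted here):

```python
import numpy as np

def power_summation_sweep(parent, z, s_load, v_source, iters=10):
    # Radial feeder: node 0 is the source; nodes are numbered so parent[k] < k.
    # parent[k]: upstream node of k; z[k]: impedance of the branch feeding k;
    # s_load[k]: complex power load at node k.
    n = len(parent)
    v = np.full(n, v_source, dtype=complex)
    for _ in range(iters):
        s_flow = s_load.astype(complex).copy()
        for k in range(n - 1, 0, -1):            # backward sweep: sum loads + losses
            i_k = np.conj(s_flow[k] / v[k])
            s_flow[parent[k]] += s_flow[k] + z[k] * abs(i_k) ** 2
        for k in range(1, n):                    # forward sweep: update voltages
            i_k = np.conj(s_flow[k] / v[k])
            v[k] = v[parent[k]] - z[k] * i_k
    return v, s_flow
```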
Abstract:
In this work, an implementation of the SOM (Self-Organizing Map) algorithm, or Kohonen neural network, is presented in the form of hierarchical structures applied to image compression. The main objective of this approach is to develop a hierarchical SOM algorithm with a static structure, and another with a dynamic structure, to generate codebooks for image Vector Quantization (VQ), reducing processing time and obtaining a good image compression rate with minimum quality degradation relative to the original image. The two self-organizing neural networks developed here were named HSOM, for the static case, and DHSOM, for the dynamic case. In the first, the hierarchical structure is defined beforehand; in the latter, the structure grows automatically according to heuristic rules that explore the training data without the use of external parameters. The heuristic rules determine the growth dynamics of the network, the criteria for pruning branches, and the flexibility and size of the child maps. The LBG (Linde-Buzo-Gray) algorithm, or k-means, one of the most widely used algorithms for codebook design in Vector Quantization, was used together with the Kohonen algorithm in its basic (non-hierarchical) form as a reference against which to compare the performance of the algorithms proposed here. A performance analysis between the two hierarchical structures is also carried out. The efficiency of the proposed processing is verified by the reduction in computational complexity compared with the traditional algorithms, as well as through quantitative analysis of the reconstructed images in terms of the peak signal-to-noise ratio (PSNR) and the mean squared error (MSE).
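A sketch of the LBG (k-means) baseline named above for codebook design, plus the PSNR metric used to assess reconstruction quality. Blocks are image patches flattened to vectors; the hierarchical HSOM/DHSOM structures themselves are not reproduced here.

```python
import numpy as np

def lbg_codebook(blocks, size=64, iters=20):
    # blocks: (N, d) array of flattened image patches.
    rng = np.random.default_rng(0)
    codebook = blocks[rng.choice(len(blocks), size, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each block to its nearest codeword, then recompute centroids.
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        nearest = d.argmin(1)
        for j in range(size):
            members = blocks[nearest == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original - reconstructed) ** 2)   # mean squared error
    return 10 * np.log10(peak ** 2 / mse)            # peak signal-to-noise ratio
```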
Abstract:
Pipeline leak detection is a matter of great interest for companies that transport petroleum and its derivatives, in the face of the rising demands of environmental policies in industrialized and industrializing countries. However, existing technologies are not yet fully consolidated, and many studies have been carried out to achieve better sensitivity and reliability in pipeline leak detection over a wide range of flowing conditions. In this context, this study presents the results obtained from frequency spectrum analysis of pipeline pressure signals under several flowing conditions, such as normal flow, leakages, and pump switching. The results show that it is possible to distinguish between the frequency spectra of these different flowing conditions, allowing recognition and annunciation of liquid pipeline leakages from pressure monitoring. Based on these results, a pipeline leak detection algorithm employing frequency analysis of pressure signals is proposed, along with a methodology for its tuning and calibration. The proposed algorithm and its tuning methodology are evaluated with data obtained from real leakage tests performed on pipelines transporting crude oil and water, in order to assess its sensitivity, reliability, and applicability to different flowing conditions.
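A hedged sketch of the spectral idea the abstract describes: compare the spectrum of a pressure-signal window against a calibrated normal-flow reference and flag windows whose band energy deviates beyond a tuned threshold. The band limits and threshold below are placeholders for the tuning parameters.

```python
import numpy as np

def band_energy(signal, fs, f_lo=0.1, f_hi=2.0):
    # Energy of the pressure signal in a chosen frequency band.
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return (spectrum[band] ** 2).sum()

def leak_alarm(window, reference_energy, fs, threshold=3.0):
    # Alarm when band energy departs from the calibrated normal-flow level.
    return band_energy(window, fs) > threshold * reference_energy
```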
Abstract:
Conventional methods for nonlinear blind source separation generally rely on a series of restrictions to obtain the solution, often leading to imperfect separation of the original sources and high computational cost. In this work, we propose an alternative measure of independence based on information theory and use artificial intelligence tools to solve linear and, later, nonlinear blind source separation problems. In the linear model, genetic algorithms and Rényi negentropy are applied as a measure of independence to find a separation matrix from linear mixtures of signals such as waveforms, audio, and images; a comparison is made with two types of Independent Component Analysis algorithms widespread in the literature. Subsequently, we use the same independence measure as the cost function of the genetic algorithm to recover source signals mixed by nonlinear functions, through a radial basis function artificial neural network. Genetic algorithms are powerful global search tools and are therefore well suited to blind source separation problems. Tests and analyses are carried out through computer simulations.
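A sketch of the linear stage: a genetic algorithm searching for an unmixing matrix W that maximizes the independence of y = Wx. As a stand-in for the Rényi negentropy used in the work, the fitness below uses absolute excess kurtosis, a common, simpler non-Gaussianity measure; this substitution is an assumption for illustration.

```python
import numpy as np

def fitness(W, X):
    # Non-Gaussianity of y = W x, summed over outputs; absolute excess
    # kurtosis stands in here for the Rényi negentropy (an assumption).
    Y = W @ X
    Y = (Y - Y.mean(1, keepdims=True)) / Y.std(1, keepdims=True)
    return np.abs((Y ** 4).mean(1) - 3.0).sum()

def ga_unmix(X, pop_size=40, generations=200, sigma=0.1):
    # X: (n, T) array of n mixed signals with T samples each.
    n = X.shape[0]
    rng = np.random.default_rng(0)
    pop = [rng.standard_normal((n, n)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda W: -fitness(W, X))   # most independent first
        elite = pop[: pop_size // 2]
        pop = elite + [elite[rng.integers(len(elite))]
                       + sigma * rng.standard_normal((n, n))
                       for _ in range(pop_size - len(elite))]
    return pop[0]                                 # estimated unmixing matrix
```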
Abstract:
This work develops a methodology for defining the maximum active power that can be injected at predefined nodes of the distribution networks under study, considering the possibility of multiple accesses of generating units. These maximum values are obtained from an optimization study in which the resulting losses must not exceed those of the base case, i.e., without distributed generation, while the loading limits of the branches and the voltage limits of the system are respected. To tackle the problem, an algorithm based on the particle swarm optimization method is proposed, applied to conventional AC load flow studies and to an optimal load flow that maximizes the penetration of distributed generation. Alternatively, the Newton-Raphson method was incorporated for solving the load flow. The computer program is implemented in the SCILAB software. The proposed algorithm is tested with data from the IEEE 14-node network and from a 25-node, high-voltage (69 kV) network from the state of Rio Grande do Norte. The algorithm defines allowed values of nominal active power of distributed generation, in percentage terms relative to the demand of the network, from reference values.
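A generic particle swarm optimization sketch of the kind the methodology builds on: each particle is a vector of DG active-power injections at the predefined nodes, and the (placeholder) objective would penalize candidates whose load-flow losses exceed the base case or that violate voltage/loading limits. In the work itself the objective is evaluated through the AC load flow.

```python
import numpy as np

def pso(objective, dim, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Minimize objective(x) for x in [lo, hi]^dim via standard PSO.
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions (P_DG per node)
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                     # keep injections feasible
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```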
Abstract:
This work presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed to seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and single-objective optimization; today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are those that change many components of the algorithm dynamically (self-organizing algorithms). Combinations of GAs with machine learning techniques to improve performance and usability are also common. In this work, a GA combined with a machine learning technique was analyzed and applied to antenna design. A variant of the bicubic interpolation technique, called 2D Spline, was used as the machine learning technique to estimate the behavior of a dynamic fitness function, based on knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the fitness of a candidate solution (individual) relative to the others in the same population. The algorithm can be applied in many areas, including telecommunications, for example in the design of antennas and frequency selective surfaces. In this particular work, the algorithm was developed to optimize the design of a microstrip antenna for Ultra-Wideband (UWB) wireless communication systems. The algorithm optimized two variables of the antenna geometry, the length (Ls) and width (Ws) of a slit in the ground plane, with respect to three objectives: radiated signal bandwidth, return loss, and central frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one spline per optimization objective, composing a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the result of a simulation program and with the measured result of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success regarding four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability, and accuracy. At the end of the study, an increase in execution time relative to a common GA was observed, due to the time required by the machine learning process. On the plus side, a noticeable gain was observed in the flexibility and accuracy of the results, along with a promising path toward extending the algorithm to optimization problems with "η" variables.
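A hedged sketch of the surrogate-fitness idea: a 2D spline fitted over a (Ws, Ls) grid of experimental responses stands in for the expensive antenna evaluation, one spline per objective, aggregated into a single fitness the GA can call cheaply. The grid values, ranges, and weights below are placeholders, not laboratory data.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Sample grid over the two geometry variables (ranges are assumptions).
ws_grid = np.linspace(2.0, 8.0, 7)      # slit width Ws samples (mm)
ls_grid = np.linspace(5.0, 20.0, 7)     # slit length Ls samples (mm)
rng = np.random.default_rng(0)
bandwidth = rng.random((7, 7))          # placeholder for measured bandwidth
return_loss = rng.random((7, 7))        # placeholder for measured return loss
freq_dev = rng.random((7, 7))           # placeholder for frequency deviation

# One spline per objective, fitted once from the experimental grid.
splines = [RectBivariateSpline(ws_grid, ls_grid, m)
           for m in (bandwidth, return_loss, freq_dev)]
weights = (1.0, 1.0, 1.0)               # aggregation weights (an assumption)

def surrogate_fitness(ws, ls):
    # Cheap aggregate fitness the GA evaluates instead of a full simulation.
    return sum(w * s(ws, ls)[0, 0] for w, s in zip(weights, splines))
```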
Abstract:
This work proposes a collaborative system for marking dangerous points along transport routes and generating alerts to drivers. It consists of a proximity warning system for danger points, fed by the drivers themselves through a mobile device equipped with GPS. The system consolidates the data provided by several different drivers and generates a set of common points to be used by the warning system. Although the application is designed to protect drivers, the data it generates can also serve as input for the responsible authorities to improve the signage and maintenance of public roads.
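A hedged sketch of the consolidation step described above: danger points reported by different drivers are snapped to a coarse grid, and a cell becomes a confirmed danger point once enough independent reports fall in it. The cell size and report threshold are illustrative assumptions.

```python
from collections import Counter

def consolidate(reports, cell_deg=0.001, min_reports=3):
    # reports: list of (latitude, longitude) pairs sent by drivers' devices.
    cells = Counter((round(lat / cell_deg), round(lon / cell_deg))
                    for lat, lon in reports)
    return [(i * cell_deg, j * cell_deg)       # cell centers with enough votes
            for (i, j), n in cells.items() if n >= min_reports]
```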
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. Acquisition, processing, and interpretation of seismic data are the stages of a seismic study. Seismic processing, in particular, is focused on imaging the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that delivered greater storage and digital processing capabilities, enabling more sophisticated processing algorithms, such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest, such as potential hydrocarbon reservoirs. However, a high-quality, accurate migration can be very time-consuming, due to the heuristics of the mathematical algorithms and the extensive amount of input and output data involved; it may take days, weeks, or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm, using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed and, ultimately, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors.
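The work parallelizes the RTM core with OpenMP in compiled code; as a rough Python analogue of the same shared-memory pattern, the 2D acoustic wave-propagation stencil below is parallelized over grid rows with numba's `prange` (each time step of RTM forward/backward propagation runs a loop of this shape). The finite-difference scheme shown is a standard second-order stencil, an illustrative assumption.

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def wave_step(p_next, p_cur, p_prev, vel2_dt2, inv_h2):
    # One time step of the 2D acoustic wave equation:
    # p^{n+1} = 2 p^n - p^{n-1} + v^2 dt^2 * laplacian(p^n).
    nz, nx = p_cur.shape
    for i in prange(1, nz - 1):          # rows handled by different threads
        for j in range(1, nx - 1):
            lap = (p_cur[i - 1, j] + p_cur[i + 1, j] + p_cur[i, j - 1]
                   + p_cur[i, j + 1] - 4.0 * p_cur[i, j]) * inv_h2
            p_next[i, j] = 2.0 * p_cur[i, j] - p_prev[i, j] + vel2_dt2[i, j] * lap
```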
Abstract:
This work presents a scalable and efficient parallel implementation of the standard Simplex algorithm on multicore architectures for solving large-scale linear programming problems. We present a general scheme explaining how each step of the standard Simplex algorithm was parallelized, indicating important points of the parallel implementation. Performance analyses were conducted by comparison with the sequential time of the tableau Simplex and of IBM CPLEX®. The experiments were executed on a shared-memory machine with 24 cores. The scalability analysis was performed with problems of different dimensions, finding evidence that our parallel standard Simplex algorithm has better parallel efficiency for problems with more variables than constraints. In comparison with CPLEX®, the proposed parallel algorithm achieved an efficiency up to 16 times better.
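A sketch of one standard Simplex tableau iteration in vectorized form; the steps the paper parallelizes across cores (choosing the entering column, the ratio test, and updating every row) appear here as whole-array numpy operations. It assumes a feasible tableau with the objective row (reduced costs) in row 0 and the right-hand side in the last column; degeneracy and unboundedness handling are omitted.

```python
import numpy as np

def simplex_pivot(T):
    # One pivot on tableau T. Returns (T, done); done=True means optimal.
    col = T[0, :-1].argmin()                     # entering variable
    if T[0, col] >= 0:
        return T, True                           # no improving column
    pos = T[1:, col] > 1e-12
    ratios = np.full(T.shape[0] - 1, np.inf)
    ratios[pos] = T[1:, -1][pos] / T[1:, col][pos]
    row = 1 + ratios.argmin()                    # leaving variable (ratio test)
    T[row] /= T[row, col]                        # normalize the pivot row
    mask = np.arange(T.shape[0]) != row
    T[mask] -= np.outer(T[mask, col], T[row])    # eliminate column in other rows
    return T, False
```

In the paper's setting, the row-update step is the dominant cost and is what the 24-core shared-memory machine parallelizes; the numpy version above delegates that same data-parallel work to vectorized array operations.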
Abstract:
Two of the main goals in the study of vehicle aerodynamics are safety and performance. These goals can be achieved through the development of devices that modify the airflow around the vehicle and also through changes in the shape of the external surfaces. A lower front-end profile, for example, strongly influences the aerodynamic drag coefficient (Cx) and, consequently, much of the pressure distribution along the vehicle's surface. The objective of this work was to analyze the aerodynamic behavior of two types of vehicle bodies, based on wind tunnel experiments, in order to determine the aerodynamic efficiency of each one. Two vehicle models were compared at 1:18 scale: one with a family configuration, very common in large cities, and another with a sportier, performance-oriented configuration. For the experiment, an open-circuit blower-type wind tunnel was built, in which the pressure distributions on the lower and upper surfaces of the models' bodies and the boundary-layer displacement were analyzed. Based on the theoretical background of this work regarding body aerodynamics, this dissertation considers that this line of research can yield positive results, making vehicles increasingly efficient and positively influencing safety and performance levels.
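A small sketch of the standard quantity behind the measured pressure distributions: the pressure coefficient at each tap, Cp = (p - p_inf) / (0.5 * rho * v^2), from which the comparison between the two body shapes can be drawn. The default air density and free-stream speed below are placeholders, not the experiment's values.

```python
def pressure_coefficient(p_tap, p_static, rho=1.225, v=20.0):
    # Cp at one pressure tap relative to free-stream static pressure.
    q = 0.5 * rho * v ** 2            # dynamic pressure of the free stream
    return (p_tap - p_static) / q
```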