990 results for evolution algorithm
Abstract:
Voltage and current waveforms of a distribution or transmission power system are not pure sinusoids. There are distortions in these waveforms that can be represented as a combination of the fundamental frequency, harmonics and high-frequency transients. This paper presents a novel approach to identifying harmonics in distorted power system waveforms. The proposed method is based on Genetic Algorithms, an optimization technique inspired by genetics and natural evolution. GOOAL, a specially designed intelligent algorithm for optimization problems, was successfully implemented and tested. Two chromosome representations are used: binary and real. The results show that the proposed method is more precise than the traditional Fourier Transform, especially with the real representation of the chromosomes.
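As a rough illustration of the real-chromosome variant described above, the following sketch evolves the amplitude and phase of a single harmonic to fit a synthetic distorted waveform. The waveform, the genetic operators and all parameters are assumptions of this sketch, not the paper's GOOAL algorithm.

```python
# Minimal real-coded GA for harmonic identification (illustrative only).
import math, random

random.seed(1)
F0 = 60.0                              # fundamental frequency (Hz), assumed
T = [n / 1920.0 for n in range(64)]    # sample instants
# synthetic signal: fundamental + 3rd harmonic (amplitude 0.2, phase 0.5 rad)
SIG = [math.sin(2*math.pi*F0*t) + 0.2*math.sin(2*math.pi*3*F0*t + 0.5)
       for t in T]

def fitness(ch):
    """Squared error between the model and the measured waveform."""
    a, ph = ch                         # chromosome = (amplitude, phase)
    return sum((s - (math.sin(2*math.pi*F0*t)
                     + a*math.sin(2*math.pi*3*F0*t + ph))) ** 2
               for t, s in zip(T, SIG))

def evolve(pop_size=40, gens=200):
    pop = [(random.uniform(0, 1), random.uniform(-math.pi, math.pi))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p, q = random.sample(parents, 2)
            w = random.random()                # arithmetic crossover
            child = tuple(w*x + (1 - w)*y for x, y in zip(p, q))
            # small Gaussian mutation on each gene
            child = tuple(g + random.gauss(0, 0.02) for g in child)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best)    # approaches the true (0.2, 0.5)
```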
Abstract:
In this article a novel algorithm based on the chemotaxis process of Escherichia coli is developed to solve multiobjective optimization problems. The algorithm uses a fast nondominated sorting procedure, communication between colony members and a simple chemotactic strategy to change the bacterial positions in order to explore the search space and find several optimal solutions. The proposed algorithm is validated on 11 benchmark problems, using three different performance measures to compare its performance with the NSGA-II genetic algorithm and with the particle swarm-based algorithm NSPSO. (C) 2009 Elsevier Ltd. All rights reserved.
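The fast nondominated sorting procedure mentioned above (introduced with NSGA-II) can be sketched as follows, assuming minimization of all objectives; this is a generic illustration, not the paper's code.

```python
# Fast nondominated sorting: partition objective vectors into Pareto fronts.
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def fast_nondominated_sort(points):
    """Return lists of point indices per front; front 0 is the Pareto set."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # indices each point dominates
    dom_count = [0] * n                     # how many points dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:       # only members of front k+1 remain
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
print(fast_nondominated_sort(pts))   # → [[0, 1, 2], [3], [4]]
```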
Abstract:
The general flowshop scheduling problem is a production problem where a set of n jobs have to be processed with an identical flow pattern on m machines. In permutation flowshops the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, particularly for evaluating schemata directly. The population, initially formed only by schemata, evolves under recombination into a population of well-adapted structures (schemata instantiation). The CGA implemented is based on the classic NEH heuristic and a local search heuristic used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
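The NEH heuristic the CGA builds on inserts jobs, ordered by decreasing total processing time, at the position that minimizes the partial makespan. A minimal sketch on a made-up 4-job, 3-machine instance (the CGA itself is not reproduced):

```python
# Classic NEH constructive heuristic for permutation flowshop makespan.
def makespan(seq, p):
    """Makespan of job sequence seq given processing times p[job][machine]."""
    m = len(p[0])
    c = [0] * m                       # completion time on each machine
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            # job j starts on machine k when both the machine and the
            # job's previous operation are done
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        # try every insertion position, keep the best partial sequence
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

# toy instance: 4 jobs x 3 machines (processing times are made up)
p = [[5, 4, 4], [2, 4, 5], [4, 2, 3], [3, 5, 1]]
print(neh(p))   # → ([1, 0, 2, 3], 20)
```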
A hybrid Particle Swarm Optimization - Simplex algorithm (PSOS) for structural damage identification
Abstract:
This study proposes a new PSOS-model-based damage identification procedure using frequency domain data. The formulation of the objective function for the minimization problem is based on the Frequency Response Functions (FRFs) of the system. A novel strategy for the control of the Particle Swarm Optimization (PSO) parameters based on the Nelder-Mead algorithm (Simplex method) is presented; consequently, the convergence of the PSOS becomes independent of the heuristic constants, and its stability and reliability are enhanced. The formulated hybrid method performs better on different benchmark functions than Simulated Annealing (SA) and the basic PSO (PSO(b)). Two damage identification problems, taking into consideration the effects of noisy and incomplete data, were studied: first, a 10-bar truss and, second, a cracked free-free beam, both modeled with finite elements. In these cases, the damage location and extent were successfully determined. Finally, a non-linear oscillator (Duffing oscillator) was identified by PSOS, providing good results. (C) 2009 Elsevier Ltd. All rights reserved.
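The basic PSO ("PSO(b)") that the hybrid PSOS builds on can be sketched as below, minimizing the 2-D sphere benchmark. The Simplex-based parameter control of the paper is not reproduced; the constants w, c1, c2 are common defaults, assumed here.

```python
# Basic particle swarm optimization on the sphere function (illustrative).
import random

random.seed(3)

def sphere(x):
    return sum(v * v for v in x)

def pso(f, dim=2, n=20, iters=300, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = min(pbest, key=f)[:]                 # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive + social velocity terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(sphere)
print(best, sphere(best))    # near the origin
```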
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm. Instead of using the same set of features throughout training, the AME approach tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments performed well, showing improvements in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several important research directions are proposed as future work.
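The insert-or-remove idea can be sketched as a greedy loop: at each iteration, try adding or dropping one feature and keep the single move that most improves a score. The score function below is a toy stand-in (the paper uses MaxEnt training gain), so everything here is an assumption of this sketch.

```python
# Greedy single-feature insertion/removal (illustration of the AME idea).
def adaptive_selection(features, score, max_iter=20):
    active = set()
    best = score(active)
    for _ in range(max_iter):
        # candidate moves: add one inactive feature or drop one active one
        moves = [active | {f} for f in features if f not in active]
        moves += [active - {f} for f in active]
        cand = max(moves, key=score, default=active)
        if score(cand) <= best:
            break                       # converged: no single move helps
        active, best = cand, score(cand)
    return active, best

# toy score: reward features in a "useful" set, penalize extras
target = {"a", "c"}
def score(s):
    return len(s & target) - 0.5 * len(s - target)

print(adaptive_selection(["a", "b", "c", "d"], score))   # → ({'a', 'c'}, 2)
```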
Abstract:
This paper presents a free software tool that supports next-generation mobile communications through the automatic generation of models of components and electronic devices based on neural networks. This tool enables the creation, training, validation and simulation of the model directly from measurements made on devices of interest, using an interface fully oriented to non-experts in neural models. The resulting model can be exported automatically to a traditional circuit simulator to test different scenarios.
Abstract:
This paper presents a new methodology to estimate unbalanced harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm uses evolutionary strategies (ES), a branch of evolutionary algorithms. The problem-solving algorithm proposed herein makes use of data from various power quality meters, which can be synchronized either by high-technology GPS devices or by using information from a fundamental frequency load flow, which makes the overall power quality monitoring system much less costly. The ES-based harmonic estimation model is applied to a 14-bus network to compare its performance with a conventional Monte Carlo approach. It is also applied to a 50-bus subtransmission network in order to compare the three-phase and single-phase approaches, as well as the robustness of the proposed method. (C) 2010 Elsevier B.V. All rights reserved.
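The ES family referred to above can be illustrated with its simplest member, a (1+1) evolution strategy with 1/5th-success-rule step adaptation, applied here to a toy two-parameter error function rather than the paper's network-wide estimator; the objective and all constants are assumptions of this sketch.

```python
# (1+1) evolution strategy with 1/5th-rule step-size adaptation.
import random

random.seed(7)

def err(x):
    # toy objective: squared distance to an assumed "true" state (0.3, -1.2)
    return (x[0] - 0.3) ** 2 + (x[1] + 1.2) ** 2

def one_plus_one_es(f, x0, sigma=0.5, iters=400):
    x, fx = list(x0), f(x0)
    successes = 0
    for k in range(1, iters + 1):
        y = [xi + random.gauss(0, sigma) for xi in x]   # mutate parent
        if f(y) < fx:
            x, fx = y, f(y)                             # accept improvement
            successes += 1
        if k % 20 == 0:
            # 1/5th rule: widen the search if >1/5 of trials succeeded,
            # otherwise narrow it
            sigma *= 1.5 if successes > 4 else 0.6
            successes = 0
    return x, fx

x, fx = one_plus_one_es(err, [0.0, 0.0])
print(x, fx)    # x near (0.3, -1.2), fx near 0
```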
Abstract:
An improvement to a quality two-dimensional Delaunay mesh generation algorithm, combining the mesh refinement strategies of Ruppert and Shewchuk, is proposed in this research. The developed technique uses the diametral lens criterion, introduced by L. P. Chew, to eliminate the extremely obtuse triangles in the boundary mesh. This method splits the boundary segments and obtains an initial pre-refinement, thus reducing the number of iterations needed to generate a high-quality sequential triangulation. Moreover, it decreases the intensity of communication and synchronization between subdomains in parallel mesh refinement.
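The encroachment test behind the diametral lens criterion can be sketched via subtended angles: one common formulation says a point encroaches a segment when the angle it subtends reaches 120 degrees (the plain diametral-circle rule uses 90 degrees). Treat the exact threshold as an assumption of this sketch, not a statement of the paper's implementation.

```python
# Angle-based encroachment test (diametral lens, assumed 120 deg threshold).
import math

def subtended_angle(p, a, b):
    """Angle apb in degrees."""
    ux, uy = a[0] - p[0], a[1] - p[1]
    vx, vy = b[0] - p[0], b[1] - p[1]
    cosang = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

def encroaches_lens(p, a, b, threshold=120.0):
    return subtended_angle(p, a, b) >= threshold

a, b = (0.0, 0.0), (2.0, 0.0)
print(encroaches_lens((1.0, 0.2), a, b))   # point close to the segment: True
print(encroaches_lens((1.0, 2.0), a, b))   # point far from the segment: False
```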
Abstract:
Following the approach developed for rods in Part 1 of this paper (Pimenta et al. in Comput. Mech. 42:715-732, 2008), this work presents a fully conserving algorithm for the integration of the equations of motion in nonlinear shell dynamics. We begin with a re-parameterization of the rotation field in terms of the so-called Rodrigues rotation vector, allowing for an extremely simple update of the rotational variables within the scheme. The weak form is constructed via non-orthogonal projection, the time-collocation of which ensures exact conservation of momentum and total energy in the absence of external forces. Appealing is the fact that general hyperelastic materials (and not only materials with quadratic potentials) are permitted in a totally consistent way. Spatial discretization is performed using the finite element method and the robust performance of the scheme is demonstrated by means of numerical examples.
Abstract:
A fully conserving algorithm is developed in this paper for the integration of the equations of motion in nonlinear rod dynamics. The starting point is a re-parameterization of the rotation field in terms of the so-called Rodrigues rotation vector, which results in an extremely simple update of the rotational variables. The weak form is constructed with a non-orthogonal projection corresponding to the application of the virtual power theorem. Together with an appropriate time-collocation, it ensures exact conservation of momentum and total energy in the absence of external forces. Appealing is the fact that nonlinear hyperelastic materials (and not only materials with quadratic potentials) are permitted without any prejudice on the conservation properties. Spatial discretization is performed via the finite element method and the performance of the scheme is assessed by means of several numerical simulations.
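The "extremely simple update" both of these abstracts attribute to the Rodrigues rotation vector can be made concrete. As a hedged sketch (conventions, scaling and the sign of the cross-product term vary between references and with the composition order; this follows one common choice):

```latex
% Rodrigues parameterization of a rotation Q (one common convention;
% not copied from the paper):
\[
\mathbf{Q}(\boldsymbol{\alpha}) = \mathbf{I}
  + \frac{4}{4+\boldsymbol{\alpha}\cdot\boldsymbol{\alpha}}
    \left(\mathbf{A} + \tfrac{1}{2}\mathbf{A}^{2}\right),
\qquad
\mathbf{A} = \operatorname{Skew}(\boldsymbol{\alpha}),
\qquad
\boldsymbol{\alpha} = 2\tan(\theta/2)\,\mathbf{e}.
\]
% The update is simple because composing an incremental rotation
% \alpha_\Delta with the current one \alpha_1 stays purely algebraic:
\[
\boldsymbol{\alpha}_{2} =
\frac{\boldsymbol{\alpha}_{1} + \boldsymbol{\alpha}_{\Delta}
      - \tfrac{1}{2}\,\boldsymbol{\alpha}_{1}\times\boldsymbol{\alpha}_{\Delta}}
     {1 - \tfrac{1}{4}\,\boldsymbol{\alpha}_{1}\cdot\boldsymbol{\alpha}_{\Delta}}.
\]
```

No trigonometric functions need to be evaluated during the update, which is why the rotational degrees of freedom can be advanced so cheaply within the conserving scheme.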
Abstract:
In this work, the applicability of a new algorithm for the estimation of mechanical properties from instrumented indentation data was studied for thin films. The applicability was analyzed with the aid of both three-dimensional finite element simulations and experimental indentation tests. The numerical approach allowed the effect of the substrate on the estimation of the mechanical properties of the film to be studied, based on the ratio h(max)/l between maximum indentation depth and film thickness. For the experimental analysis, indentation tests were conducted on AISI H13 tool steel specimens, plasma nitrided and coated with TiN thin films. The results indicate that, for the conditions analyzed in this work, the elastic deformation of the substrate limited the extraction of the mechanical properties of the film/substrate system. This limitation occurred even at low h(max)/l ratios, especially for the estimation of the yield strength and strain hardening exponent. At indentation depths lower than 4% of the film thickness, the proposed algorithm estimated the mechanical properties of the film accurately. For hardness in particular, precise values were estimated at h(max)/l lower than 0.1, i.e. 10% of the film thickness. (C) 2010 Published by Elsevier B.V.
Abstract:
One objective of electrical impedance tomography is to estimate the electrical resistivity distribution in a domain based only on electrical potential measurements at its boundary, generated by an electrical current distribution imposed on the boundary. One of the methods used for dynamic estimation is the Kalman filter. In biomedical applications, the random walk model is frequently used as the evolution model and, under these conditions, poor tracking ability of the extended Kalman filter (EKF) is achieved. An analytically developed evolution model is not feasible at present. This paper investigates identifying the evolution model in parallel with the EKF and updating the evolution model periodically. The evolution model transition matrix is identified using the history of the estimated resistivity distribution obtained by a sensitivity-matrix-based algorithm and a Newton-Raphson algorithm. To numerically identify the linear evolution model, the Ibrahim time-domain method is used. The investigation is performed on numerical simulations of a domain with time-varying resistivity and on experimental data collected from the boundary of a human chest during normal breathing. The obtained dynamic resistivity values lie within the expected range for the tissues of a human chest. The EKF results suggest that the tracking ability is significantly improved with this approach.
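The random-walk evolution model the abstract starts from can be illustrated with a scalar Kalman filter (the linear special case of the EKF). Everything below — the breathing-like signal, noise levels and tuning constants — is a made-up toy, not the paper's resistivity estimator; the paper's contribution is replacing the fixed F = 1 transition with an identified one.

```python
# Scalar Kalman filter with a random-walk evolution model x_k = x_{k-1} + w.
import math, random

random.seed(0)

def kalman_random_walk(zs, q=1e-3, r=0.0025):
    """q: process-noise variance, r: measurement-noise variance (assumed)."""
    x, p = zs[0], 1.0
    out = []
    for z in zs:
        p += q                    # predict: state unchanged, covariance grows
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with measurement z
        p *= (1 - k)
        out.append(x)
    return out

# measurements: slowly varying sinusoidal truth (breathing-like) plus noise
truth = [1.0 + 0.2 * math.sin(0.1 * t) for t in range(200)]
zs = [v + random.gauss(0, 0.05) for v in truth]
est = kalman_random_walk(zs)
rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))
print(rmse)    # below the raw measurement-noise level of 0.05
```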
Abstract:
Since the 1990s several large companies have been publishing nonfinancial performance reports. Focusing initially on the physical environment, these reports evolved to consider social relations, as well as data on the firm's economic performance. A few mining companies pioneered this trend, and in recent years some of them incorporated the three dimensions of sustainable development, publishing so-called sustainability reports. This article reviews 31 reports published between 2001 and 2006 by four major mining companies. A set of 62 assessment items organized in six categories (namely context and commitment, management, environmental, social and economic performance, and accessibility and assurance) was selected to guide the review. The items were derived from the international literature and recommended best practices, including the Global Reporting Initiative G3 framework. A content analysis was performed using the report as a sampling unit, and using phrases, graphics, or tables containing certain information as data collection units. A basic rating scale (0 or 1) was used to note the presence or absence of information, and a final percentage score was obtained for each report. Results show a clear evolution in the reports' comprehensiveness and depth. The categories "accessibility and assurance" and "economic performance" had the lowest scores and do not present a clear evolution trend over the period, whereas the categories "context and commitment" and "social performance" presented the best results and regular improvement; the category "environmental performance," although it did not reach the highest scores, also featured constant evolution. Description of data measurement techniques and more comprehensive third-party verification are the items most in need of improvement.
Abstract:
Ni-doped SnO(2) nanoparticles, promising for gas-sensing applications, have been synthesized by a polymer precursor method. X-ray diffraction (XRD) and transmission electron microscopy (TEM) data analyses indicate the exclusive formation of nanosized particles with a rutile-type phase (tetragonal SnO(2)) for Ni contents below 10 mol%. The mean crystallite size shows a progressive reduction with increasing Ni content. Room-temperature Raman spectra of Ni-doped SnO(2) nanoparticles show the presence of Raman-active modes and modes activated by size effects. From the evolution of the A(1g) mode with the Ni content, a solubility limit at ~2 mol% was estimated. Below that content, the Raman results are consistent with the occurrence of solid solution and surface segregation of Ni ions. Above ~2 mol% Ni, the redshift of the A(1g) mode suggests that surface segregation of Ni ions takes place. Disorder-activated bands were determined, and the evolution of their integrated intensity with the Ni content suggests that the solid-solution regime favors an increase in disorder, whereas the disorder becomes weaker as the Ni content is increased further. Copyright (C) 2010 John Wiley & Sons, Ltd.
Abstract:
Medium-carbon steels are mostly used for simple applications; however, new applications have been developed for which good sheet metal formability is required. These types of steels have inherently low formability. A medium-carbon hot-rolled SAE 1050 steel was selected for this study. It was cold rolled with thickness reductions varying between 7 and 80%. The samples obtained were used to evaluate the strain hardening curve. For samples with 50 and 80% thickness reductions, an annealing heat treatment was performed to achieve recrystallization. The material was characterized in the "as-received", cold-rolled and annealed conditions using several methods: optical metallography, X-ray diffraction (texture), Vickers hardness, and tensile testing. For large thickness reductions, the SAE 1050 steel presented low elongation, less than 2%, and yield strength (YS) and tensile strength (TS) around 1400 MPa. Texture in the "as-received" condition showed strong components on the {001} plane, in the <100>, <210> and <110> directions. After cold rolling, the texture did not present any significant changes for small thickness reductions; however, it changed completely for large ones, where the gamma (<111>//ND), alpha (<110>//RD), and gamma prime (<223>//ND) fibres were strengthened. After annealing, the microstructure of the SAE 1050 steel was characterized by recrystallized ferrite and globular cementite. There was little change in the alpha fibre for the 50% reduction, whereas for the 80% reduction its intensity increased. Both gamma and gamma prime fibres vanished upon annealing for the 50 and 80% reductions alike. (c) 2008 Elsevier B.V. All rights reserved.