989 results for Algorithm Comparison
Abstract:
In this article, a novel algorithm based on the chemotaxis process of Escherichia coli is developed to solve multiobjective optimization problems. The algorithm uses a fast nondominated sorting procedure, communication between colony members, and a simple chemotactic strategy to change the bacterial positions, exploring the search space to find several optimal solutions. The proposed algorithm is validated on 11 benchmark problems, using three different performance measures to compare its performance with the NSGA-II genetic algorithm and with the particle swarm-based algorithm NSPSO. (C) 2009 Elsevier Ltd. All rights reserved.
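The fast nondominated sorting step mentioned above is the procedure popularized by NSGA-II; a minimal sketch (a naive quadratic-time version for minimization, with invented objective values, not the authors' implementation) is:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(points):
    """Partition solution indices into successive Pareto fronts."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        # A point joins the current front if nothing remaining dominates it.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy bi-objective values (both objectives minimized).
pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
fronts = fast_nondominated_sort(pts)
```

Each front contains mutually nondominated solutions; the first front is the current Pareto-optimal set of the population.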
Abstract:
The general flowshop scheduling problem is a production problem in which a set of n jobs must be processed with an identical flow pattern on m machines. In permutation flowshops, the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop so as to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, in particular for evaluating schemata directly. The population, initially formed only of schemata, evolves under recombination into a population of well-adapted structures (schemata instantiation). The CGA implemented is based on the classic NEH heuristic and on a local search heuristic used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
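The NEH heuristic underlying the CGA orders jobs by decreasing total processing time and inserts each job at its best position in the partial sequence; a minimal sketch on an invented toy instance is:

```python
def makespan(seq, p):
    """Completion time of the last job on the last machine.
    p[j][m] = processing time of job j on machine m."""
    m = len(p[0])
    c = [0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    """Classic NEH constructive heuristic for permutation flowshops."""
    n = len(p)
    # Order jobs by decreasing total processing time.
    order = sorted(range(n), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    # Insert each remaining job at the position minimizing the makespan.
    for j in order[1:]:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

# Toy instance: 4 jobs x 3 machines (invented data).
p = [[5, 4, 4], [2, 4, 4], [4, 4, 1], [3, 2, 3]]
seq, cmax = neh(p)
```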
A hybrid Particle Swarm Optimization - Simplex algorithm (PSOS) for structural damage identification
Abstract:
This study proposes a new PSOS-model-based damage identification procedure using frequency domain data. The objective function for the minimization problem is formulated from the Frequency Response Functions (FRFs) of the system. A novel strategy for controlling the Particle Swarm Optimization (PSO) parameters based on the Nelder-Mead algorithm (Simplex method) is presented; consequently, the convergence of the PSOS becomes independent of the heuristic constants, and its stability and reliability are enhanced. The formulated hybrid method performs better on different benchmark functions than Simulated Annealing (SA) and the basic PSO (PSO(b)). Two damage identification problems, taking into consideration the effects of noisy and incomplete data, were studied: first a 10-bar truss and second a cracked free-free beam, both modeled with finite elements. In these cases, the damage location and extent were successfully determined. Finally, a non-linear oscillator (Duffing oscillator) was identified by PSOS with good results. (C) 2009 Elsevier Ltd. All rights reserved.
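For reference, the basic PSO (PSO(b)) baseline can be sketched as follows, minimizing the sphere benchmark function; the swarm size, coefficients and objective are illustrative and not the study's actual setup:

```python
import random

def pso_sphere(dim=2, n_particles=20, iters=200, seed=1):
    """Basic PSO minimizing f(x) = sum(x_i^2); all constants illustrative."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                  # personal best positions
    g = min(P, key=f)[:]                   # global best position
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):          # update personal and global bests
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g, f(g)

best, best_val = pso_sphere()
```

The hybrid PSOS replaces the fixed constants w, c1, c2 with values tuned online by a Nelder-Mead search, which is what makes its convergence insensitive to this hand-picked choice.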
Abstract:
Determining reference concentrations in rivers and streams is an important tool for environmental management. Reference conditions for eutrophication-related water variables are unavailable for Brazilian freshwaters. We aimed to establish reference baselines for São Paulo State tropical rivers and streams for total phosphorus (TP), total nitrogen (TN), ammonia-nitrogen (NH₄⁺) and Biochemical Oxygen Demand (BOD), using the best professional judgment and trisection methods. Data from 319 sites monitored by the São Paulo State Environmental Company (2005 to 2009) and from the 22 Water Resources Management Units in São Paulo State were assessed (N = 27,131). We verified that data from different management units dominated by similar land cover could be analyzed together (Analysis of Variance, P = 0.504). Cumulative frequency diagrams showed that industrialized management units were characterized by the worst water quality (e.g. average TP of 0.51 mg/L), followed by agricultural watersheds. TN and NH₄⁺ were associated with urban land-cover percentages and population density (Spearman Rank Correlation Test, P < 0.05). The best professional judgment and trisection (median of the lower third of all sites) methods for determining reference concentrations showed agreement: 0.03 & 0.04 mg/L (TP), 0.31 & 0.34 mg/L (TN), 0.06 & 0.10 mg-N/L (NH₄⁺) and 2 & 2 mg/L (BOD), respectively. Our reference concentrations were similar to TP and TN reference values proposed for temperate water bodies. These baselines can help with water management in São Paulo State, as well as providing some of the first such information for tropical ecosystems.
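The trisection rule described above (reference value = median of the lower third of all site values) can be sketched as follows; the data are invented:

```python
def trisection_reference(values):
    """Reference concentration: median of the lower third of site values."""
    ordered = sorted(values)
    lower_third = ordered[: max(1, len(ordered) // 3)]
    k = len(lower_third)
    mid = k // 2
    if k % 2:                      # odd count: middle element
        return lower_third[mid]
    return (lower_third[mid - 1] + lower_third[mid]) / 2

# Toy total-phosphorus values (mg/L) from nine hypothetical sites;
# the lower third is [0.02, 0.03, 0.05], so the reference is its median.
tp = [0.02, 0.03, 0.05, 0.08, 0.12, 0.20, 0.35, 0.51, 0.60]
ref = trisection_reference(tp)
```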
Abstract:
This paper investigates the validity of a simplified equivalent-reservoir representation of a multi-reservoir hydroelectric system for modelling its optimal operation for power maximization. This simplification, proposed by Arvanitidis and Rosing (IEEE Trans Power Appar Syst 89(2):319-325, 1970), represents the system as a single potential-energy equivalent reservoir with energy inflows and outflows. The hydroelectric system is also modelled for power maximization considering individual reservoir characteristics, without simplifications. Both optimization models employed the MINOS package to solve the resulting non-linear programming problems. A comparison of the total optimized power generation over the planning horizon by the two methods shows that the equivalent reservoir is capable of producing satisfactory power estimates, with less than 6% underestimation. The generation and total reservoir storage trajectories along the planning horizon obtained by the equivalent-reservoir method, however, presented significant discrepancies compared with those found in the detailed modelling. This study is motivated by the fact that Brazilian generation system operations are based on the equivalent-reservoir method as part of the power dispatch procedures. The potential-energy equivalent reservoir is an alternative that eliminates problems with the dimensionality of state variables in a dynamic programming model.
Abstract:
Neodymium-doped and undoped aluminum oxide samples were obtained using two different techniques: Pechini and sol-gel. Fine-grained powders were produced by both procedures and analyzed using Scanning Electron Microscopy (SEM) and Thermo-Stimulated Luminescence (TSL). Results showed that the incorporation of neodymium ions is responsible for the creation of two new TSL peaks (125 and 265 degrees C) and also for the enhancement of the intrinsic TSL peak at 190 degrees C. An explanation is proposed for these observations. SEM gave the dimensions of the clusters produced by each method, showing that those obtained by Pechini are smaller than the ones produced by sol-gel, which may also explain the higher emission of the former. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
The compositions of canola, soybean, corn, cottonseed and sunflower oils suggest that they exhibit substantially different propensities for oxidation, following the order canola < corn < cottonseed < sunflower ≈ soybean. These data suggest that any of the vegetable oils evaluated could be blended with minimal impact on viscosity, although compositional differences would surely affect oxidative stability. Cooling curve analysis showed that similar cooling profiles were obtained for the different vegetable oils. Interestingly, no film boiling or transition nucleate boiling was observed with any of the vegetable oils; heat transfer occurs only by pure nucleate boiling and convection. The high-temperature cooling of vegetable oils is considerably faster than that observed for petroleum oil-based quenchants. (C) 2010 Journal of Mechanical Engineering. All rights reserved.
Abstract:
Four different architectural acrylic paint formulations were tested by exposure to weathering for 7 years at the urban site of São Paulo and the coastal site of Ubatuba, South-East Brazil. Surface discoloration and detachment of coatings were assessed, and the components of the biofilms were identified by standard microbiological methods. The painted surfaces of the mortar panels were much more discolored in Ubatuba, where the major components of the biofilms were the cyanobacteria Gloeocapsa and Scytonema. In two of the four paint films, a pink coloration on the surface at this coastal site, caused mainly by red-pigmented Gloeocapsa, produced high discoloration ratings but low degradation (as measured by detachment). Biofilms in São Paulo contained the same range of phototrophs, but in lesser quantity; fungal numbers, as determined by plating, were higher, however. Detachment ratings at this urban site were only slightly lower than in Ubatuba. The matt paint performed worst of the four, with the silk and semi-gloss finishes giving the lowest biodeterioration ratings. The matt elastomeric paint performed well at both sites, apart from becoming almost 100% covered by the pink biofilm in Ubatuba. Unpainted mortar panels became intensely discolored with a black biofilm, showing that all the paints had achieved one of their objectives: surface protection of the substrate. The value of PVC (pigment volume content) as an indicator of coating biosusceptibility is questioned. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
The continuous growth of peer-to-peer networks has made them responsible for a considerable portion of current Internet traffic. For this reason, improvements in the use of P2P network resources are of central importance. One effective approach to this issue is the deployment of locality algorithms, which allow the system to optimize its peer selection policy for different network situations and thus maximize performance. To date, several locality algorithms have been proposed for use in P2P networks. However, they usually adopt heterogeneous criteria for measuring the proximity between peers, which hinders a coherent comparison between the different solutions. In this paper, we develop a thorough review of popular locality algorithms based on three main characteristics: the adopted network architecture, the distance metric, and the resulting peer selection algorithm. As a result of this study, we propose a novel and generic taxonomy for locality algorithms in peer-to-peer networks, aiming to enable a better and more coherent evaluation of any individual locality algorithm.
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same set of features throughout training, AME tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments were promising, showing gains in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several important research directions are proposed as future work.
Abstract:
This paper presents a free software tool that supports next-generation mobile communications through the automatic generation of neural network models of electronic components and devices. The tool enables the creation, training, validation and simulation of a model directly from measurements made on the devices of interest, using an interface designed for non-experts in neural models. The resulting model can be exported automatically to a traditional circuit simulator to test different scenarios.
Abstract:
This paper presents a new methodology to estimate unbalanced harmonic distortions in a power system based on measurements at a limited number of given sites. The algorithm utilizes evolutionary strategies (ES), a branch of evolutionary algorithms. The proposed algorithm makes use of data from various power quality meters, which can be synchronized either by high-technology GPS devices or by using information from a fundamental-frequency load flow, making the overall power quality monitoring system much less costly. The ES-based harmonic estimation model is applied to a 14-bus network to compare its performance with a conventional Monte Carlo approach. It is also applied to a 50-bus subtransmission network to compare the three-phase and single-phase approaches, as well as to assess the robustness of the proposed method. (C) 2010 Elsevier B.V. All rights reserved.
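Evolution strategies of the kind used above mutate candidate solutions with Gaussian noise and adapt the step size from the success rate; a minimal (1+1)-ES sketch on a toy quadratic (the harmonic-estimation objective is replaced, and all constants are invented for illustration) is:

```python
import random

def es_minimize(f, x0, sigma=0.5, iters=500, seed=3):
    """(1+1) evolution strategy with a simplified success-based step-size rule."""
    rng = random.Random(seed)
    x, fx = x0[:], f(x0)
    for _ in range(iters):
        # Mutate every coordinate with Gaussian noise of width sigma.
        y = [v + rng.gauss(0, sigma) for v in x]
        fy = f(y)
        if fy <= fx:            # keep the child only if it is no worse
            x, fx = y, fy
            sigma *= 1.1        # success: widen the search (1/5th-rule flavor)
        else:
            sigma *= 0.98       # failure: narrow the search slightly
    return x, fx

# Toy objective standing in for the harmonic-estimation error function.
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
x, fx = es_minimize(f, [5.0, 5.0])
```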
Abstract:
This research proposes an improvement to quality two-dimensional Delaunay mesh generation that combines the mesh refinement strategies of Ruppert and Shewchuk. The developed technique uses the diametral-lens criterion, introduced by L. P. Chew, to eliminate extremely obtuse triangles along the mesh boundary. The method splits the boundary segments to obtain an initial pre-refinement, thus reducing the number of iterations needed to generate a high-quality sequential triangulation. Moreover, it decreases the intensity of communication and synchronization between subdomains in parallel mesh refinement.
Abstract:
This paper compares the behaviour of two different control structures for automatic voltage regulators of synchronous machines equipped with static excitation systems. These systems have a fully controlled thyristor bridge that supplies DC current to the rotor winding. The rectifier bridge is fed from the stator terminals through a step-down transformer. The first control structure, named "Direct Control", has a single proportional-integral (PI) regulator that compares the stator voltage setpoint with the measured voltage and acts directly on the thyristor bridge's firing angle. This control structure is usually employed in commercial excitation systems for hydrogenerators. The second structure, named "Cascade Control", was inspired by the control loops of commercial DC motor drives. Such drives employ two PIs in a cascade arrangement: the external PI deals with the motor speed while the internal one regulates the armature current. In the adaptation proposed here, the external PI compares the setpoint with the actual stator voltage and produces the setpoint for the internal PI loop, which controls the field current.
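The cascade arrangement can be illustrated with a discrete-time toy simulation: an outer PI turns the voltage error into a field-current setpoint for an inner PI. The first-order plant models, gains and time step below are all invented for illustration, not the paper's machine model:

```python
class PI:
    """Discrete-time proportional-integral regulator (Euler integration)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.acc = kp, ki, dt, 0.0

    def step(self, error):
        self.acc += error * self.dt
        return self.kp * error + self.ki * self.acc

dt = 0.01
outer = PI(kp=2.0, ki=1.0, dt=dt)   # voltage loop (illustrative gains)
inner = PI(kp=5.0, ki=2.0, dt=dt)   # field-current loop (illustrative gains)

v, i_f, v_ref = 0.0, 0.0, 1.0       # stator voltage, field current, setpoint (p.u.)
for _ in range(2000):
    i_ref = outer.step(v_ref - v)   # outer PI: voltage error -> current setpoint
    u = inner.step(i_ref - i_f)     # inner PI: current error -> firing command
    i_f += dt * (u - i_f)           # toy first-order field-current dynamics
    v += dt * (i_f - v)             # toy first-order stator-voltage dynamics
```

With both integrators present, the steady state forces the voltage error and the current error to zero, which is the point of the cascade: the inner loop handles the fast field-current dynamics while the outer loop only has to shape the slower voltage response.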
Abstract:
Most post-processors for boundary element (BE) analysis use an auxiliary domain mesh to display domain results, working against the convenient modelling process of a pure boundary discretization. This paper introduces a novel visualization technique that preserves the basic properties of boundary element methods. The proposed algorithm does not require any domain discretization and is based on the direct and automatic identification of isolines. Another critical aspect of visualizing domain results in BE analysis is the effort required to evaluate results at interior points. To tackle this issue, the present article also compares the performance of two different BE formulations (conventional and hybrid). In addition, the paper presents an overview of the most common post-processing and visualization techniques in BE analysis, such as the classical scan-line algorithm and interpolation over a domain discretization. The results presented herein show that the proposed algorithm offers very high performance compared with other visualization procedures.