855 results for Evolutionary optimization methods
Abstract:
We present a generic method/model for the multi-objective design optimization of laminated composite components, based on the vector evaluated particle swarm optimization (VEPSO) algorithm. VEPSO is a novel, co-evolutionary multi-objective variant of the popular particle swarm optimization (PSO) algorithm. In the current work, a modified version of the VEPSO algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, the stacking sequence (the orientation of the layers), and the thickness of each layer. Classical lamination theory is used to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based criterion, the maximum-stress criterion, and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial, and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. (C) 2007 Elsevier Ltd. All rights reserved.
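As a rough illustration of the VEPSO idea (not the authors' implementation; the objectives, allowed ply angles, and constants below are all illustrative stand-ins), one swarm per objective can be run, with each swarm's velocity update pulled toward the other swarm's best particle, and discrete ply angles recovered by snapping continuous positions to the nearest allowed value:

import numpy as np

rng = np.random.default_rng(0)
ANGLES = np.array([0, 45, 90, -45])          # allowed ply orientations (illustrative)
N_PLIES, N_PART, ITERS = 8, 20, 100

def weight(x):  return np.sum(np.abs(x)) / 90 + N_PLIES       # toy objective 1
def cost(x):    return np.sum(x == 45) + 2 * np.sum(x == 90)  # toy objective 2

def to_discrete(p):
    # snap continuous positions to the nearest allowed ply angle
    return ANGLES[np.argmin(np.abs(p[:, :, None] - ANGLES[None, None, :]), axis=2)]

objs = [weight, cost]
pos = [rng.uniform(-90, 90, (N_PART, N_PLIES)) for _ in objs]
vel = [np.zeros((N_PART, N_PLIES)) for _ in objs]
pbest = [p.copy() for p in pos]
pbest_f = [np.array([f(x) for x in to_discrete(p)]) for f, p in zip(objs, pos)]

for _ in range(ITERS):
    gbest = [p[np.argmin(fv)] for p, fv in zip(pbest, pbest_f)]
    for s, f in enumerate(objs):
        other = gbest[1 - s]                  # VEPSO: follow the OTHER swarm's best
        r1, r2 = rng.random((2, N_PART, N_PLIES))
        vel[s] = 0.7 * vel[s] + 1.5 * r1 * (pbest[s] - pos[s]) + 1.5 * r2 * (other - pos[s])
        pos[s] = np.clip(pos[s] + vel[s], -90, 90)
        fv = np.array([f(x) for x in to_discrete(pos[s])])
        better = fv < pbest_f[s]
        pbest[s][better], pbest_f[s][better] = pos[s][better], fv[better]

print("best weight:", pbest_f[0].min(), "best cost:", pbest_f[1].min())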
Abstract:
This work addresses the optimum design of a composite box-beam structure subject to strength constraints. Such box-beams are used as the main load-carrying members of helicopter rotor blades. A computationally efficient analytical model of the box-beam is used. Optimal ply orientation angles are sought that maximize the failure margins with respect to the applied loading. The Tsai-Wu-Hahn failure criterion is used to calculate the reserve factor for each wall and ply, and the minimum reserve factor is maximized. Ply angles are used as design variables, and various cases of initial starting designs and loadings are investigated. Both gradient-based and particle swarm optimization (PSO) methods are used. It is found that the optimization approach leads to the design of a box-beam with greatly improved reserve factors, which can be useful for helicopter rotor structures. While PSO yields the globally best designs, the gradient-based method can also be used with appropriate starting designs to obtain useful designs efficiently. (C) 2006 Elsevier Ltd. All rights reserved.
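A minimal sketch of the max-min formulation described above, with a toy stand-in for the Tsai-Wu-Hahn reserve-factor computation (the function, angles, and starting designs are illustrative, not the paper's box-beam model). Because the minimum of several reserve factors is nonsmooth, a derivative-free simplex search from several starting designs is used here in place of a true gradient-based solver:

import numpy as np
from scipy.optimize import minimize

# Toy reserve-factor model: one value per wall/ply combination as a smooth
# function of two ply angles (a stand-in for the Tsai-Wu-Hahn computation).
def reserve_factors(theta):
    t = np.radians(theta)
    return np.array([1.5 + np.cos(2 * t[0]) * np.sin(t[1]),
                     1.2 + np.sin(2 * t[1]),
                     1.8 - 0.5 * np.cos(t[0] - t[1])])

# Maximize the minimum reserve factor == minimize its negative.
obj = lambda theta: -reserve_factors(theta).min()

starts = [np.array([0.0, 0.0]), np.array([45.0, -45.0]), np.array([90.0, 30.0])]
best = min((minimize(obj, x0, method="Nelder-Mead") for x0 in starts),
           key=lambda r: r.fun)
print("optimal ply angles:", best.x, "min reserve factor:", -best.fun)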
Abstract:
This paper evaluates methods of multiclass support vector machines (SVMs) for effective use in distance relay coordination. It also describes a strategy of supportive systems to aid the conventional protection philosophy in situations where protection systems have maloperated and/or information is missing, and to provide selective and secure coordination. SVMs have considerable potential as zone classifiers for distance relay coordination. This typically requires a multiclass SVM classifier to effectively learn the underlying relationship between the reach of the different zones and the apparent impedance trajectory during a fault. Several methods have been proposed for multiclass classification, in which several binary SVM classifiers are typically combined. Some authors have extended binary SVM classification to a one-step single optimization operation that considers all classes at once. In this paper, the one-step multiclass classification, one-against-all, and one-against-one multiclass methods are compared with respect to accuracy, number of iterations, number of support vectors, training time, and testing time. The performance analysis of these three methods is presented on three data sets belonging to the training and testing patterns of three supportive systems for a region and part of a network, an equivalent 526-bus system of the practical Indian Western grid.
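For readers who want to reproduce the style of comparison, a minimal scikit-learn sketch is given below; the synthetic features are a stand-in for the relay zone patterns, not the 526-bus data set:

import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.svm import SVC, LinearSVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for zone-classification patterns (e.g., R-X trajectory features).
X, y = make_classification(n_samples=600, n_features=4, n_informative=3,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

methods = [("one-against-all", OneVsRestClassifier(SVC(kernel="rbf"))),
           ("one-against-one", OneVsOneClassifier(SVC(kernel="rbf"))),
           ("one-step (Crammer-Singer)", LinearSVC(multi_class="crammer_singer"))]

for name, clf in methods:
    t0 = time.perf_counter()
    clf.fit(Xtr, ytr)
    print(name, "accuracy:", accuracy_score(yte, clf.predict(Xte)),
          "train time: %.3fs" % (time.perf_counter() - t0))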
Abstract:
In this paper, a novel genetic algorithm is developed that generates artificial chromosomes with probability control to solve machine scheduling problems. Generating artificial chromosomes for a genetic algorithm (ACGA) is closely related to evolutionary algorithms based on probabilistic models (EAPM). The artificial chromosomes are generated by a probability model that extracts gene information from the current population. ACGA is a hybrid algorithm because it integrates both conventional genetic operators and a probability model. The ACGA proposed in this paper further employs the "evaporation concept" from ant colony optimization (ACO) to solve the permutation flowshop problem. The evaporation concept is used to reduce the effect of past experience and to explore new alternative solutions. We propose three different methods for computing the probability of evaporation, which is applied as soon as a job is assigned to a position in the permutation flowshop problem. Experimental results show that ACGA with the evaporation concept outperforms several algorithms in the literature.
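A minimal sketch of the artificial-chromosome idea with evaporation (one plausible reading of the abstract, not the authors' code; the evaporation rate and population size are illustrative):

import numpy as np

rng = np.random.default_rng(1)
N_JOBS, POP = 6, 30

def build_prob_model(elite):
    # P[j, k]: frequency of job j appearing at position k in the elite set.
    P = np.zeros((N_JOBS, N_JOBS))
    for perm in elite:
        for k, j in enumerate(perm):
            P[j, k] += 1
    return P / len(elite)

def sample_artificial(P, rho=0.3):
    # Build a permutation position by position; as soon as a job is assigned,
    # "evaporate" its probability mass (ACO-style) to encourage exploration.
    P = P.copy() + 1e-6
    perm = []
    for k in range(N_JOBS):
        p = P[:, k].copy()
        p[perm] = 0.0                 # already-placed jobs are excluded
        p /= p.sum()
        j = rng.choice(N_JOBS, p=p)
        perm.append(j)
        P[j, :] *= (1.0 - rho)        # evaporation right after assignment
    return perm

elite = [list(rng.permutation(N_JOBS)) for _ in range(POP // 2)]
print(sample_artificial(build_prob_model(elite)))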
Abstract:
It is well known that in the time-domain acquisition of NMR data, the signal-to-noise ratio (S/N) improves as the square root of the number of transients accumulated. However, the amplitude of the measured signal varies during the time of detection, with a functional form that depends on the coherence detected. Matching the time spent signal averaging to the expected amplitude of the observed signal should therefore also improve the detected S/N. Following this reasoning, Barna et al. (J. Magn. Reson. 75, 384, 1987) demonstrated the utility of exponential sampling in one- and two-dimensional NMR, using maximum-entropy methods to analyze the data. It is proposed here that for two-dimensional experiments the exponential sampling be replaced by exponential averaging. The data thus collected can be analyzed by standard fast-Fourier-transform routines. We demonstrate the utility of exponential averaging in 2D NOESY spectra of the protein ubiquitin, in which an enhanced S/N is observed. It is also shown that the method allows acquisition of delayed double-quantum-filtered COSY spectra without phase distortion.
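A small numpy illustration of the idea (the decay constant, frequency, and scan budget are all illustrative): the number of transients averaged at each t1 increment follows the expected signal envelope, and the averaged data go straight into a standard FFT:

import numpy as np

T2, DW, N1 = 0.05, 1e-3, 128            # decay (s), dwell time (s), t1 points
t1 = np.arange(N1) * DW
envelope = np.exp(-t1 / T2)              # expected signal amplitude vs. t1

# Exponential averaging: transients per increment follow the envelope.
scans = np.maximum(1, np.round(64 * envelope)).astype(int)

rng = np.random.default_rng(2)
fid = np.empty(N1, complex)
for i in range(N1):
    signal = envelope[i] * np.exp(2j * np.pi * 100.0 * t1[i])
    noise = rng.standard_normal(scans[i]) + 1j * rng.standard_normal(scans[i])
    fid[i] = signal + noise.mean()       # noise drops as 1/sqrt(scans[i])

spectrum = np.fft.fftshift(np.fft.fft(fid))   # standard FFT processing applies
print("total transients used:", scans.sum())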
Abstract:
Abundant quantities of fly ash are produced by thermal power plants situated all over the world. Many applications of fly ash depend upon its pozzolanic reactivity. This reactivity depends upon many factors, including lime content. Many fly ashes show marked improvement with the addition of lime. However, for every fly ash there is an optimum lime content for its maximum reactivity, and there is no well-established simple test to determine it. In this paper, an attempt is made to use simple physical and physicochemical tests to determine the optimum lime content. The principle behind the use of a pH test, a liquid limit test, and a free swell index test to determine the optimum lime content is explained. All the methods predict nearly the same optimum lime content and correlate well with that determined by the strength test.
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and they report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as rebates to the agents. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions; the goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear program (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set, and under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extension of the proposed mechanisms to situations where the valuation functions are not known to the central planner is also discussed. Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods that achieve near budget balance by first collecting payments according to the celebrated VCG mechanism and then returning as much of the collected money as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system; we demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
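A minimal sketch of the constraint-sampling step (the functions a(theta) and b(theta) below are illustrative stand-ins for the mechanism's algebra, not the paper's formulation): each sampled type vector contributes one half-plane constraint, and an epigraph variable t turns the worst-case objective into an LP:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
DIM, N_SAMPLES = 5, 2000      # rebate coefficients; sampled type vectors

# Each sampled type vector theta yields one half-plane constraint:
#   a(theta) . c - b(theta) <= t        (epigraph form of the worst case)
theta = rng.uniform(0, 1, (N_SAMPLES, DIM))
A_theta = np.sin(3 * theta)               # a(theta), toy stand-in
b_theta = theta.sum(axis=1) / DIM         # b(theta), toy stand-in

# Decision variables: x = (c_1..c_DIM, t); minimize t.
cvec = np.zeros(DIM + 1)
cvec[-1] = 1.0
A_ub = np.hstack([A_theta, -np.ones((N_SAMPLES, 1))])
res = linprog(cvec, A_ub=A_ub, b_ub=b_theta,
              bounds=[(0, None)] * DIM + [(None, None)])
print("worst sampled value over", N_SAMPLES, "constraints:", res.fun)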
Abstract:
We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms. Next, we present detailed numerical experiments on a problem of admission control with dependent service times. We consider two different settings involving parameter sets of moderate and large size, respectively. In the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and with the equal allocation algorithm. Note to Practitioners: Even though SPSA and SFA were devised in the literature for continuous optimization problems, our results indicate that they can be powerful techniques even when adapted to discrete optimization settings. OCBA is widely recognized as one of the most powerful methods for discrete optimization when the parameter sets are of small or moderate size. In a setting involving a parameter set of size 100, we observe that when the computing budget is small, SPSA and OCBA show similar performance and are better than SFA; however, as the computing budget is increased, SPSA and SFA show better performance than OCBA. Both our algorithms also show good performance when the parameter set has a size of 10^8. SFA shows the best overall performance. Unlike most other DPSO algorithms in the literature, an advantage of our algorithms is that they are easily implementable regardless of the size of the parameter sets and show good performance in both scenarios.
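A minimal SPSA sketch with a randomized projection onto the discrete parameter grid (one plausible reading of the random-projection device, not the paper's algorithm; the cost function and constants are illustrative):

import numpy as np

rng = np.random.default_rng(4)
LOW, HIGH = 0, 20            # discrete parameter grid per dimension
DIM, ITERS = 3, 200

def simulate_cost(theta_int):
    # Stand-in for a long-run average cost estimated by simulation (noisy).
    target = np.array([5, 12, 7])
    return np.sum((theta_int - target) ** 2) + rng.normal(0, 2.0)

def project(theta):
    # Random projection onto the discrete grid: round up or down
    # with probability given by the fractional part.
    lo = np.floor(theta)
    frac = theta - lo
    return np.clip(lo + (rng.random(DIM) < frac), LOW, HIGH).astype(int)

theta = np.full(DIM, 10.0)   # continuous iterate; projected before each simulation
for n in range(1, ITERS + 1):
    a, c = 2.0 / n, 1.5 / n ** 0.25
    delta = rng.choice([-1.0, 1.0], DIM)          # SPSA perturbation directions
    g = (simulate_cost(project(theta + c * delta))
         - simulate_cost(project(theta - c * delta))) / (2 * c) * (1 / delta)
    theta = np.clip(theta - a * g, LOW, HIGH)

print("estimated optimum:", project(theta))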
Abstract:
In this paper, we consider robust joint designs of the relay precoder and destination receive filters in a nonregenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a MIMO-relay node. The channel state information (CSI) available at the relay node is assumed to be imperfect, and we consider robust designs for two models of CSI error. The first is a stochastic error (SE) model, in which the probability distribution of the CSI error is Gaussian; this model is applicable when the imperfect CSI is mainly due to errors in channel estimation. For this model, we propose robust minimum sum mean square error (SMSE), MSE-balancing, and relay transmit power minimizing precoder designs. The second is a norm-bounded error (NBE) model, in which the CSI error is specified by an uncertainty set; this model is applicable when the CSI error is dominated by quantization errors. In this case, we adopt a worst-case design approach and propose a robust precoder design that minimizes the total relay transmit power under constraints on the MSEs at the destination nodes. We show that the proposed robust design problems can be reformulated as convex optimization problems that can be solved efficiently using interior-point methods. We demonstrate the robust performance of the proposed designs through simulations.
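To illustrate the convex-reformulation workflow (not the paper's SMSE or NBE formulation; the channels, dimensions, and MSE surrogate are toy assumptions), a power-minimization problem with per-destination MSE constraints can be posed directly in cvxpy and handed to an interior-point solver:

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
Nr, K = 4, 2                       # relay antennas, source-destination pairs
H = rng.standard_normal((Nr, K))   # nominal source -> relay channels (toy)
G = rng.standard_normal((K, Nr))   # nominal relay -> destination channels (toy)

W = cp.Variable((Nr, Nr))          # relay precoder
mse_cap, sigma = 0.5, 0.1

constraints = []
for k in range(K):
    e_k = np.zeros(K)
    e_k[k] = 1.0
    # Toy per-destination MSE surrogate: ||g_k W H - e_k||^2 plus a noise term.
    constraints.append(cp.sum_squares(G[k, :] @ W @ H - e_k)
                       + sigma ** 2 * cp.sum_squares(G[k, :] @ W) <= mse_cap)

prob = cp.Problem(cp.Minimize(cp.sum_squares(W @ H) + sigma ** 2 * cp.sum_squares(W)),
                  constraints)
prob.solve()
print("min relay transmit power (toy):", prob.value)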
Abstract:
This paper investigates a new glowworm swarm optimization (GSO) clustering algorithm for hierarchical splitting and merging in automatic multi-spectral satellite image classification (the land cover mapping problem). Amongst the multiple benefits and uses of remote sensing, one of the most important has been its use in solving the land cover mapping problem, and image classification forms the core of its solution. No single classifier can satisfactorily classify all the basic land cover classes of an urban region. In unsupervised classification methods, the automatic generation of clusters to classify a huge database is not exploited to its full potential. The proposed methodology searches for the best possible number of clusters and their centers using GSO. Using these clusters, classification is performed by merging based on a parametric method (the k-means technique). The performance of the proposed unsupervised classification technique is evaluated on a Landsat 7 Thematic Mapper image. Results are evaluated in terms of classification efficiency: individual, average, and overall.
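A compact sketch of the GSO mechanics used for locating cluster centers (luciferin update, neighbor selection, movement toward brighter glowworms); the data, constants, and density-based fitness below are illustrative, not the satellite imagery pipeline:

import numpy as np

rng = np.random.default_rng(6)
data = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in ((0, 0), (3, 3), (0, 4))])

N, ITERS, step = 40, 80, 0.05
rho, gamma, r_s = 0.4, 0.6, 2.0
glow = rng.uniform(data.min(0), data.max(0), (N, 2))   # candidate cluster centers
luci = np.ones(N) * 5.0                                # luciferin levels

def fitness(x):
    # higher where the data are dense (toy kernel density)
    return np.exp(-np.sum((data - x) ** 2, axis=1) / 0.5).sum()

for _ in range(ITERS):
    luci = (1 - rho) * luci + gamma * np.array([fitness(x) for x in glow])
    for i in range(N):
        d = np.linalg.norm(glow - glow[i], axis=1)
        nbrs = np.where((d < r_s) & (luci > luci[i]))[0]
        if len(nbrs):
            j = rng.choice(nbrs)                       # move toward a brighter glowworm
            glow[i] += step * (glow[j] - glow[i]) / (d[j] + 1e-9)

print("glowworms have converged near the density peaks (cluster centers)")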
Abstract:
Stirred tank bioreactors, employed in the production of a variety of biologically active chemicals, are often operated in batch, fed-batch, and continuous modes of operation. The optimal design of a bioreactor depends on the kinetics of the biological process, as well as the performance criteria (yield, productivity, etc.) under consideration. In this paper, a general framework is proposed for addressing the two key issues in the optimal design of a bioreactor, namely (i) the choice of the best operating mode and (ii) the corresponding flow rate trajectories. The optimal bioreactor design problem is formulated with the initial conditions and the inlet and outlet flow rate trajectories as decision variables, and with more than one performance criterion (yield, productivity, etc.) as objective functions. A computational methodology based on a genetic algorithm approach is developed to solve this challenging multiobjective optimization problem with multiple decision variables. The applicability of the algorithm is illustrated by solving two challenging problems from the bioreactor optimization literature.
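A minimal sketch of a multiobjective GA of this flavor, using Pareto-dominance ranking for selection (the two objectives below are toy stand-ins for yield and productivity, not the paper's bioreactor model):

import numpy as np

rng = np.random.default_rng(7)
POP, GENS = 40, 60

def objectives(x):
    # Toy stand-ins for (yield, productivity) as functions of a discretized
    # feed-rate profile x; both are to be maximized.
    return np.array([np.sin(x).sum(), -np.abs(x - 1.2).sum() + len(x)])

def dominates(a, b):
    return np.all(a >= b) and np.any(a > b)

pop = rng.uniform(0, 2, (POP, 5))        # 5 decision variables per individual
for _ in range(GENS):
    f = [objectives(x) for x in pop]
    # rank = number of individuals that dominate you (0 => on the current front)
    rank = np.array([sum(dominates(f[j], f[i]) for j in range(POP))
                     for i in range(POP)])
    parents = pop[np.argsort(rank)[:POP // 2]]
    children = parents + rng.normal(0, 0.1, parents.shape)   # Gaussian mutation
    pop = np.vstack([parents, np.clip(children, 0, 2)])

front = [x for x in pop
         if not any(dominates(objectives(y), objectives(x)) for y in pop)]
print("Pareto-front size:", len(front))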
Abstract:
Present-day power systems are growing in size and complexity of operation, with interconnections to neighboring systems, the introduction of large generating units, EHV 400/765 kV AC transmission systems, HVDC systems, and more sophisticated control devices such as FACTS. Planning and operational studies require suitable modeling of all components in the power system, as increasing numbers of HVDC systems and FACTS devices of different types are incorporated. This paper presents reactive power optimization with three objectives: minimizing the sum of the squares of the voltage deviations (ve) of the load buses, minimizing the sum of the squares of the voltage stability L-indices of the load buses (ΣL2), and minimizing the system real power loss (Ploss). The proposed methods have been tested on a typical sample system. Results for an Indian 96-bus equivalent system including an HVDC terminal and a UPFC, under normal and contingency conditions, are presented.
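Written out, the three objectives take the following standard forms (notation assumed here, not quoted from the paper; V_i are load-bus voltage magnitudes, L_i the voltage stability indices, and the loss expression is the usual line-flow formula over lines k connecting buses m and n):

\min F_1 = \sum_{i \in N_L} \left( V_i - V_i^{\mathrm{ref}} \right)^2, \qquad
\min F_2 = \sum_{i \in N_L} L_i^2, \qquad
\min F_3 = P_{\mathrm{loss}} = \sum_{k \in \mathrm{lines}} G_k \left[ V_m^2 + V_n^2 - 2 V_m V_n \cos(\delta_m - \delta_n) \right]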
Abstract:
Background: India has the third largest HIV-1 epidemic, with 2.4 million infected individuals. Molecular epidemiological analysis has identified the predominance of HIV-1 subtype C (HIV-1C). However, previous reports have been limited by sample size and uneven geographical distribution, and the introduction of HIV-1C into India remains uncertain due to this lack of structured studies. To fill the gap, we characterised the distribution pattern of HIV-1 subtypes in India based on data collected from nationwide clinical cohorts between 2007 and 2011. We also reconstructed the time to the most recent common ancestor (tMRCA) of the predominant HIV-1C strains. Methodology/Principal Findings: Blood samples were collected from 168 HIV-1 seropositive subjects from 7 different states. HIV-1 subtypes were determined using two or three genes (gag, pol, and env) and several methods. A Bayesian coalescent-based approach was used to reconstruct the time of introduction and the population growth patterns of Indian HIV-1C. For the first time, a high prevalence (10%) of unique recombinant forms (BC and A1C) was observed when two or three genes were used instead of one (p<0.01; p = 0.02, respectively). The tMRCA of Indian HIV-1C, estimated using the three viral genes, ranged from 1967 (gag) to 1974 (env). The pol-gene analysis was considered to provide the most reliable estimate [1971 (95% CI: 1965-1976)]. The population growth pattern revealed an initial slow growth phase in the mid-1970s, an exponential phase through the 1980s, and a stationary phase since the early 1990s. Conclusions/Significance: The Indian HIV-1C epidemic originated around 40 years ago from a single or a few genetically related African lineages and has since largely evolved independently. The effective population size in the country has been broadly stable since the 1990s. The evolving viral epidemic, as indicated by the increase in recombinant strains, warrants continued molecular surveillance to guide efficient disease intervention strategies.
Abstract:
In recent times, computational algorithms inspired by biological processes and evolution have gained much popularity for solving science and engineering problems. These algorithms are broadly classified into evolutionary computation and swarm intelligence algorithms, derived by analogy with natural evolution and biological activities. They include genetic algorithms, genetic programming, differential evolution, particle swarm optimization, ant colony optimization, artificial neural networks, etc. Being random-search techniques, the algorithms use heuristics to guide the search towards the optimal solution and to speed up convergence to global optima. Bio-inspired methods have several attractive features and advantages compared to conventional optimization solvers. They also offer a combined simulation and optimization environment for solving real-world problems that are hard to define in simple expressions. These biologically inspired methods have provided novel ways of problem-solving for practical problems in traffic routing, networking, games, industry, robotics, economics, and mechanical, chemical, electrical, civil, water resources, and other fields. This article discusses the key features and development of bio-inspired computational algorithms, and their scope for application in science and engineering fields.
Abstract:
Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using model-based data-resolution matrix characteristics. Methods: The data-resolution matrix is computed from the sensitivity matrix and the regularization scheme used in the reconstruction procedure, by matching the predicted data with the actual data. The diagonal values of the data-resolution matrix show the importance of a particular measurement, and the magnitude of the off-diagonal entries shows the dependence among measurements. Based on the closeness of the diagonal value magnitude to the off-diagonal entries, the choice of independent measurements is made. The reconstruction results obtained using all measurements were compared to those obtained using only the independent measurements, in both numerical and experimental phantom cases. A traditional singular value analysis was also performed for comparison with the proposed method. Results: The results indicate that choosing only independent measurements based on data-resolution matrix characteristics for the image reconstruction does not significantly compromise the reconstructed image quality and in turn reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) were chosen at random, the reconstruction results had poor quality with major boundary artifacts. The number of independent measurements obtained using the data-resolution matrix analysis is much higher than that obtained using singular value analysis. Conclusions: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics in the data, resulting in a universal framework for characterizing and optimizing a given data-collection strategy. (C) 2012 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4736820]
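A small numpy sketch of the data-resolution matrix computation and the independence test described above (Tikhonov regularization is assumed; the Jacobian and the specific selection rule are illustrative):

import numpy as np

rng = np.random.default_rng(8)
M, P, lam = 60, 40, 1e-2            # measurements, parameters, regularization
J = rng.standard_normal((M, P))     # sensitivity (Jacobian) matrix, toy

# With Tikhonov regularization, the predicted data are A @ measured data, where
# A = J (J^T J + lam I)^{-1} J^T is the data-resolution matrix.
A = J @ np.linalg.solve(J.T @ J + lam * np.eye(P), J.T)

diag = np.diag(A)
off = np.abs(A - np.diag(diag)).max(axis=1)   # strongest dependence per row
independent = np.where(diag > off)[0]          # diagonal dominates off-diagonal
print(f"{len(independent)} of {M} measurements kept as independent")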