42 results for Genetic algorithm
Abstract:
A genetic algorithm (GA) was adopted to optimise the response of a composite laminate subject to impact. Two different impact scenarios are presented: low-velocity impact of a slender laminated strip and high-velocity impact of a rectangular plate by a spherical impactor. In these cases, the GA's objectives were, respectively, to minimise the peak deflection and to minimise penetration by varying the ply angles.
The GA was coupled to the commercial finite-element (FE) package LS-DYNA to perform the impact analyses. A comparison with a commercial optimisation package, LS-OPT, was also made. The results showed that the GA was a robust, capable optimisation tool that produced near-optimal designs, and performed well relative to LS-OPT for the more complex high-velocity impact scenario tested.
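The GA loop described above could be sketched as follows. Since the paper's objective (peak deflection) comes from an LS-DYNA run per candidate layup, a hypothetical analytic surrogate stands in here; the surrogate, the candidate angle set, and all GA settings are illustrative assumptions, not the paper's:

```python
import math
import random

# Hypothetical stand-in for the FE impact analysis: a smooth surrogate that is
# minimised when every ply sits at 45 degrees. The real study evaluated each
# candidate layup with an LS-DYNA simulation instead.
def peak_deflection(ply_angles):
    return sum(1.0 - math.cos(2.0 * math.radians(a - 45)) for a in ply_angles)

ANGLES = [0, 15, 30, 45, 60, 75, 90]   # discrete candidate ply orientations

def evolve(n_plies=8, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(ANGLES) for _ in range(n_plies)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=peak_deflection)           # lower deflection is fitter
        parents = pop[:pop_size // 2]           # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_plies)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # mutation: re-draw one ply angle
                child[rng.randrange(n_plies)] = rng.choice(ANGLES)
            children.append(child)
        pop = parents + children
    return min(pop, key=peak_deflection)

best = evolve()
```

Keeping the parent half of each generation makes the search elitist, so the best layup found never regresses between generations.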
Abstract:
To achieve higher flexibility and to better satisfy actual customer requirements, there is an increasing tendency to develop and deliver software in an incremental fashion. In adopting this process, requirements are delivered in releases and so a decision has to be made on which requirements should be delivered in which release. Three main considerations that need to be taken into account are the technical precedences inherent in the requirements, the typically conflicting priorities as determined by the representative stakeholders, as well as the balance between required and available effort. The technical precedence constraints relate to situations where one requirement cannot be implemented until another is completed or where one requirement is implemented in the same increment as another one. Stakeholder preferences may be based on the perceived value or urgency of delivered requirements to the different stakeholders involved. The technical priorities and individual stakeholder priorities may be in conflict and difficult to reconcile. This paper provides (i) a method for optimally allocating requirements to increments; (ii) a means of assessing and optimizing the degree to which the ordering conflicts with stakeholder priorities within technical precedence constraints; (iii) a means of balancing required and available resources for all increments; and (iv) an overall method called EVOLVE aimed at the continuous planning of incremental software development. The optimization method used is iterative and essentially based on a genetic algorithm. A set of the most promising candidate solutions is generated to support the final decision. The paper evaluates the proposed approach using a sample project.
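The increment-assignment idea behind EVOLVE can be sketched as a small GA whose chromosome assigns each requirement to a release, with penalties for precedence violations and effort overruns. The requirement data, penalty weights, and GA settings below are invented for illustration and are not taken from the paper (EVOLVE's real model also aggregates several stakeholders' priorities; a single value score per requirement stands in for that here):

```python
import random

# Toy instance (hypothetical data): five requirements with effort, stakeholder
# value, precedence pairs (a before b), and a per-release effort capacity.
EFFORT = [4, 3, 5, 2, 6]
VALUE = [9, 7, 8, 3, 5]
PRECEDES = [(0, 2), (1, 4)]   # req 0 before req 2, req 1 before req 4
CAPACITY = 10                 # available effort per release
RELEASES = 3

def fitness(plan):
    score = 0.0
    for req, rel in enumerate(plan):
        score += VALUE[req] / (rel + 1)           # earlier delivery scores higher
    for a, b in PRECEDES:
        if plan[a] > plan[b]:
            score -= 100                          # precedence violation penalty
    for rel in range(RELEASES):
        load = sum(EFFORT[r] for r, x in enumerate(plan) if x == rel)
        if load > CAPACITY:
            score -= 10 * (load - CAPACITY)       # effort-overrun penalty
    return score

def evolve(pop_size=40, generations=80, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randrange(RELEASES) for _ in EFFORT] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]           # elitist selection
        pop = survivors + [
            [g if rng.random() < 0.9 else rng.randrange(RELEASES)  # gene-wise mutation
             for g in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)

plan = evolve()
```

Ranking the final population by fitness, as done implicitly here, mirrors the paper's idea of presenting a set of the most promising candidate solutions to support the final decision.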
Abstract:
Polymer extrusion is a complex process and the availability of good dynamic models is key for improved system operation. Previous modelling attempts have failed adequately to capture the non-linearities of the process or prove too complex for control applications. This work presents a novel approach to the problem by the modelling of extrusion viscosity and pressure, adopting a grey box modelling technique that combines mechanistic knowledge with empirical data using a genetic algorithm approach. The models are shown to outperform those of a much higher order generated by a conventional black box technique while providing insight into the underlying processes at work within the extruder.
Abstract:
We have studied the optical spectra of a sample of 31 O- and early B-type stars in the Small Magellanic Cloud, 21 of which are associated with the young massive cluster NGC 346. Stellar parameters are determined using an automated fitting method (Mokiem et al. 2005, A&A, 441, 711), which combines the stellar atmosphere code FASTWIND (Puls et al. 2005, A&A, 435, 669) with the genetic algorithm based optimisation routine PIKAIA (Charbonneau 1995, ApJS, 101, 309). Comparison with predictions of stellar evolution that account for stellar rotation does not result in a unique age, though most stars are best represented by an age of 1-3 Myr. The automated method allows for a detailed determination of the projected rotational velocities. The present day v_r sin i distribution of the 21 dwarf stars in our sample is consistent with an underlying rotational velocity (v_r) distribution that can be characterised by a mean velocity of about 160-190 km s^-1 and an effective half width of 100-150 km s^-1. The v_r distribution must include a small percentage of slowly rotating stars. If predictions of the time evolution of the equatorial velocity for massive stars within the environment of the SMC are correct (Maeder & Meynet 2001, A&A, 373, 555), the young age of the cluster implies that this underlying distribution is representative for the initial rotational velocity distribution. The location in the Hertzsprung-Russell diagram of the stars showing helium enrichment is in qualitative agreement with evolutionary tracks accounting for rotation, but not for those ignoring v_r. The mass loss rates of the SMC objects having luminosities of log L_*/L_☉ ≳ 5.4 are in excellent agreement with predictions by Vink et al. (2001, A&A, 369, 574). However, for lower luminosity stars the winds are too weak to determine Ṁ accurately from the optical spectrum.
Three targets were classified as Vz stars, two of which are located close to the theoretical zero-age main sequence. Three lower luminosity targets that were not classified as Vz stars are also found to lie near the ZAMS. We argue that this is related to a temperature effect inhibiting cooler stars from displaying the spectral features required for the Vz luminosity class.
Abstract:
Modelling and control of nonlinear dynamical systems is a challenging problem since the dynamics of such systems change over their parameter space. Conventional methodologies for designing nonlinear control laws, such as gain scheduling, are effective because the designer partitions the overall complex control into a number of simpler sub-tasks. This paper describes a new genetic algorithm based method for the design of a modular neural network (MNN) control architecture that learns such partitions of an overall complex control task. Here a chromosome represents both the structure and parameters of an individual neural network in the MNN controller and a hierarchical fuzzy approach is used to select the chromosomes required to accomplish a given control task. This new strategy is applied to the end-point tracking of a single-link flexible manipulator modelled from experimental data. Results show that the MNN controller is simple to design and produces superior performance compared to a single neural network (SNN) controller which is theoretically capable of achieving the desired trajectory.
Abstract:
Nurse rostering is a difficult search problem with many constraints. In the literature, a number of approaches have been investigated including penalty function methods to tackle these constraints within genetic algorithm frameworks. In this paper, we investigate an extension of a previously proposed stochastic ranking method, which has demonstrated superior performance to other constraint handling techniques when tested against a set of constrained optimisation benchmark problems. An initial experiment on nurse rostering problems demonstrates that the stochastic ranking method is better in finding feasible solutions but fails to obtain good results with regard to the objective function. To improve the performance of the algorithm, we hybridise it with a recently proposed simulated annealing hyper-heuristic within a local search and genetic algorithm framework. The hybrid algorithm shows significant improvement over both the genetic algorithm with stochastic ranking and the simulated annealing hyper-heuristic alone. The hybrid algorithm also considerably outperforms the methods in the literature which have the previously best known results.
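The stochastic ranking method referred to above (due to Runarsson and Yao) can be sketched as a probabilistic bubble sort: adjacent individuals are compared on the objective alone with probability PF, and on constraint violation otherwise. The comparison probability and the demo data below are illustrative assumptions, not values from the paper:

```python
import random

# Minimal stochastic-ranking sketch: repeated bubble-sort sweeps in which each
# adjacent pair is compared by objective with probability PF (or whenever both
# are feasible), and by constraint violation otherwise.
PF = 0.45

def stochastic_rank(population, objective, violation, seed=0):
    rng = random.Random(seed)
    ranked = list(population)
    n = len(ranked)
    for _ in range(n):                      # at most n sweeps, as in a bubble sort
        swapped = False
        for i in range(n - 1):
            a, b = ranked[i], ranked[i + 1]
            both_feasible = violation(a) == 0 and violation(b) == 0
            if both_feasible or rng.random() < PF:
                worse = objective(a) > objective(b)
            else:
                worse = violation(a) > violation(b)
            if worse:
                ranked[i], ranked[i + 1] = b, a
                swapped = True
        if not swapped:
            break
    return ranked

# Tiny demo: items are (objective, constraint-violation) pairs.
items = [(5.0, 0.0), (1.0, 3.0), (2.0, 0.0), (4.0, 1.0)]
ranked = stochastic_rank(items, objective=lambda x: x[0], violation=lambda x: x[1])
```

Because the comparison criterion is chosen probabilistically, slightly infeasible but low-objective individuals can survive ranking, which is the balance the method is designed to strike.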
Abstract:
We have studied the optical spectra of a sample of 28 O- and early B-type stars in the Large Magellanic Cloud, 22 of which are associated with the young star-forming region N11. Our observations sample the central associations of LH9 and LH10, and the surrounding regions. Stellar parameters are determined using an automated fitting method (Mokiem et al. 2005), which combines the stellar atmosphere code FASTWIND (Puls et al. 2005) with the genetic algorithm based optimisation routine PIKAIA (Charbonneau 1995). We derive ages of 7.0 ± 1.0 and 3.0 ± 1.0 Myr for LH9 and LH10, respectively. The age difference and relative distance of the associations are consistent with a sequential star formation scenario in which stellar activity in LH9 triggered the formation of LH10. Our sample contains four stars of spectral type O2. From helium and hydrogen line fitting we find the hottest three of these stars to be ~49-54 kK (compared to ~45-46 kK for O3 stars). Detailed determination of the helium mass fraction reveals that the masses of helium enriched dwarfs and giants derived in our spectroscopic analysis are systematically lower than those implied by non-rotating evolutionary tracks. We interpret this as evidence for efficient rotationally enhanced mixing leading to the surfacing of primary helium and to an increase of the stellar luminosity. This result is consistent with findings for SMC stars by Mokiem et al. (2006). For bright giants and supergiants no such mass discrepancy is found; these stars therefore appear to follow tracks of modestly or non-rotating objects. The set of programme stars was sufficiently large to establish the mass loss rates of OB stars in this Z ≈ 1/2 Z☉ environment with sufficient accuracy to allow for a quantitative comparison with similar objects in the Galaxy and the SMC. The mass loss properties are found to be intermediate between those of massive stars in the Galaxy and the SMC.
Comparing the derived modified wind momenta D_mom as a function of luminosity with predictions for LMC metallicities by Vink et al. (2001) yields good agreement in the entire luminosity range that was investigated, i.e. 5.0
Abstract:
Cold-formed steel portal frames are a popular form of construction for low-rise commercial, light industrial and agricultural buildings with spans of up to 20 m. In this article, a real-coded genetic algorithm is described that is used to minimize the cost of the main frame of such buildings. The key decision variables considered in this proposed algorithm consist of both the spacing and pitch of the frame as continuous variables, as well as the discrete section sizes. A routine performing the structural analysis and frame design for cold-formed steel sections is embedded into the genetic algorithm. The results show that the real-coded genetic algorithm effectively handles the mixture of design variables, with high robustness and consistency in achieving the optimum solution. All wind load combinations according to the Australian code are considered in this research. Results for frames with knee braces are also included, for which the optimization achieved even larger cost savings.
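The mixed continuous/discrete encoding could look roughly like this, with blend (BLX-alpha) crossover for the real-coded spacing and pitch genes and a uniform pick for the discrete section index. The bounds, section catalogue, and parameter names are hypothetical, not the paper's data:

```python
import random

# Illustrative design-variable bounds and section catalogue (not from the paper).
SPACING = (3.0, 8.0)      # frame spacing in m, continuous
PITCH = (5.0, 20.0)       # roof pitch in degrees, continuous
SECTIONS = ["C15015", "C20015", "C25019", "C30024"]  # discrete catalogue, indices 0..3

def blx(a, b, lo, hi, rng, alpha=0.5):
    """Blend (BLX-alpha) crossover for one real-coded gene, clamped to bounds."""
    span = abs(a - b)
    child = rng.uniform(min(a, b) - alpha * span, max(a, b) + alpha * span)
    return min(hi, max(lo, child))

def crossover(p1, p2, rng):
    s = blx(p1["spacing"], p2["spacing"], *SPACING, rng)
    p = blx(p1["pitch"], p2["pitch"], *PITCH, rng)
    sec = rng.choice([p1["section"], p2["section"]])  # uniform pick for the discrete gene
    return {"spacing": s, "pitch": p, "section": sec}

rng = random.Random(3)
parent1 = {"spacing": 4.0, "pitch": 10.0, "section": 1}
parent2 = {"spacing": 6.0, "pitch": 12.0, "section": 2}
child = crossover(parent1, parent2, rng)
```

Treating the continuous genes with a real-coded operator rather than binary encoding is what lets a GA like this explore spacing and pitch at full resolution while still selecting discrete catalogue sections.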
Abstract:
Numerous studies have shown that postbuckling stiffened panels may undergo abrupt changes in buckled mode shape when loaded in uniaxial compression. This phenomenon is often referred to as a mode jump or secondary instability. The resulting sudden release of stored energy may initiate damage in vulnerable regions within a structure, for example, at the skin-stiffener interface of a stiffened composite panel. Current design practice is to remove a mode jump by increasing the skin thickness of the postbuckling region. A layup optimization methodology, based on a genetic algorithm, is presented, which delays the onset of secondary instabilities in a composite structure while maintaining a constant weight and subject to a number of design constraints. A finite element model was developed of a stiffened panel's skin bay, which exhibited secondary instabilities. An automated numerical routine extracted information directly from the finite element displacement results to detect the onset of initial buckling and secondary instabilities. This routine was linked to the genetic algorithm to find a revised layup for the skin bay, within appropriate design constraints, to delay the onset of secondary instabilities. The layup optimization methodology resulted in a panel that had a higher buckling load, prebuckling stiffness, and secondary instability load than the baseline design.
Abstract:
Composite materials are finding increasing use on primary aerostructures to meet demanding performance targets while reducing environmental impact. This paper presents a finite-element-based preliminary optimization methodology for postbuckling stiffened panels, which takes into account damage mechanisms that lead to delamination and subsequent failure by stiffener debonding. A global-local modeling approach is adopted in which the boundary conditions on the local model are extracted directly from the global model. The optimization procedure is based on a genetic algorithm that maximizes damage resistance within the postbuckling regime. This routine is linked to a finite element package and the iterative procedure automated. For a given loading condition, the procedure optimized the stacking sequence of several areas of the panel, leading to an evolved panel that displayed superior damage resistance in comparison with nonoptimized designs.
Abstract:
Experimental and numerical studies have shown that the occurrence of abrupt secondary instabilities, or mode-jumps, in a postbuckling stiffened composite panel may initiate structural failure. This study presents an optimisation methodology, using a genetic algorithm and finite element analysis, for the lay-up optimisation of postbuckling composite plates to delay the onset of mode-jump instabilities. A simple and novel approach for detecting mode-jumps is proposed, based on the RMS value of out-of-plane pseudo-velocities at a number of locations distributed over the postbuckling structure.
Abstract:
The scheduling problem in distributed data-intensive computing environments has become an active research topic due to the tremendous growth in grid and cloud computing environments. As an innovative distributed intelligent paradigm, swarm intelligence provides a novel approach to solving these potentially intractable problems. In this paper, we formulate the scheduling problem for work-flow applications with security constraints in distributed data-intensive computing environments and present a novel security constraint model. Several meta-heuristic adaptations to the particle swarm optimization algorithm are introduced to deal with the formulation of efficient schedules. A variable neighborhood particle swarm optimization algorithm is compared with a multi-start particle swarm optimization and a multi-start genetic algorithm. Experimental results illustrate that population-based meta-heuristic approaches usually provide a good balance between global exploration and local exploitation, and demonstrate their feasibility and effectiveness for scheduling work-flow applications.
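A bare-bones particle swarm optimisation loop of the kind these variants build on might look as follows. The sphere function stands in for the workflow-scheduling fitness, which would require a full security-constrained scheduling model; all parameter values are conventional defaults, not taken from the paper:

```python
import random

# Stand-in objective: the sphere function, minimised at the origin.
def sphere(x):
    return sum(v * v for v in x)

def pso(dim=3, swarm=20, iters=100, seed=5, w=0.7, c1=1.5, c2=1.5):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # each particle's best-seen position
    gbest = min(pbest, key=sphere)[:]           # swarm-wide best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
```

The variable-neighborhood and multi-start variants compared in the paper wrap extra restart or neighborhood-switching logic around exactly this kind of inner update loop.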
Abstract:
The design of hot-rolled steel portal frames can be sensitive to serviceability deflection limits. In such cases, in order to reduce frame deflections, practitioners increase the size of the eaves haunch and/or the sizes of the steel sections used for the column and rafter members of the frame. This paper investigates the effect of such deflection limits using a real-coded niching genetic algorithm (RC-NGA) that optimizes frame weight, taking into account both ultimate and serviceability limit states. The results show that the proposed GA is efficient and reliable. Two different sets of serviceability deflection limits are then considered: deflection limits recommended by the Steel Construction Institute (SCI), which are based on control of differential deflections, and other deflection limits based on suggestions by industry. Parametric studies are carried out on frames with spans ranging from 15 m to 50 m and column heights from 5 m to 10 m. It is demonstrated that for a 50 m span frame, use of the SCI recommended deflection limits can lead to frame weights that are around twice those of designs without these limits.
Abstract:
Mathematical modelling has become an essential tool in the design of modern catalytic systems. Emissions legislation is becoming increasingly stringent, and so mathematical models of aftertreatment systems must become more accurate in order to provide confidence that a catalyst will convert pollutants over the required range of conditions.
Automotive catalytic converter models contain several sub-models that represent processes such as mass and heat transfer, and the rates at which the reactions proceed on the surface of the precious metal. Of these sub-models, the prediction of the surface reaction rates is by far the most challenging due to the complexity of the reaction system and the large number of gas species involved. The reaction rate sub-model uses global reaction kinetics to describe the surface reaction rate of the gas species and is based on the Langmuir-Hinshelwood equation further developed by Voltz et al. [1]. The reactions can be modelled using the pre-exponential factors and activation energies of the Arrhenius equations and the inhibition terms.
The reaction kinetic parameters of aftertreatment models are found from experimental data, where a measured light-off curve is compared against a predicted curve produced by a mathematical model. The kinetic parameters are usually manually tuned to minimize the error between the measured and predicted data. This process is typically long, laborious and prone to misinterpretation due to the large number of parameters and the risk of multiple sets of parameters giving acceptable fits. Moreover, the number of coefficients increases greatly with the number of reactions. Therefore, with the growing number of reactions, the task of manually tuning the coefficients is becoming increasingly challenging.
In the presented work, the authors have developed and implemented a multi-objective genetic algorithm to automatically optimize reaction parameters in AxiSuite® [2], a commercial aftertreatment model. The genetic algorithm was developed and expanded from the code presented by Michalewicz et al. [3] and was linked to AxiSuite using the Simulink add-on for MATLAB.
The default kinetic values stored within the AxiSuite model were used to generate a series of light-off curves under rich conditions for a number of gas species, including CO, NO, C3H8 and C3H6. These light-off curves were used to construct an objective function providing a measure of fit for a candidate set of kinetic parameters. The multi-objective genetic algorithm then searched between specified limits to match this objective function. In total, the pre-exponential factors and activation energies of ten reactions were simultaneously optimized.
The results reported here demonstrate that, given accurate experimental data, the optimization algorithm is successful and robust in defining the correct kinetic parameters of a global kinetic model describing aftertreatment processes.
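The tuning idea can be illustrated with a toy version: synthesise a "measured" light-off curve from known Arrhenius parameters, then let a simple GA recover them. The first-order conversion model, parameter ranges, and GA settings here are assumptions for illustration, much simpler than the Langmuir-Hinshelwood kinetics and multi-objective search used in the paper:

```python
import math
import random

R = 8.314                                   # gas constant, J/(mol K)
TEMPS = list(range(400, 701, 25))           # inlet temperatures, K
TRUE_A, TRUE_EA = 1.0e8, 9.0e4              # hypothetical "true" kinetic parameters

def conversion(A, Ea, T, tau=0.1):
    """First-order conversion over residence time tau with Arrhenius rate k."""
    k = A * math.exp(-Ea / (R * T))
    return 1.0 - math.exp(-k * tau)

# Synthetic "measured" light-off curve generated from the true parameters.
measured = [conversion(TRUE_A, TRUE_EA, T) for T in TEMPS]

def error(params):
    A, Ea = params
    return sum((conversion(A, Ea, T) - m) ** 2 for T, m in zip(TEMPS, measured))

def fit(pop_size=40, generations=120, seed=2):
    rng = random.Random(seed)
    def rand_ind():
        return [10 ** rng.uniform(6, 10), rng.uniform(5e4, 1.5e5)]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        keep = pop[:pop_size // 4]           # elitist survivors
        pop = keep + [
            [g * rng.uniform(0.9, 1.1) for g in rng.choice(keep)]  # +/-10% mutation
            for _ in range(pop_size - len(keep))
        ]
    return min(pop, key=error)

best_A, best_Ea = fit()
```

Even in this toy setting the recovered A and Ea trade off along a ridge of near-equivalent fits, which mirrors the paper's observation that multiple parameter sets can give acceptable fits when tuning by hand.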
Abstract:
Various scientific studies have explored the causes of violent behaviour from different perspectives, with psychological tests, in particular, applied to the analysis of crime factors. Relationships between pairs of factors have also been extensively studied, including the link between age and crime. In reality, many factors interact to contribute to criminal behaviour and as such there is a need for greater insight into its complex nature. In this article we analyse violent crime information systems containing data on psychological, environmental and genetic factors. Our approach combines elements of rough set theory with fuzzy logic and particle swarm optimisation to yield an algorithm and methodology that can effectively extract multi-knowledge from information systems. The experimental results show that our approach outperforms alternative genetic algorithm and dynamic reduct-based techniques for reduct identification and has the added advantage of identifying multiple reducts and hence multi-knowledge (rules). Identified rules are consistent with classical statistical analysis of violent crime data and also reveal new insights into the interaction between several factors. As such, the results are helpful in improving our understanding of the factors contributing to violent crime and in highlighting the existence of hidden and intangible relationships between crime factors.