849 results for "Real genetic algorithm"
Abstract:
A large number of catastrophic accidents worldwide have been caused by the instability and failure of anti-dip rock masses in engineering projects such as hydropower stations, mines, and railways. Problems relating to the deformation and failure of anti-dip rock slopes are therefore significant topics in engineering geology research. This dissertation takes the Longpan slope on the Jinsha River as a case study of the deformation mechanism of large-scale anti-dip rock masses and of methods for slope stability analysis. The primary conclusions are as follows. The Dale Reach of the Jinsha River, from Longpan to the mouth of the Chongjiang tributary, is located on the southeastern margin of the Qinghai-Tibet Plateau. The Longpan slope forms the right abutment of the Dale dam; it is only 26 km from Shigu and 18 km from Tiger Leaping Gorge. The regional tectonic structures in this area are complicated and poorly understood. Based on geophysical exploration data (CSAMT and seismic surveys) and engineering geological investigation, the concealed tectonic pattern of the Dale Reach is put forward for the first time in this paper. Owing to the reverse slip of the Longpan fault and the normal left-lateral rotation of the Baihanchang fault, an old faulted valley came into being. The thick riverbed sediments show layered characteristics with different components and corresponding origins, attributed to sedimentary environments consistent with neotectonic movements such as the periodic mountain uplift of the middle Pleistocene. The Longpan slope consists of anti-dip alternating sandstone and slate strata, and the deformed volume is approximately 6.5×10⁷ m³. It was previously interpreted as an ancient landslide or a toppling failure, so that the Dale dam became a vexed question. Through the latest field surveys, displacement monitoring, and analyses of rock mass deformation characteristics, the geological mechanism is shown to be a deep-seated gravitational bending deformation.
The discrete element method (DEM) is then used to simulate the evolution of the deformation process; its results accord very well with the geomechanical pattern analyses. In addition, a strength reduction method based on DEM is introduced to evaluate the factor of safety of the anti-dip rock slope and, from the way the shear yielding zones expand, a progressive shear failure mechanism of large-scale anti-dip rock masses is proposed for the first time. As the dam abutment and a reservoir bank close to the lower dam, the stability of the Longpan slope, and in particular whether it could fail as a high-velocity slide and generate water waves, is a key question for the engineering design. In fact, it is difficult for traditional methods to define a unique slip surface for an anti-dip rock slope. The author takes the shear yielding zones obtained from the discrete element strength reduction calculation as the potential sliding surface and then evaluates the change in excess pore pressure and the factor of safety of the slope caused by rapid drawdown of the ponded water. The dynamic response of the slope under seismic loading is also simulated through DEM numerical modelling, with the following results. Firstly, the main effect of the seismic inertia force is the accumulation of shear stresses. Secondly, the discontinuous structures are crucial to wave transmission. Thirdly, the peak dynamic response of the slope system occurs during the initial period of seismic loading. Lastly, but essentially, earthquake loading brings about deformation and failure of a rock slope through the coupled accumulation of shear stresses and excess pore water pressure. In view of the limitations of existing domestic and international limit equilibrium slope stability software in searching for the critical slip surface of a rock slope, this article proposes a new method, the GA-Sarma algorithm, for rock slope stability analysis.
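The strength reduction loop described above can be sketched independently of any particular numerical engine. The following toy illustrates the idea only: it bisects the reduction factor F at which the reduced strengths c/F and tan φ/F bring a stand-in stability model to limiting equilibrium. The stand-in (an infinite-slope limit-equilibrium check) and all parameter values are assumptions for the demo; the dissertation evaluates stability with a full DEM model instead.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma=26.0, depth=10.0, beta_deg=35.0):
    # Stand-in stability model (infinite-slope limit equilibrium);
    # gamma (kN/m^3), depth (m) and slope angle beta are assumed values.
    beta = math.radians(beta_deg)
    sigma = gamma * depth * math.cos(beta) ** 2              # normal stress on slip plane
    tau = gamma * depth * math.sin(beta) * math.cos(beta)    # driving shear stress
    strength = c + sigma * math.tan(math.radians(phi_deg))
    return strength / tau

def strength_reduction_fos(c, phi_deg, lo=0.1, hi=10.0, tol=1e-4):
    """Bisect the reduction factor F at which the reduced strengths
    (c/F, arctan(tan(phi)/F)) bring the model to limiting equilibrium."""
    def stable(F):
        c_r = c / F
        phi_r = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / F))
        return infinite_slope_fs(c_r, phi_r) > 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if stable(mid):
            lo = mid      # still stable: reduce strengths further
        else:
            hi = mid      # failed: back off
    return 0.5 * (lo + hi)
```

For this simple stand-in the reduction factor at failure coincides with the conventional factor of safety, which is exactly the property the DEM-based method exploits.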
As its name implies, the GA-Sarma algorithm is based on the genetic algorithm and the Sarma method. It assumes the slip surface to be a broken line that tends to extend along discontinuity surfaces, with slice boundaries consistent with rock mass discontinuities such as bedding planes, faults, and cracks. The GA-Sarma algorithm is a novel method suited to global optimisation of the critical slip surface for rock slopes. The topics and contents of this dissertation are closely related to difficulties met in practice, and the main conclusions have been endorsed by the engineering design institute. The research is meaningful and useful for the engineering construction of the Longpan hydropower station.
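The global search that GA-Sarma performs can be illustrated with a minimal genetic algorithm over broken-line slip surfaces. Everything here is a hypothetical stand-in: the chromosome is the list of vertex depths of the polyline, and `toy_factor_of_safety` replaces the real Sarma slice computation with a simple function whose minimiser plays the role of the critical surface.

```python
import random

def toy_factor_of_safety(depths):
    # Hypothetical stand-in for a Sarma factor-of-safety evaluation of a
    # polyline slip surface; the real method computes FS from slice equilibrium.
    target = [4.0, 7.0, 6.0, 3.0]   # assumed critical surface for the demo
    return 1.0 + sum((d - t) ** 2 for d, t in zip(depths, target))

def ga_search(n_vertices=4, pop_size=40, generations=200, seed=1):
    """Minimal GA: each chromosome holds the vertex depths of a broken-line
    slip surface; the fittest surface minimises the factor of safety."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 10.0) for _ in range(n_vertices)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=toy_factor_of_safety)
        survivors = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_vertices)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_vertices)          # Gaussian point mutation
            child[i] += rng.gauss(0.0, 0.5)
            children.append(child)
        pop = survivors + children
    return min(pop, key=toy_factor_of_safety)

best = ga_search()
```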
Abstract:
The receiver function method as applied to studying the discontinuities in the upper mantle is systematically investigated in this paper. Using theoretical receiver functions, the characteristics of the P410s and P660s phases are analyzed and the factors affecting the detection of these phases are discussed. The stability of the receiver function is studied, and a new computational method, RFSSMS (Receiver Function from Stacking and Smoothing of Multiple seismic records at a Single station), is put forward. We built an initial reference velocity model for the medium beneath each of 18 seismic stations, then estimated the depths of the 410-km and 660-km discontinuities (referred to simply as '410' and '660') under the stations using the arrival time differences of P410s and P660s relative to P. We also developed a new receiver function inversion method, PGARFI (Peeling Genetic Algorithm Receiver Function Inversion), to obtain the whole crust and upper mantle velocity structure and the depths of the discontinuities beneath a station. The major work and results can be summarized as follows: (1) From analysis of theoretical receiver functions with different velocity models and different ray parameters, we find that the amplitudes of the P410s and P660s phases decrease with increasing epicentral distance Δ, and the arrival time differences of these phases relative to P become shorter as Δ grows. Multiply refracted and/or reflected waves generated at the Moho and at discontinuities in the crust interfere with the identification of P410s. If a low-velocity zone (LVZ) exists beneath the lithosphere, some multiples caused by the LVZ also interfere with the identification of P410s. Multiples produced by a discontinuity lying near 120 km depth mix with the P410s phase over some range of epicentral distance, and multiples associated with a discontinuity lying near 210 km depth interfere with the identification of P660s.
The epicentral distance range for P410s identification is limited, with an upper limit of 80°. The identification of P660s is not obviously restricted by epicentral distance. The identification of P410s and P660s in theoretical receiver functions is only weakly affected by seismic wave attenuation due to absorption in the medium, provided the Q value is in a reasonable range. (2) The stability of the receiver function was studied using synthetic seismograms with different kinds of noise. The results show that, for seismic records with a high signal-to-noise ratio, high-frequency background noise and low-frequency microseism noise do not influence the computed receiver function, but scattering noise in the medium does affect its stability. When the scattering reaches a certain level, identifying P410s and P660s is difficult in a single receiver function derived from only one seismic record. We therefore propose a new way to calculate the receiver function: with a group of earthquake records, stack the R and Z components separately in the frequency domain, apply weighted smoothing to the stacked Z component, and then compute the complex spectral ratio of R to Z. This method improves the stability of the receiver function and makes the P410s and P660s phases stand out in the receiver function curves. (3) 263 receiver functions were produced from 1364 three-component broadband teleseismic seismograms recorded at 18 stations in China and adjacent areas. The observed arrival time differences of P410s and P660s relative to P were measured from these receiver functions. The initial velocity model for each station was built according to prior research results. The depths of '410' and '660' under each station were obtained by adjusting the depths of these two discontinuities in the initial velocity model until the theoretical arrival time differences of P410s and P660s relative to P matched the observations.
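The stack-and-smooth recipe in (2) can be sketched as a short frequency-domain routine: stack the R and Z spectra over many records, smooth the stacked Z spectrum, and take the complex spectral ratio. The smoothing length and the water-level constant are illustrative assumptions; the actual RFSSMS weighting scheme may differ.

```python
import numpy as np

def rfssms(r_traces, z_traces, water=0.05, smooth_len=5):
    """Stack R and Z components of several records in the frequency domain,
    smooth the stacked Z amplitude spectrum, then deconvolve via the complex
    spectral ratio R/Z with a water-level floor for numerical stability."""
    R = np.mean([np.fft.rfft(r) for r in r_traces], axis=0)   # stacked R spectrum
    Z = np.mean([np.fft.rfft(z) for z in z_traces], axis=0)   # stacked Z spectrum
    kernel = np.ones(smooth_len) / smooth_len
    Zs = np.convolve(np.abs(Z), kernel, mode="same")          # smoothed |Z|
    denom = np.maximum(Zs ** 2, water * (Zs ** 2).max())      # water level
    return np.fft.irfft(R * np.conj(Z) / denom, n=len(r_traces[0]))
```

Deconvolving a delayed impulse against an impulse recovers the delay, which is the sanity check one would run before applying the routine to real records.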
The results show an obvious lateral heterogeneity in the depths of '410' and '660'. The '410' is shallower beneath BJI, XAN, LZH, and ENH but deeper under QIZ and CHTO, with an average depth of 403 km. The average depth of the '660' is 663 km; it is deeper under MDJ and MAJO but shallower under QIZ and HYB. (4) To invert for the whole crust and upper mantle velocity structure, a new inversion method, PGARFI (Peeling Genetic Algorithm Receiver Function Inversion), has been developed here. The medium beneath a station is divided into segments, and the velocity structure is inverted from the receiver function successively from the surface downward. Using PGARFI, the multiple reflection/refraction phases of shallower discontinuities are isolated from the first-order converted phase of a deeper discontinuity. A genetic algorithm with floating-point coding is used in the inversion of each segment, employing arithmetic crossover and non-uniform mutation in the genetic optimization. Ten independent inversions are completed for each segment, and the 50 best velocity models are selected by fitness from all models produced during the inversion process; the final velocity structure of each segment is the weighted average of these 50 models. Before inversion, a wide range of velocity variation with depth and the depth ranges of the main discontinuities are specified from prior knowledge. PGARFI was verified with numerical tests and applied to the inversion of the velocity structure beneath the HIA station down to 700 km depth.
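The two genetic operators named above for floating-point chromosomes, arithmetic crossover and non-uniform mutation, are standard and can be sketched as follows. The shape exponent `b` and the bounds are illustrative; PGARFI's actual parameter settings are not given here.

```python
import random

def arithmetic_crossover(a, b, rng):
    # Child is a random convex combination of the two parents, gene by gene,
    # so every gene stays between the corresponding parental values.
    lam = rng.random()
    return [lam * x + (1.0 - lam) * y for x, y in zip(a, b)]

def nonuniform_mutation(v, lo, hi, gen, max_gen, rng, b=2.0):
    """Non-uniform mutation: the perturbation magnitude shrinks as gen
    approaches max_gen, shifting the search from exploration to fine tuning."""
    out = list(v)
    i = rng.randrange(len(v))
    def delta(span):
        # Fraction of the remaining span, decaying with the generation counter.
        return span * (1.0 - rng.random() ** ((1.0 - gen / max_gen) ** b))
    if rng.random() < 0.5:
        out[i] += delta(hi[i] - out[i])   # move toward the upper bound
    else:
        out[i] -= delta(out[i] - lo[i])   # move toward the lower bound
    return out
```

Both operators keep offspring inside the prescribed velocity bounds, which is why they suit the bounded layer-by-layer inversion described above.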
Abstract:
Reducing the energy consumption of water distribution networks has never had more significance. The greatest energy savings can be obtained by carefully scheduling the operations of pumps. Schedules can be defined either implicitly, in terms of other elements of the network such as tank levels, or explicitly by specifying the time during which each pump is on/off. The traditional representation of explicit schedules is a string of binary values with each bit representing pump on/off status during a particular time interval. In this paper, we formally define and analyze two new explicit representations based on time-controlled triggers, where the maximum number of pump switches is established beforehand and the schedule may contain fewer switches than the maximum. In these representations, a pump schedule is divided into a series of integers with each integer representing the number of hours for which a pump is active/inactive. This reduces the number of potential schedules compared to the binary representation, and allows the algorithm to operate on the feasible region of the search space. We propose evolutionary operators for these two new representations. The new representations and their corresponding operations are compared with the two most-used representations in pump scheduling, namely, binary representation and level-controlled triggers. A detailed statistical analysis of the results indicates which parameters have the greatest effect on the performance of evolutionary algorithms. The empirical results show that an evolutionary algorithm using the proposed representations improves over the results obtained by a recent state-of-the-art Hybrid Genetic Algorithm for pump scheduling using level-controlled triggers.
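The duration-based encoding can be made concrete with a small decoder. The function names and the 24-hour horizon are illustrative assumptions; the paper's exact operator definitions differ in detail.

```python
def durations_to_binary(durations, start_on=False, horizon=24):
    """Decode a time-controlled-trigger chromosome (run lengths, in hours,
    of alternating off/on periods) into the hourly on/off schedule."""
    schedule, state = [], start_on
    for d in durations:
        schedule.extend([int(state)] * d)
        state = not state
    schedule = schedule[:horizon]
    schedule += [schedule[-1]] * (horizon - len(schedule))  # hold last state
    return schedule

def switch_count(schedule):
    # Number of pump switches implied by an hourly on/off schedule.
    return sum(1 for a, b in zip(schedule, schedule[1:]) if a != b)
```

A chromosome with k duration genes can encode at most k - 1 switches, which is how the representation enforces the switch limit by construction instead of penalising violations of it.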
Abstract:
Walker, J. and Wilson, M.S., 'How Useful is Lifelong Evolution for Robotics', Proceedings of the 7th International Conference on Simulation of Adaptive Behaviour, ed. Hallam, B., Floreano, D., Hallam, J., Hayes, G. and Meyer, J.A., pp. 347-348, 2002, MIT Press
Abstract:
M. Galea and Q. Shen. Fuzzy rules from ant-inspired computation. Proceedings of the 13th International Conference on Fuzzy Systems, pages 1691-1696, 2004.
Abstract:
M. Galea and Q. Shen. Simultaneous ant colony optimisation algorithms for learning linguistic fuzzy rules. A. Abraham, C. Grosan and V. Ramos (Eds.), Swarm Intelligence in Data Mining, pages 75-99.
Abstract:
Flikkema, Edwin; Bromley, S.T. (2003) 'A new interatomic potential for nanoscale silica', Chemical Physics Letters 378(5-6), pp. 622-629
Abstract:
Faculty of Physics
Abstract:
In a constantly changing world, humans are adapted to alternate routinely between attending to familiar objects and testing hypotheses about novel ones. We can rapidly learn to recognize and name novel objects without unselectively disrupting our memories of familiar ones. We can notice fine details that differentiate nearly identical objects and generalize across broad classes of dissimilar objects. This chapter describes a class of self-organizing neural network architectures, called ARTMAP, that are capable of fast, yet stable, on-line recognition learning, hypothesis testing, and naming in response to an arbitrary stream of input patterns (Carpenter, Grossberg, Markuzon, Reynolds, and Rosen, 1992; Carpenter, Grossberg, and Reynolds, 1991). The intrinsic stability of ARTMAP allows the system to learn incrementally for an unlimited period of time. System stability properties can be traced to the structure of its learned memories, which encode clusters of attended features into its recognition categories, rather than slow averages of category inputs. The level of detail in the learned attentional focus is determined moment-by-moment, depending on predictive success: an error due to over-generalization automatically focuses attention on additional input details, enough of which are learned in a new recognition category so that the predictive error will not be repeated. An ARTMAP system creates an evolving map between a variable number of learned categories that compress one feature space (e.g., visual features) to learned categories of another feature space (e.g., auditory features). Input vectors can be either binary or analog. Computational properties of the networks enable them to perform significantly better in benchmark studies than alternative machine learning, genetic algorithm, or neural network models. Some of the critical problems that challenge and constrain any such autonomous learning system will next be illustrated.
Design principles that work together to solve these problems are then outlined. These principles are realized in the ARTMAP architecture, which is specified as an algorithm. Finally, ARTMAP dynamics are illustrated by means of a series of benchmark simulations.
Abstract:
A new neural network architecture is introduced for incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors. The architecture, called Fuzzy ARTMAP, achieves a synthesis of fuzzy logic and Adaptive Resonance Theory (ART) neural networks by exploiting a close formal similarity between the computations of fuzzy subsethood and ART category choice, resonance, and learning. Fuzzy ARTMAP also realizes a new Minimax Learning Rule that conjointly minimizes predictive error and maximizes code compression, or generalization. This is achieved by a match tracking process that increases the ART vigilance parameter by the minimum amount needed to correct a predictive error. As a result, the system automatically learns a minimal number of recognition categories, or "hidden units", to meet accuracy criteria. Category proliferation is prevented by normalizing input vectors at a preprocessing stage. A normalization procedure called complement coding leads to a symmetric theory in which the MIN operator (∧) and the MAX operator (∨) of fuzzy logic play complementary roles. Complement coding uses on-cells and off-cells to represent the input pattern, and preserves individual feature amplitudes while normalizing the total on-cell/off-cell vector. Learning is stable because all adaptive weights can only decrease in time. Decreasing weights correspond to increasing sizes of category "boxes". Smaller vigilance values lead to larger category boxes. Improved prediction is achieved by training the system several times using different orderings of the input set. This voting strategy can also be used to assign probability estimates to competing predictions given small, noisy, or incomplete training sets. Four classes of simulations illustrate Fuzzy ARTMAP performance as compared to benchmark backpropagation and genetic algorithm systems. These simulations include (i) finding points inside vs.
outside a circle; (ii) learning to tell two spirals apart; (iii) incremental approximation of a piecewise continuous function; and (iv) a letter recognition database. The Fuzzy ARTMAP system is also compared to Salzberg's NGE system and to Simpson's FMMC system.
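The core Fuzzy ARTMAP computations described above (complement coding, the fuzzy-AND category choice, and fast learning in which weights can only decrease) can be sketched as follows. The vigilance and match-tracking machinery is omitted for brevity, and the parameter values are illustrative.

```python
import numpy as np

def complement_code(x):
    """Complement coding: represent input a by (a, 1-a), so every coded
    vector has the same city-block norm; this prevents category proliferation."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, 1.0 - x])

def category_choice(I, weights, alpha=0.001):
    # Choice function T_j = |I ∧ w_j| / (alpha + |w_j|), with fuzzy AND = min
    # and |.| the city-block norm; the category with largest T_j wins.
    T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
    return int(np.argmax(T))

def resonate_and_learn(I, w, beta=1.0):
    # Fast learning: w_new = beta*(I ∧ w) + (1-beta)*w.
    # Each weight can only decrease, which is why learning is stable.
    return beta * np.minimum(I, w) + (1.0 - beta) * w
```

The decrease-only property of `resonate_and_learn` corresponds to the growing category "boxes" mentioned in the abstract.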
Abstract:
This paper introduces a few architectural concepts from FUELGEN, which generates a "cloud" of reload patterns, like the generator in the FUELCON expert system, but unlike that generator is based on a genetic algorithm. There are indications that FUELGEN may outperform FUELCON and other tools reported in the literature on well-researched case studies, but careful comparisons have yet to be carried out. This paper complements the information in two other recent papers on FUELGEN. Moreover, a sequel project is outlined.
Abstract:
The paper describes the design of an efficient and robust genetic algorithm for the nuclear fuel loading problem (i.e., refuellings: the in-core fuel management problem), a complex combinatorial, multimodal optimisation problem. Evolutionary computation as performed by FUELGEN replaces heuristic search of the kind performed by the FUELCON expert system (CAI 12/4) to solve the same problem. In contrast to the traditional genetic algorithm, which makes strong demands on the representation used and its parameter settings in order to be efficient, recent research on new, robust genetic algorithms shows that representations unsuitable for the traditional genetic algorithm can still be used to good effect with little parameter adjustment. The representation presented here is a simple symbolic one with no linkage attributes, making the genetic algorithm particularly easy to apply to fuel loading problems with differing core structures and assembly inventories. A nonlinear fitness function has been constructed to direct the search efficiently in the presence of the many local optima that result from the constraints on solutions.
Abstract:
Abstract To achieve higher flexibility and to better satisfy actual customer requirements, there is an increasing tendency to develop and deliver software in an incremental fashion. In adopting this process, requirements are delivered in releases and so a decision has to be made on which requirements should be delivered in which release. Three main considerations that need to be taken into account are the technical precedences inherent in the requirements, the typically conflicting priorities as determined by the representative stakeholders, and the balance between required and available effort. The technical precedence constraints relate to situations where one requirement cannot be implemented until another is completed or where one requirement must be implemented in the same increment as another. Stakeholder preferences may be based on the perceived value or urgency of delivered requirements to the different stakeholders involved. The technical priorities and individual stakeholder priorities may be in conflict and difficult to reconcile. This paper provides (i) a method for optimally allocating requirements to increments; (ii) a means of assessing and optimizing the degree to which the ordering conflicts with stakeholder priorities within technical precedence constraints; (iii) a means of balancing required and available resources for all increments; and (iv) an overall method called EVOLVE aimed at the continuous planning of incremental software development. The optimization method used is iterative and essentially based on a genetic algorithm. A set of the most promising candidate solutions is generated to support the final decision. The paper evaluates the proposed approach using a sample project.
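The three constraint classes named above (strict precedence, coupling into the same increment, and effort balance) can be captured by a feasibility check that a genetic search like EVOLVE would apply to each candidate increment plan. The function and data layout are illustrative, not the paper's implementation.

```python
def feasible(assignment, before, coupled, effort, capacity):
    """Check an increment plan, given as {requirement: increment number}."""
    # 'before' pairs (a, b): a must be completed in a strictly earlier increment.
    if any(assignment[a] >= assignment[b] for a, b in before):
        return False
    # 'coupled' pairs (a, b): a and b must land in the same increment.
    if any(assignment[a] != assignment[b] for a, b in coupled):
        return False
    # Required effort in each increment must fit the available capacity.
    load = {}
    for req, inc in assignment.items():
        load[inc] = load.get(inc, 0) + effort[req]
    return all(load[inc] <= capacity.get(inc, 0) for inc in load)
```

A GA over such plans would typically repair or penalise infeasible chromosomes; this predicate is the building block either way.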
Abstract:
Polymer extrusion is a complex process, and the availability of good dynamic models is key to improved system operation. Previous modelling attempts have either failed to capture the non-linearities of the process adequately or proved too complex for control applications. This work presents a novel approach to the problem by modelling extrusion viscosity and pressure, adopting a grey-box modelling technique that combines mechanistic knowledge with empirical data using a genetic algorithm approach. The models are shown to outperform those of a much higher order generated by a conventional black-box technique, while providing insight into the underlying processes at work within the extruder.
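The grey-box idea, a mechanistic law whose empirical parameters are fitted by an evolutionary search, can be sketched as follows. The power-law viscosity form and the simple elitist evolutionary loop are illustrative assumptions, not the extrusion models developed in the work.

```python
import random

def viscosity_model(shear_rate, K, n):
    # Assumed mechanistic (power-law) part of a grey-box model:
    # eta = K * gamma_dot**(n - 1); K and n are fitted from data.
    return K * shear_rate ** (n - 1.0)

def fit_parameters(data, generations=300, pop=30, seed=0):
    """Toy elitist (mu + lambda) evolutionary loop that tunes the empirical
    parameters of the mechanistic law against (shear_rate, viscosity) data."""
    rng = random.Random(seed)
    def err(p):
        K, n = p
        return sum((viscosity_model(g, K, n) - eta) ** 2 for g, eta in data)
    parents = [(rng.uniform(0.1, 10.0), rng.uniform(0.1, 1.0)) for _ in range(pop)]
    for _ in range(generations):
        children = [(max(0.01, K + rng.gauss(0, 0.1)),     # mutate K
                     max(0.01, n + rng.gauss(0, 0.02)))    # mutate n
                    for K, n in parents]
        parents = sorted(parents + children, key=err)[:pop]
    return parents[0]
```

The mechanistic form constrains the search to physically meaningful shapes, which is the grey-box advantage over a free black-box fit of much higher order.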
Abstract:
We have studied the optical spectra of a sample of 31 O- and early B-type stars in the Small Magellanic Cloud, 21 of which are associated with the young massive cluster NGC 346. Stellar parameters are determined using an automated fitting method (Mokiem et al. 2005, A&A, 441, 711), which combines the stellar atmosphere code FASTWIND (Puls et al. 2005, A&A, 435, 669) with the genetic algorithm based optimisation routine PIKAIA (Charbonneau 1995, ApJS, 101, 309). Comparison with predictions of stellar evolution that account for stellar rotation does not yield a unique age, though most stars are best represented by an age of 1-3 Myr. The automated method allows for a detailed determination of the projected rotational velocities. The present-day v_r sin i distribution of the 21 dwarf stars in our sample is consistent with an underlying rotational velocity (v_r) distribution that can be characterised by a mean velocity of about 160-190 km s⁻¹ and an effective half width of 100-150 km s⁻¹. The v_r distribution must include a small percentage of slowly rotating stars. If predictions of the time evolution of the equatorial velocity for massive stars within the environment of the SMC are correct (Maeder & Meynet 2001, A&A, 373, 555), the young age of the cluster implies that this underlying distribution is representative of the initial rotational velocity distribution. The location in the Hertzsprung-Russell diagram of the stars showing helium enrichment is in qualitative agreement with evolutionary tracks accounting for rotation, but not with those ignoring v_r. The mass loss rates of the SMC objects having luminosities log L/L☉ ≳ 5.4 are in excellent agreement with predictions by Vink et al. (2001, A&A, 369, 574). However, for lower luminosity stars the winds are too weak to determine Ṁ accurately from the optical spectrum.
Three targets were classified as Vz stars, two of which are located close to the theoretical zero-age main sequence. Three lower luminosity targets that were not classified as Vz stars are also found to lie near the ZAMS. We argue that this is related to a temperature effect inhibiting cooler stars from displaying the spectral features required for the Vz luminosity class.