849 results for Real genetic algorithm
Abstract:
This paper is concerned with the selection of inputs for classification models based on ratios of measured quantities. For this purpose, all possible ratios are built from the quantities involved and variable selection techniques are used to choose a convenient subset of ratios. In this context, two selection techniques are proposed: one based on a pre-selection procedure and another based on a genetic algorithm. In an example involving the financial distress prediction of companies, the models obtained from ratios selected by the proposed techniques compare favorably to a model using ratios usually found in the financial distress literature.
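A minimal sketch of the approach described above, assuming a bit-string encoding and a scikit-learn classifier (illustrative names, not the authors' implementation): every pairwise ratio of the measured quantities becomes a candidate input, and a small genetic algorithm searches for the subset with the best cross-validated accuracy.

```python
# Sketch: GA-based selection of ratio features for a classifier.
# Hypothetical example, not the paper's implementation; X holds raw measured
# quantities (n_samples x n_vars) and y holds class labels.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def build_ratios(X):
    """All pairwise ratios of the measured quantities."""
    cols = [X[:, i] / X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

def fitness(mask, R, y):
    """Cross-validated accuracy of a model using the selected ratios."""
    if mask.sum() == 0:
        return 0.0
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, R[:, mask.astype(bool)], y, cv=5).mean()

def ga_select(R, y, pop_size=30, n_gen=40, p_mut=0.02):
    n = R.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(n_gen):
        fit = np.array([fitness(ind, R, y) for ind in pop])
        # Binary tournament selection
        parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: fit[i])
                       for _ in range(pop_size)]]
        # Uniform crossover between consecutive parents
        children = parents.copy()
        for a, b in zip(range(0, pop_size - 1, 2), range(1, pop_size, 2)):
            swap = rng.random(n) < 0.5
            children[a, swap], children[b, swap] = parents[b, swap], parents[a, swap]
        # Bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        children[flips] = 1 - children[flips]
        pop = children
    fit = np.array([fitness(ind, R, y) for ind in pop])
    return pop[fit.argmax()]
```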
Abstract:
The synapsing variable-length crossover (SVLC) algorithm provides a biologically inspired method for performing meaningful crossover between variable-length genomes. In addition to providing a rationale for variable-length crossover, it also provides a genotypic similarity metric for variable-length genomes, enabling standard niche-formation techniques to be used with variable-length genomes. Unlike other variable-length crossover techniques, which treat genomes as rigid, inflexible arrays and select some or all of the crossover points at random, the SVLC algorithm considers genomes to be flexible and chooses non-random crossover points based on the common parental sequence similarity. The SVLC algorithm recurrently "glues" or synapses homogeneous genetic subsequences together. This is done in such a way that common parental sequences are automatically preserved in the offspring, with only the genetic differences being exchanged or removed, independent of the length of those differences. In a variable-length test problem, the SVLC algorithm compares favorably with current variable-length crossover techniques. The variable-length approach is further advocated by demonstrating how a variable-length genetic algorithm (GA) can obtain a high-fitness solution in fewer iterations than a traditional fixed-length GA in a two-dimensional vector approximation task.
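A rough illustration of the underlying idea (not the published SVLC algorithm itself), using Python's difflib to locate the common parental subsequences: the shared segments are kept in place ("synapsed") and only the differing segments are exchanged, whatever their lengths.

```python
# Illustrative sketch of similarity-preserving variable-length crossover
# (not the published SVLC algorithm): common subsequences of the two parents
# are kept in place, and only the differing segments are exchanged.
import random
from difflib import SequenceMatcher

def similarity_crossover(parent_a, parent_b, p_swap=0.5, seed=None):
    rng = random.Random(seed)
    blocks = SequenceMatcher(a=parent_a, b=parent_b,
                             autojunk=False).get_matching_blocks()
    child_a, child_b = [], []
    ia = ib = 0
    for m in blocks:  # last block is a zero-length sentinel
        gap_a, gap_b = parent_a[ia:m.a], parent_b[ib:m.b]
        if rng.random() < p_swap:          # exchange the differing segments
            gap_a, gap_b = gap_b, gap_a
        child_a += gap_a + parent_a[m.a:m.a + m.size]
        child_b += gap_b + parent_b[m.b:m.b + m.size]
        ia, ib = m.a + m.size, m.b + m.size
    return child_a, child_b

# The shared subsequence [1, 2, 3, 9] survives in both offspring:
a = [1, 2, 7, 7, 3, 9]
b = [1, 2, 3, 5, 5, 9]
print(similarity_crossover(a, b, seed=1))
```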
Abstract:
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 gC m−2 year−1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots) would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
Abstract:
The Richards equation has been widely used for simulating soil water movement. However, the take-up of agro-hydrological models using the basic theory of soil water flow for optimizing irrigation, fertilizer and pesticide practices is still low. This is partly due to the difficulties in obtaining accurate values for soil hydraulic properties at a field scale. Here, we use an inverse technique, driven by a robust micro genetic algorithm, to deduce the effective soil hydraulic properties from the changes in the distribution of soil water with depth in a fallow field, measured over a long period under natural rainfall and evaporation. A new objective function was constructed from the soil water contents at different depths and the soil water content at field capacity. The deduced soil water retention curve was approximately parallel to, but higher than, that derived from published pedo-transfer functions for a given soil pressure head. The water contents calculated from the deduced soil hydraulic properties were in good agreement with the measured values. The reliability of the deduced soil hydraulic properties was tested by reproducing data measured in an independent experiment on the same soil cropped with leek. The calculation of root water uptake took account of both soil water potential and root density distribution. Results show that the predicted soil water contents at various depths agree well with the measurements, indicating that the inverse analysis is an effective and reliable approach for estimating soil hydraulic properties, and thus permits accurate simulation of soil water dynamics in both cropped and fallow soils in the field.
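A sketch of how such an inverse estimation might look, assuming a micro genetic algorithm with a tiny population and restart on convergence; `simulate_water_content` is a hypothetical stand-in for the Richards-equation model, not the authors' code, and the parameter bounds are illustrative.

```python
# Sketch: micro-GA for inverse estimation of soil hydraulic parameters
# (illustrative assumptions throughout; not the paper's implementation).
import numpy as np

rng = np.random.default_rng(42)
bounds = np.array([[0.30, 0.50],    # saturated water content theta_s
                   [0.01, 0.10],    # residual water content theta_r
                   [0.5,  3.0]])    # retention-curve shape parameter

def simulate_water_content(params, depths):
    """Hypothetical stand-in for the soil water flow model."""
    theta_s, theta_r, n = params
    return theta_r + (theta_s - theta_r) / (1.0 + (0.05 * depths) ** n)

def objective(params, measured, depths):
    """Misfit between simulated and measured water contents."""
    return np.sum((simulate_water_content(params, depths) - measured) ** 2)

def micro_ga(measured, depths, pop_size=5, n_gen=200):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, len(bounds)))
    for _ in range(n_gen):
        cost = np.array([objective(p, measured, depths) for p in pop])
        best = pop[cost.argmin()]
        # Micro-GA restart: keep the elite, re-seed the rest once converged
        if pop.std(axis=0).max() < 1e-3:
            pop = rng.uniform(bounds[:, 0], bounds[:, 1],
                              size=(pop_size, len(bounds)))
            pop[0] = best
            continue
        # Otherwise recombine the elite with random population members
        children = [best]
        for _ in range(pop_size - 1):
            mate = pop[rng.integers(pop_size)]
            mask = rng.random(len(bounds)) < 0.5
            children.append(np.where(mask, best, mate))
        pop = np.array(children)
    return best

depths = np.array([10.0, 20.0, 40.0, 60.0])                    # cm
measured = simulate_water_content([0.42, 0.05, 1.8], depths)   # synthetic data
print(micro_ga(measured, depths))
```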
Abstract:
Background: The validity of ensemble averaging of event-related potential (ERP) data has been questioned, because it assumes that the ERP is identical across trials. Thus, there is a need for preliminary testing for cluster structure in the data. New method: We propose a complete pipeline for the cluster analysis of ERP data. To increase the signal-to-noise ratio (SNR) of the raw single trials, we used a denoising method based on Empirical Mode Decomposition (EMD). Next, we used a bootstrap-based method to determine the number of clusters, through a measure called the Stability Index (SI). We then used a clustering algorithm based on a Genetic Algorithm (GA) to define initial cluster centroids for subsequent k-means clustering. Finally, we visualised the clustering results through a scheme based on Principal Component Analysis (PCA). Results: After validating the pipeline on simulated data, we tested it on data from two experiments – a P300 speller paradigm on a single subject and a language processing study on 25 subjects. Results revealed evidence for the existence of 6 clusters in one experimental condition from the language processing study. Further, a two-way chi-square test revealed an influence of subject on cluster membership.
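A minimal sketch of the GA-seeded k-means step, assuming numpy/scipy and a simple within-cluster sum-of-squares fitness (illustrative only, not the authors' pipeline):

```python
# Sketch: GA-chosen initial centroids for k-means (illustrative, not the
# authors' pipeline). Each individual is a set of k candidate centroids.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)

def within_cluster_sse(centroids, X):
    """Sum of squared distances from each point to its nearest centroid."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum()

def ga_init_kmeans(X, k, pop_size=20, n_gen=50, sigma=0.1):
    n, d = X.shape
    # Each individual: k centroids drawn from the data points
    pop = np.stack([X[rng.choice(n, k, replace=False)] for _ in range(pop_size)])
    for _ in range(n_gen):
        sse = np.array([within_cluster_sse(ind, X) for ind in pop])
        elite = pop[np.argsort(sse)[: pop_size // 2]]       # truncation selection
        kids = []
        for _ in range(pop_size - len(elite)):
            pa, pb = elite[rng.integers(len(elite), size=2)]
            mask = rng.random(k) < 0.5                       # centroid-wise crossover
            child = np.where(mask[:, None], pa, pb) + rng.normal(0, sigma, (k, d))
            kids.append(child)
        pop = np.concatenate([elite, np.stack(kids)])
    best = pop[np.argmin([within_cluster_sse(ind, X) for ind in pop])]
    # Final k-means refinement starting from the GA's best centroid set
    centroids, labels = kmeans2(X, best, minit="matrix")
    return centroids, labels
```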
Abstract:
In this paper, we present an algorithm for cluster analysis that integrates aspects of cluster ensembles and multi-objective clustering. The algorithm is based on a Pareto-based multi-objective genetic algorithm, with a special crossover operator, which uses clustering validation measures as objective functions. The proposed algorithm can deal with data sets presenting different types of clusters, without requiring expertise in cluster analysis. Its result is a concise set of partitions representing alternative trade-offs among the objective functions. We compare the results obtained with our algorithm, in the context of gene expression data sets, to those achieved with Multi-Objective Clustering with automatic K-determination (MOCK), the algorithm most closely related to ours.
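The core ingredients (clustering validation measures as objectives and Pareto-based selection) can be sketched as follows; the two measures and the dominance filter below are an assumed illustration, not the published algorithm.

```python
# Sketch of the Pareto idea: score candidate partitions with two clustering
# validation measures and keep only the non-dominated trade-offs.
import numpy as np

def compactness(X, labels):
    """Sum of squared distances to each cluster's centroid (lower is better)."""
    total = 0.0
    for c in np.unique(labels):
        pts = X[labels == c]
        total += ((pts - pts.mean(axis=0)) ** 2).sum()
    return total

def connectivity(X, labels, n_neighbors=5):
    """Penalty for nearest neighbours placed in different clusters (lower is better)."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :n_neighbors]
    return sum(1.0 / (j + 1)
               for i in range(len(X))
               for j, nb in enumerate(nn[i]) if labels[nb] != labels[i])

def pareto_front(partitions, X):
    """Return the partitions that are non-dominated under both measures."""
    scores = [(compactness(X, p), connectivity(X, p)) for p in partitions]
    front = []
    for i, si in enumerate(scores):
        dominated = any(sj[0] <= si[0] and sj[1] <= si[1] and sj != si
                        for j, sj in enumerate(scores) if j != i)
        if not dominated:
            front.append(partitions[i])
    return front
```

In the multi-objective GA itself, the same dominance relation would drive selection generation after generation; the sketch only shows the final filtering into a concise set of trade-off partitions.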
Abstract:
Over the useful life of a LAN, network downtimes have a negative impact on organizational productivity that is not captured by current Network Topological Design (NTD) formulations. We propose a new approach to LAN topological design that incorporates these productivity losses into the design, minimizing not only the CAPEX but also the expected cost of unproductiveness attributable to network downtimes over a given period of network operation.
Abstract:
This work demonstrates that detuning the fs-laser spectrum from the two-photon absorption band of organic materials can be used to gain further control of two-photon absorption by pulse spectral phase manipulation. We investigate the coherent control of two-photon absorption in imidazole-thiophene core compounds presenting distinct two-photon absorption spectra. The coherent control, performed using pulse phase shaping and a genetic algorithm, exhibited different growth rates for each sample. Such distinct trends were explained by calculating the two-photon absorption probability considering the intrapulse interference mechanism, taking into account the two-photon absorption spectrum of each sample. Our results indicate that tuning the relative position between the nonlinear absorption and the pulse spectrum can be used as a novel strategy to optimize two-photon absorption in broadband molecular systems.
Abstract:
Genetic algorithms are commonly used to solve combinatorial optimization problems. The implementation evolves using genetic operators (crossover, mutation, selection, etc.). However, genetic algorithms, like some other methods, have parameters (population size, probabilities of crossover and mutation) which need to be tuned or chosen. In this paper, our project builds on an existing hybrid genetic algorithm for the multiprocessor scheduling problem. We propose a hybrid Fuzzy-Genetic Algorithm (FLGA) approach to solve the multiprocessor scheduling problem. The algorithm consists of adding a fuzzy logic controller to dynamically control and tune different parameters (probabilities of crossover and mutation), in an attempt to improve the algorithm's performance. For this purpose, we design a fuzzy logic controller based on fuzzy rules to control the probabilities of crossover and mutation. Compared with the Standard Genetic Algorithm (SGA), the results clearly demonstrate that the FLGA method performs significantly better.
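A hedged sketch of the general idea of a fuzzy controller for GA parameters; the inputs, membership functions and rule base below are assumptions chosen for illustration, not the paper's controller.

```python
# Sketch of a fuzzy controller adjusting GA parameters each generation
# (illustrative; rule set and membership functions are assumptions).
# Inputs: normalised population diversity and the relative fitness
# improvement over the last generation.
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_ga_params(diversity, improvement):
    """Return (p_crossover, p_mutation) from a small fuzzy rule base."""
    low_d  = tri(diversity, -0.5, 0.0, 0.5)
    med_d  = tri(diversity,  0.0, 0.5, 1.0)
    high_d = tri(diversity,  0.5, 1.0, 1.5)
    stalled   = tri(improvement, -0.05, 0.00, 0.05)
    improving = tri(improvement,  0.00, 0.05, 0.10)

    # Rule strength -> recommended (p_c, p_m), combined by weighted average.
    rules = [
        (min(low_d, stalled),  (0.9, 0.20)),   # low diversity, no progress: explore
        (min(med_d, stalled),  (0.8, 0.10)),
        (min(high_d, stalled), (0.7, 0.05)),
        (improving,            (0.6, 0.01)),   # making progress: exploit
    ]
    total = sum(w for w, _ in rules) or 1.0
    p_c = sum(w * pc for w, (pc, _) in rules) / total
    p_m = sum(w * pm for w, (_, pm) in rules) / total
    return p_c, p_m

# Example: a diverse but stalled population gets a moderate mutation rate
print(fuzzy_ga_params(diversity=0.8, improvement=0.0))
```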
Abstract:
Nowadays, in the world of mass consumption, there is big demand for distribution centers of ever larger size. Managing such a center is a very complex and difficult task, given the different processes and factors in a typical warehouse, when we want to minimize labor costs. Most of the workers' working time is spent travelling between source and destination points, which causes deadheading. Even if a worker knows the structure of a warehouse well and can therefore find the shortest path between two points, it is still not guaranteed that there will not be long travelling times between the locations of two consecutive tasks. We need optimal assignments between tasks and workers. In the scientific literature, the Generalized Assignment Problem (GAP) is a well-known problem which deals with the assignment of m workers to n tasks subject to several constraints. The primary purpose of my thesis project was to choose a heuristic (genetic algorithm, tabu search or ant colony optimization) to be implemented in SAP Extended Warehouse Management (SAP EWM), with which the assignment between tasks and resources would become more effective. After system analysis I had to realize that, due to different constraints and business demands, only 1:1 assignments are allowed in SAP EWM. Because of that, I had to use a different and simpler approach – instead of the heuristics introduced above – which yielded better assignments in several cases during the test phase. In the thesis I describe in detail the most important questions and problems which emerged during the planning of my optimized assignment method.
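For context, optimal 1:1 assignments between resources and tasks given a travel-time matrix can be computed exactly with the Hungarian algorithm; the sketch below is an assumed illustration of such an assignment, not the SAP EWM implementation described in the thesis.

```python
# Illustrative 1:1 task-to-resource assignment (an assumption, not the
# thesis' EWM implementation): minimise total deadheading travel time
# with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

# travel_time[i, j] = time for resource i to reach the start of task j
travel_time = np.array([
    [12.0,  4.0,  7.0],
    [ 3.0,  9.0,  6.0],
    [ 8.0,  5.0,  2.0],
])

resources, tasks = linear_sum_assignment(travel_time)
for r, t in zip(resources, tasks):
    print(f"resource {r} -> task {t} ({travel_time[r, t]:.0f} min)")
print("total deadheading:", travel_time[resources, tasks].sum(), "min")
```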
Abstract:
Since the last decade, the problem of surface inspection has been receiving great attention from the scientific community; the quality control and maintenance of products are key points in several industrial applications. Railway associations spend a great deal of money to check the railway infrastructure. The railway infrastructure is a particular field in which periodic surface inspection can help the operator to prevent critical situations, and its maintenance and monitoring is an important concern for railway associations. Surface inspection of the railway is therefore also important to the railroad authority for investigating track components, identifying problems and finding out how to solve them. In the railway industry, problems are usually found in railway sleepers, overhead, fasteners, rail head, switching and crossing, and in the ballast section as well.

In this thesis work, I have reviewed research papers based on AI techniques together with NDT techniques, which are able to collect data from the test object without causing any damage. The research works I have reviewed demonstrate that by adopting AI-based systems it is possible to solve almost all of these problems, and that such systems are very reliable and efficient for diagnosing problems in this transportation domain. I have also reviewed solutions provided by different companies based on AI techniques, their products, and some white papers published by those companies. AI-based techniques such as machine vision, stereo vision, laser-based techniques and neural networks are used in most cases to solve problems otherwise handled by railway engineers.

The railway problems handled by these AI-based techniques are addressed through the NDT approach, a very broad, interdisciplinary field that plays a critical role in assuring that structural components and systems perform their function in a reliable and cost-effective fashion. The NDT approach ensures the uniformity, quality and serviceability of materials without causing any damage to the material being tested. These testing methods include visual and optical testing, radiography, magnetic particle testing, ultrasonic testing, penetrant testing, electromechanical testing and acoustic emission testing, among others. The inspection procedure is carried out periodically for the sake of better maintenance, and is performed manually by railway engineers with the aid of AI-based techniques.

The main idea of this thesis work is to demonstrate how the problems of this transportation area can be reduced, based on the work done by different researchers and companies. I have also provided ideas and comments on those works, and proposals to use better inspection methods where needed. The scope of this thesis work is the automatic interpretation of data from NDT, with the goal of detecting flaws accurately and efficiently. AI techniques such as neural networks, machine vision, knowledge-based systems and fuzzy logic have been applied to a wide spectrum of problems in this area. A further scope is to provide an insight into possible research methods concerning railway sleeper, fastener, ballast and overhead inspection by automatic interpretation of data. In this thesis work, I have discussed problems which arise in railway sleepers, fasteners, overhead and ballasted track.

For this reason, I have reviewed research papers related to these areas and demonstrated how their systems work and what results they achieve. The advantages of using AI techniques, in contrast with the manual systems that existed previously, are then discussed. This work aims to summarize the findings of a large number of research papers deploying artificial intelligence (AI) techniques for the automatic interpretation of data from nondestructive testing (NDT). Problems in the rail transport domain are mainly discussed in this work. The overall work of this paper concerns the inspection of railway sleepers, fasteners, ballast and overhead.
Abstract:
The context of this report and the IRIDIA laboratory are described in the preface. Evolutionary Robotics and the box-pushing task are presented in the introduction. The building of a test system supporting Evolutionary Robotics experiments is then detailed. This system is made of a robot simulator and a Genetic Algorithm. It is used to explore the possibility of evolving box-pushing behaviours. The bootstrapping problem is explained, and a novel approach for dealing with it is proposed, with results presented. Finally, ideas for extending this approach are presented in the conclusion.
Abstract:
Most water distribution systems (WDS) need rehabilitation due to aging infrastructure, leading to decreasing capacity, increasing leakage and consequently low performance of the WDS. However, an appropriate strategy, including the location and timing of pipeline rehabilitation in a WDS under a limited budget, remains the main challenge and has been addressed frequently by researchers and practitioners. On the other hand, the selection of appropriate rehabilitation techniques and material types is another main issue which has yet to be addressed properly. The latter can affect the environmental impacts of a rehabilitation strategy in meeting the challenges of global warming mitigation and the consequent climate change. This paper presents a multi-objective optimization model for a rehabilitation strategy in WDS addressing the abovementioned criteria, mainly focused on greenhouse gas (GHG) emissions, either directly from fossil fuel and electricity or indirectly from the embodied energy of materials. Thus, the objective functions are to minimise: (1) the total cost of rehabilitation, including capital and operational costs; (2) the leakage amount; (3) GHG emissions. The Pareto-optimal front containing optimal solutions is determined using the Non-dominated Sorting Genetic Algorithm (NSGA-II). Decision variables in this optimisation problem are classified into a number of groups: (1) the percentage proportion of each rehabilitation technique each year; (2) the material types of new pipeline for rehabilitation each year. The rehabilitation techniques used here include replacement, rehabilitation and lining, cleaning, and pipe duplication. The developed model is demonstrated through its application to the Mahalat WDS, located in the central part of Iran. The rehabilitation strategy is analysed for a 40-year planning horizon. A number of conventional techniques for selecting pipes for rehabilitation are also analysed in this study. The results show that the optimal rehabilitation strategy considering GHG emissions successfully saves total expenses and efficiently decreases leakage from the WDS whilst meeting environmental criteria.
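The selection step of NSGA-II rests on non-dominated sorting over the three minimisation objectives (cost, leakage, GHG emissions). A compact sketch of that sorting follows, with made-up objective values standing in for the WDS model outputs.

```python
# Sketch of the non-dominated sorting at the heart of NSGA-II, applied to
# three minimisation objectives (cost, leakage, GHG emissions). Illustrative
# only; the WDS simulation behind the objective values is not shown.
def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objectives):
    """Group solution indices into Pareto fronts (front 0 is the best)."""
    remaining = set(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Each tuple: (total cost, leakage, GHG emissions) for one rehabilitation strategy
candidates = [(100, 5.0, 80), (120, 3.0, 70), (90, 6.0, 95), (130, 3.5, 72)]
print(non_dominated_sort(candidates))   # -> [[0, 1, 2], [3]]
```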
Abstract:
This thesis provides three original contributions to the field of Decision Sciences. The first contribution explores the field of heuristics and biases. New variations of the Cognitive Reflection Test (CRT--a test to measure "the ability or disposition to resist reporting the response that first comes to mind") are provided. The original CRT (S. Frederick [2005] Journal of Economic Perspectives, v. 19:4, pp. 24-42) has items in which the response is immediate--and erroneous. It is shown that by merely varying the numerical parameters of the problems, large deviations in response are found. Not only are the final results affected by the proposed variations, but so is processing fluency. It seems that numbers' magnitudes serve as a cue to activate system-2 type reasoning. The second contribution explores Managerial Algorithmics Theory (M. Moldoveanu [2009] Strategic Management Journal, v. 30, pp. 737-763), an ambitious research program which states that managers display cognitive choices with a "preference towards solving problems of low computational complexity". An empirical test of this hypothesis is conducted, with results showing that this premise is not supported. A number of problems are designed with the intent of testing the predictions of managerial algorithmics against the predictions of cognitive psychology. The results demonstrate (once again) that framing effects profoundly affect choice, and (an original insight) that managers are unable to distinguish computational complexity problem classes. The third contribution explores a new approach to a computationally complex problem in marketing: the shelf space allocation problem (M-H Yang [2001] European Journal of Operational Research, v. 131, pp. 107-118). A new representation for a genetic algorithm is developed, and computational experiments demonstrate its feasibility as a practical solution method. These studies lie at the interface of psychology and economics (bounded rationality and the heuristics and biases programme), strategy, computational complexity, and heuristics for computationally hard problems in management science.
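One simple way to encode a shelf-space allocation for a genetic algorithm is an integer chromosome giving each product a number of facings, repaired to respect the shelf width; this is a hypothetical sketch for illustration, not the representation developed in the thesis.

```python
# Hypothetical chromosome for shelf-space allocation (not the thesis'
# representation): gene g[i] = number of facings given to product i,
# repaired so the total width fits the shelf, with profit as fitness.
import random

widths  = [4, 6, 3, 5, 2]        # facing width of each product (illustrative)
profit  = [7, 9, 4, 8, 3]        # expected profit per facing (illustrative)
shelf_w = 40

def repair(genes):
    """Drop facings from the least profitable products until the shelf fits."""
    genes = genes[:]
    order = sorted(range(len(genes)), key=lambda i: profit[i] / widths[i])
    while sum(g * w for g, w in zip(genes, widths)) > shelf_w:
        for i in order:
            if genes[i] > 0:
                genes[i] -= 1
                break
    return genes

def fitness(genes):
    return sum(g * p for g, p in zip(genes, profit))

def random_individual(rng, max_facings=5):
    return repair([rng.randint(0, max_facings) for _ in widths])

rng = random.Random(0)
pop = [random_individual(rng) for _ in range(10)]   # initial population only;
best = max(pop, key=fitness)                        # crossover/mutation omitted
print(best, fitness(best))
```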