895 results for Genetic Algorithms, Adaptation, Internet Computing
Abstract:
We consider the often-studied problem of sorting, for a parallel computer. Given an input array distributed evenly over p processors, the task is to compute the sorted output array, also distributed over the p processors. Many existing algorithms take the approach of approximately load-balancing the output, leaving each processor with Θ(n/p) elements. However, in many cases, approximate load-balancing leads to inefficiencies in both the sorting itself and in further uses of the data after sorting. We provide a deterministic parallel sorting algorithm that uses parallel selection to produce any output distribution exactly, particularly one that is perfectly load-balanced. Furthermore, when using a comparison sort, this algorithm is 1-optimal in both computation and communication. We provide an empirical study that illustrates the efficiency of exact data splitting, and shows an improvement over two sample sort algorithms.
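The exact-splitting step can be illustrated serially. Below is a minimal sketch, with a heap-based selection standing in for the paper's parallel selection (the function names and structure are illustrative, not the authors' implementation): it selects the order statistics that cut the union of p sorted runs into perfectly balanced blocks.

```python
import heapq

def kth_smallest_across_runs(runs, k):
    """Return the k-th smallest element (0-indexed) over several sorted runs.
    A serial, heap-based stand-in for the parallel selection step."""
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    for _ in range(k):                      # discard the k smallest elements
        _, i, j = heapq.heappop(heap)
        if j + 1 < len(runs[i]):
            heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
    return heap[0][0]

def exact_splitters(runs, p):
    """Splitter values that cut the union of the runs into p equal blocks,
    giving each of the p processors exactly floor/ceil(n/p) elements."""
    n = sum(len(r) for r in runs)
    return [kth_smallest_across_runs(runs, n * t // p) for t in range(1, p)]
```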
Abstract:
We compare a broad range of optimal product line design methods. The comparisons take advantage of recent advances that make it possible to identify the optimal solution to problems that are too large for complete enumeration. Several of the methods perform surprisingly well, including Simulated Annealing, Product-Swapping and Genetic Algorithms. The Product-Swapping heuristic is remarkable for its simplicity. The performance of this heuristic suggests that the optimal product line design problem may be far easier to solve in practice than indicated by complexity theory.
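The abstract does not spell out the Product-Swapping heuristic; the sketch below assumes the common formulation in the product line design literature, in which one product at a time is swapped against outside candidates and improvements are kept. The `score` function and all parameters are hypothetical placeholders.

```python
import random

def product_swap(candidates, line_size, score, iters=1000, seed=0):
    """Hypothetical product-swapping local search: repeatedly replace one
    product in the line with an outside candidate and keep improvements.
    `score` maps a tuple of products to expected profit or share."""
    rng = random.Random(seed)
    line = rng.sample(candidates, line_size)
    best = score(tuple(line))
    for _ in range(iters):
        i = rng.randrange(line_size)
        swap_in = rng.choice([c for c in candidates if c not in line])
        trial = line[:i] + [swap_in] + line[i + 1:]
        s = score(tuple(trial))
        if s > best:
            line, best = trial, s
    return line, best
```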
Abstract:
This project aims to identify which concepts of health, disease, epidemiology, and risk apply to companies in the oil and natural gas extraction sector in Colombia. Given the low predictive power of traditional financial analyses and their insufficiency for investment and long-term decision making, together with their failure to consider variables such as risk and future expectations, there is a need to draw on different perspectives and integrative models. This observation is particularly pertinent to the oil and natural gas extraction sector because of the growing foreign investment it has attracted: US$2,862 million in 2010, more than ten times its 2003 value. Multidimensional models could therefore be developed, based on the concepts of financial health, epidemiology, and statistics. The term health, as adopted in the business sector, proves useful and conceptually coherent, revealing the presence of different interacting and interconnected subsystems or factors. It should also be noted that a multidimensional (multi-stage) model must take risk into account, and epidemiological analysis has proved useful for determining risk and integrating it into the system together with related concepts such as the hazard ratio and relative risk. These questions will be analyzed through a theoretical-conceptual study, complementing a previous study, as a contribution to the corporate finance project of the Management research line.
Abstract:
Society's growing concern and awareness regarding the environment, and the legislation and regulations generated as a consequence, are driving the modification of existing production processes in the chemical industry. Initial configurations must be modified to achieve greater process integration. To this end, different methodologies have been created and developed to ease the task of those responsible for redesign. The development of such a methodology and complementary tools is the main objective of the research presented here, focused in particular on the development and application of a process optimization methodology. This methodology is applied to existing process configurations and aims to find new configurations that are viable with respect to the stated optimization objectives. The methodology has two distinct parts: the first is based on a commercial process simulator, and the second is the optimization technique itself. The methodology begins with the construction of a suitably validated simulation that reproduces the existing process, in this case a non-integrated paper mill producing coated printing paper. The optimization technique then searches the domain of possible solutions for the best results that fully satisfy the stated objectives. This technique is based on genetic algorithms as the search tool, together with a subprogram based on mathematical programming techniques for computing results. A small number of solutions are finally selected and used to modify the existing simulation by fixing the redistribution of the process streams. The results of the process simulation ultimately determine the technical viability of each proposed reconfiguration. In the optimization process, the objectives are defined in an objective function within the optimization technique; this function governs the search. The objective function may be a single objective or a combination of objectives. In the present case, the function seeks to minimize water consumption and to minimize the loss of raw material. The optimization is carried out under constraints to reach this combined objective in the form of a compromise solution. Applying this methodology has produced interesting results that represent improved water-circuit closure and raw material savings, without compromising process operability or paper quality.
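A minimal sketch of the optimization loop described above, assuming a hypothetical `simulate` function standing in for the validated process simulation; the 10x weighting of fibre loss against water use is illustrative only.

```python
import random

def combined_objective(flows, simulate):
    """Compromise between water use and raw material (fibre) loss.
    `simulate` is a placeholder for the validated process simulation and
    returns (water, fibre_loss, feasible)."""
    water, fibre_loss, feasible = simulate(flows)
    return water + 10.0 * fibre_loss if feasible else float("inf")

def genetic_search(simulate, n_flows, pop_size=40, gens=200, seed=1):
    """GA over normalised stream-redistribution fractions in [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_flows)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: combined_objective(ind, simulate))
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_flows)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_flows)               # Gaussian point mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: combined_objective(ind, simulate))
```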
Abstract:
The availability of a network strongly depends on the frequency of service outages and the recovery time for each outage. Network resources can be lost through complete or partial failure of hardware and software components, power outages, scheduled maintenance such as software and hardware upgrades, operational errors such as configuration mistakes, and acts of nature such as floods, tornadoes and earthquakes. This paper proposes a practical approach to enhancing QoS routing by providing alternative or repair paths in the event of a breakage of a working path. The proposed scheme guarantees that every Protected Node (PN) is connected to a multi-repair path such that no further failure or breakage of single or double repair paths can cause any simultaneous loss of connectivity between an ingress node and an egress node. Links to be protected in an MPLS network are predefined, and an LSP request involves the establishment of a working path. The use of multi-protection paths permits the formation of numerous protection paths, allowing greater flexibility. Our analysis examines several methods, including single, double and multi-repair routes and the prioritization of signals along the protected paths, with the aims of improving Quality of Service (QoS) and throughput and reducing protection path placement cost, delay, congestion and collisions.
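As an illustration of the multi-repair-path idea, the sketch below uses NetworkX to compute a working path together with node-disjoint repair paths, so that no single failure on the working path can simultaneously break a repair path. Bandwidth, cost and signal prioritization are deliberately omitted; this is not the paper's placement algorithm.

```python
import networkx as nx

def working_and_repair_paths(G, ingress, egress, n_repair=2):
    """Working path plus node-disjoint repair paths between an ingress and
    an egress node. Because the paths share no intermediate nodes, a
    failure on the working path cannot also break a repair path."""
    paths = sorted(nx.node_disjoint_paths(G, ingress, egress), key=len)
    working = paths[0]                  # shortest disjoint path as working
    repairs = paths[1:1 + n_repair]     # may be fewer if the topology
    return working, repairs             # offers less disjointness
```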
Abstract:
Biological crossover occurs during the early stages of meiosis. During this process the chromosomes undergoing crossover are synapsed together at a number of homologous sequence sections, and it is within such synapsed sections that crossover occurs. The SVLC (Synapsing Variable Length Crossover) algorithm recurrently synapses homologous genetic sequences together in order of length. The genomes are considered to be flexible, with crossover only being permitted within the synapsed sections. Consequently, common sequences are automatically preserved and only the genetic differences are exchanged, independent of the length of those differences. In addition to providing a rationale for variable length crossover, the algorithm provides a genotypic similarity metric for variable length genomes, enabling standard niche formation techniques to be utilised. On a simple variable length test problem the SVLC algorithm outperforms current variable length crossover techniques.
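A hedged sketch of the synapsing idea: here the standard-library SequenceMatcher stands in for SVLC's length-ordered synapsis, aligning the two variable-length genomes on their common blocks, preserving those blocks in both children, and exchanging only the differing regions.

```python
from difflib import SequenceMatcher
import random

def svlc_like_crossover(a, b, rng=random.Random(0)):
    """Synapsing-style crossover sketch for two variable-length genomes
    (strings or lists). Common blocks are preserved; the differing
    regions between them are randomly exchanged between the parents."""
    blocks = SequenceMatcher(None, a, b).get_matching_blocks()
    child1, child2 = [], []
    ia = ib = 0
    for m in blocks:                         # final dummy block covers tails
        da, db = a[ia:m.a], b[ib:m.b]        # differing regions
        if rng.random() < 0.5:
            da, db = db, da                  # exchange the difference
        common = a[m.a:m.a + m.size]         # synapsed (preserved) section
        child1 += list(da) + list(common)
        child2 += list(db) + list(common)
        ia, ib = m.a + m.size, m.b + m.size
    return child1, child2
```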
Abstract:
Differential Evolution (DE) is a tool for efficient optimisation that belongs to the class of evolutionary algorithms, which includes Evolution Strategies and Genetic Algorithms. DE algorithms work well when the population covers the entire search space, and they have been shown to be effective on a large range of classical optimisation problems. However, an undesirable behaviour arises when all the members of the population lie in the basin of attraction of a local optimum (a local minimum or a local maximum): the population cannot escape from it. This paper proposes a modification of the standard mechanisms of the DE algorithm that changes the exploration vs. exploitation balance in order to improve its behaviour.
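For reference, the standard DE/rand/1/bin mechanism that the paper modifies can be sketched as follows (a generic implementation, not the authors' modified algorithm).

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
    """Standard DE/rand/1/bin minimising f over box bounds [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < CR          # binomial crossover mask
            cross[rng.integers(dim)] = True       # at least one gene mutates
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                      # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()
```

For example, `de_rand_1_bin(lambda v: float((v ** 2).sum()), [(-5.0, 5.0)] * 10)` minimises a 10-dimensional sphere function.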
Abstract:
Asynchronous Optical Sampling has the potential to improve the signal-to-noise ratio in THz transient spectrometry. The design of an inexpensive control scheme for synchronising two femtosecond pulse frequency comb generators at an offset frequency of 20 kHz is discussed. The suitability of a range of signal processing schemes adopted from the Systems Identification and Control Theory community for further processing of recorded THz transients in the time and frequency domains is outlined. Finally, possibilities for femtosecond pulse shaping using genetic algorithms are mentioned.
Abstract:
A program is provided to determine structural parameters of atoms in or adsorbed on surfaces by refining atomistic models against experimentally determined data generated by the normal incidence X-ray standing wave (NIXSW) technique. The method employs a combination of Differential Evolution genetic algorithms and steepest descent line minimisations to provide a fast, reliable and user-friendly tool for experimentalists to interpret complex multidimensional NIXSW data sets.
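The two-stage strategy can be sketched with SciPy; here `chi2` is a hypothetical misfit between the atomistic model and the measured NIXSW data, and BFGS stands in for the steepest-descent line minimisation.

```python
from scipy.optimize import differential_evolution, minimize

def refine_structure(chi2, bounds):
    """Two-stage fit: a global Differential Evolution search over the
    structural parameter bounds, followed by local gradient-based
    refinement of the best candidate found."""
    coarse = differential_evolution(chi2, bounds, seed=0)   # global search
    fine = minimize(chi2, coarse.x, method="BFGS")          # local polish
    return fine.x, fine.fun
```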
Abstract:
The authors describe a learning classifier system (LCS) which employs genetic algorithms (GA) for adaptive online diagnosis of power transmission network faults. The system monitors switchgear indications produced by a transmission network, reporting fault diagnoses on any patterns indicative of faulted components. The system evaluates the accuracy of diagnoses via a fault simulator developed by National Grid Co. and adapts to reflect the current network topology by use of genetic algorithms.
Abstract:
Clustering is a difficult task: there is no single cluster definition, and the data can have more than one underlying structure. Pareto-based multi-objective genetic algorithms (e.g., MOCK, Multi-Objective Clustering with automatic K-determination, and MOCLE, Multi-Objective Clustering Ensemble) were proposed to tackle these problems. However, the output of such algorithms can often contain a large number of partitions, making it difficult for an expert to analyze all of them manually. To deal with this problem, we present two selection strategies, based on the corrected Rand index, for choosing a subset of solutions. To test them, they are applied to the sets of solutions produced by MOCK and MOCLE on several datasets. The study was also extended to select a reduced set of partitions from the initial population of MOCLE. These analyses show that both versions of the proposed selection strategy are very effective: they can significantly reduce the number of solutions while keeping the quality and the diversity of the partitions in the original set of solutions.
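One plausible reading of a corrected-Rand-based selection strategy is the greedy filter sketched below; the threshold and the greedy order are assumptions, not the paper's exact procedure.

```python
from sklearn.metrics import adjusted_rand_score

def select_diverse_partitions(partitions, max_ari=0.8):
    """Greedy corrected-Rand filter: keep a partition (a list of cluster
    labels) only if its ARI to every already-kept partition is below
    max_ari, shrinking the set while preserving diversity."""
    kept = []
    for labels in partitions:
        if all(adjusted_rand_score(labels, k) < max_ari for k in kept):
            kept.append(labels)
    return kept
```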
Abstract:
In this paper, we present an algorithm for cluster analysis that integrates aspects of cluster ensembles and multi-objective clustering. The algorithm is based on a Pareto-based multi-objective genetic algorithm with a special crossover operator, and it uses clustering validation measures as objective functions. The proposed algorithm can deal with data sets presenting different types of clusters, without requiring expertise in cluster analysis. Its result is a concise set of partitions representing alternative trade-offs among the objective functions. We compare the results obtained with our algorithm, in the context of gene expression data sets, to those achieved with Multi-Objective Clustering with automatic K-determination (MOCK), the algorithm most closely related to ours.
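The Pareto-based selection underlying such algorithms can be sketched as follows, assuming each partition is scored by two validation measures to be minimised; the measures themselves are placeholders.

```python
def pareto_front(partitions, objectives):
    """Non-dominated partitions under validation measures to be minimised.
    `objectives` maps a partition to a tuple, e.g. (deviation, connectivity)."""
    scored = [(objectives(p), p) for p in partitions]
    front = []
    for fs, p in scored:
        # p is dominated if some other score is no worse everywhere
        # and strictly better somewhere (i.e. a different tuple).
        dominated = any(all(g <= f for g, f in zip(gs, fs)) and gs != fs
                        for gs, _ in scored)
        if not dominated:
            front.append(p)
    return front
```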
Abstract:
Support vector machines (SVMs) were originally formulated for the solution of binary classification problems. In multiclass problems, a decomposition approach is often employed, in which the multiclass problem is divided into multiple binary subproblems, whose results are combined. Generally, the performance of SVM classifiers is affected by the selection of values for their parameters. This paper investigates the use of genetic algorithms (GAs) to tune the parameters of the binary SVMs in common multiclass decompositions. The developed GA may search for a set of parameter values common to all binary classifiers or for differentiated values for each binary classifier.
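A minimal sketch of the "common parameter values" case, assuming an RBF kernel and scikit-learn's SVC, whose one-vs-one decomposition handles the binary subproblems internally; the population size, parameter ranges and operators are illustrative.

```python
import random
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ga_tune_svm(X, y, pop_size=20, gens=15, seed=0):
    """GA search over a shared (log10 C, log10 gamma) pair for an RBF SVC."""
    rng = random.Random(seed)

    def fitness(ind):
        return cross_val_score(SVC(C=10 ** ind[0], gamma=10 ** ind[1]),
                               X, y, cv=3).mean()

    pop = [[rng.uniform(-2, 3), rng.uniform(-4, 1)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)              # best first
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [a[0], b[1]]                         # one-point crossover
            child[rng.randrange(2)] += rng.gauss(0, 0.3) # Gaussian mutation
            children.append(child)
        pop = elite + children
    best = max(pop, key=fitness)
    return 10 ** best[0], 10 ** best[1]
```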
Abstract:
Cannabinoid compounds have been widely employed because of their medicinal and psychotropic properties. These compounds are isolated from Cannabis sativa (marijuana) and are used in several medical treatments, such as glaucoma, nausea associated with chemotherapy, pain, and many other conditions. More recently, their use as an appetite stimulant has been indicated in patients with cachexia or AIDS. In this work, the influence of several molecular descriptors on the psychoactivity of 50 cannabinoid compounds is analyzed with the aim of obtaining a model able to predict the psychoactivity of new cannabinoids. For this purpose, descriptors were first selected using Fisher's weight, the correlation matrix among the calculated variables, and principal component analysis. From these analyses, the following descriptors were considered most relevant: E(LUMO) (energy of the lowest unoccupied molecular orbital), Log P (logarithm of the partition coefficient), VC4 (volume of the substituent at the C4 position) and LP1 (Lovasz-Pelikan index, a molecular branching index). Next, two neural network models were used to construct a model for classifying new cannabinoid compounds: a multi-layer perceptron trained with the back-propagation algorithm, and a Kohonen network. The results obtained from both networks were compared and showed that both techniques discriminate psychoactive from psychoinactive compounds with a high percentage of correctness. However, the Kohonen network was superior to the multi-layer perceptron.
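A hedged sketch of the multi-layer perceptron branch using scikit-learn and placeholder descriptor data (the paper's actual descriptor values are not reproduced here); a Kohonen map would require a third-party SOM library such as MiniSom.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical descriptor matrix: columns E(LUMO), Log P, VC4, LP1,
# one row per cannabinoid; y = 1 for psychoactive, 0 for psychoinactive.
rng = np.random.default_rng(0)
X = rng.random((50, 4))                 # placeholder values, not real data
y = rng.integers(0, 2, 50)

mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
print(cross_val_score(mlp, X, y, cv=5).mean())   # classification accuracy
```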