798 results for annealing algorithm
Abstract:
Applying optimization algorithms to PDE-based models of groundwater remediation can greatly reduce remediation cost. However, groundwater remediation analysis requires computationally expensive simulations, so effective parallel optimization could greatly reduce the computational expense. The optimization algorithm used in this research is Parallel Stochastic Radial Basis Function (RBF), which is designed for the global optimization of computationally expensive functions with multiple local optima and does not require derivatives. In each iteration of the algorithm, an RBF surrogate is updated from all previously evaluated points in order to approximate the expensive function. The new RBF surface is then used to generate the next set of points, which are distributed to multiple processors for evaluation. Candidate points for the next function evaluations are selected on two criteria: the estimated function value and the distance from all known points. Algorithms created for serial computing are not necessarily efficient in parallel, so Parallel Stochastic RBF is a different algorithm from its serial ancestor. The algorithm is applied to two Groundwater Superfund Remediation sites: the Umatilla Chemical Depot and the Former Blaine Naval Ammunition Depot. In this study, the adopted formulation treats pumping rates as decision variables in order to remove the plume of contaminated groundwater. Groundwater flow and contaminant transport are simulated with MODFLOW-MT3DMS. For both problems, computation takes a large amount of CPU time, especially for the Blaine problem, which requires nearly fifty minutes of simulation for a single set of decision variables. Thus, an efficient algorithm and powerful computing resources are essential in both cases. The results are discussed in terms of parallel computing metrics, i.e. speedup and efficiency. We find that, with up to 24 parallel processors, the results of the parallel Stochastic RBF algorithm are excellent, with speedup efficiencies close to or exceeding 100%.
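As an illustration of the selection step, the sketch below assumes a Gaussian RBF surrogate, random candidate sampling, and a fixed weight between the two criteria; names and parameters are illustrative, not the authors' implementation (which also cycles the weight and dispatches evaluations to processors):

    import numpy as np

    def rbf_fit(X, y, eps=1.0):
        # Gaussian RBF interpolant over all evaluated points: solve Phi w = y.
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        Phi = np.exp(-(eps * d) ** 2)
        return np.linalg.solve(Phi + 1e-10 * np.eye(len(X)), y)

    def rbf_predict(X, w, centers, eps=1.0):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        return np.exp(-(eps * d) ** 2) @ w

    def select_candidates(X_eval, y_eval, n_cand, p, bounds, weight=0.5, seed=None):
        # Score random candidates by a weighted sum of (normalized) predicted
        # value and (normalized) distance to the nearest evaluated point,
        # then return the p best for evaluation on p processors.
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        cand = rng.uniform(lo, hi, size=(n_cand, X_eval.shape[1]))
        w = rbf_fit(X_eval, y_eval)
        pred = rbf_predict(cand, w, X_eval)
        dist = np.min(np.linalg.norm(cand[:, None, :] - X_eval[None, :, :],
                                     axis=-1), axis=1)
        v = (pred - pred.min()) / (np.ptp(pred) + 1e-12)        # low value is good
        d = 1.0 - (dist - dist.min()) / (np.ptp(dist) + 1e-12)  # far away is good
        score = weight * v + (1.0 - weight) * d
        return cand[np.argsort(score)[:p]]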
Abstract:
This paper describes the formulation of a Multi-objective Pipe Smoothing Genetic Algorithm (MOPSGA) and its application to the least-cost water distribution network design problem. Evolutionary Algorithms have been widely utilised for the optimisation of both theoretical and real-world non-linear optimisation problems, including water system design and maintenance problems. In this work we present a pipe smoothing based approach to the creation and mutation of chromosomes which utilises engineering expertise with a view to increasing the performance of the algorithm whilst promoting engineering feasibility within the population of solutions. MOPSGA is based upon the standard Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and incorporates a modified population initialiser and mutation operator which directly target elements of a network with the aim of increasing network smoothness (in terms of the progression from one diameter to the next) using network element awareness and an elementary heuristic. The pipe smoothing heuristic used in this algorithm is based upon a fundamental principle employed by water system engineers when designing water distribution pipe networks: the diameter of any pipe is never greater than the sum of the diameters of the pipes directly upstream, resulting in a transition from large to small diameters from the source to the extremities of the network. MOPSGA is assessed on a number of water distribution network benchmarks from the literature, including some real-world based, large-scale systems. The performance of MOPSGA is directly compared to that of NSGA-II with regard to solution quality, engineering feasibility (network smoothness) and computational efficiency. MOPSGA is shown to promote both engineering and hydraulic feasibility whilst attaining good infrastructure costs compared to NSGA-II.
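The smoothing principle amounts to a simple feasibility check; the sketch below assumes a network represented as dictionaries mapping each pipe to its diameter and to its directly upstream pipes (an illustrative representation, not the MOPSGA data structures):

    def is_smooth(diameters, upstream):
        # Pipe-smoothing principle: no pipe is wider than the sum of the
        # diameters of the pipes directly upstream of it.
        for pipe, ups in upstream.items():
            if ups and diameters[pipe] > sum(diameters[u] for u in ups):
                return False
        return True

    # Toy network: p1 leaves the source and branches into p2 and p3.
    diameters = {"p1": 300.0, "p2": 200.0, "p3": 150.0}
    upstream = {"p1": [], "p2": ["p1"], "p3": ["p1"]}
    print(is_smooth(diameters, upstream))  # True: 200 <= 300 and 150 <= 300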
Abstract:
Algorithms based on the Simulated Annealing paradigm and its variations are currently in wide use for solving large-scale optimization problems. This popularity results from the extremely simple and apparently universal structure of the algorithms, their general applicability, and their ability to provide solutions very close to the optimum. In the early 1980s, Kirkpatrick and others proposed applying the concepts of annealing (the slow, controlled cooling of solids) to combinatorial optimization. This proposal rests on the strong analogy between the physical annealing process and the solution of large combinatorial optimization problems. Simulated Annealing (SA) is a generic name for the algorithms developed from this proposal. These algorithms combine local search and randomization techniques. The goal of the present work is to provide an understanding of the characteristics of Simulated Annealing and to facilitate the development of algorithms with these characteristics. To that end, we show how Simulated Annealing and its variations are being used to solve combinatorial optimization problems, propose a formalization through an algorithm-development method, and analyze complexity aspects. The development method specifies an abstract program for a sequential Simulated Annealing algorithm, identifies the functions and predicates that constitute the procedures of this abstract program, and establishes axioms that expose the properties these procedures must satisfy. The complexity of Simulated Annealing is analyzed from the abstract program and its main procedures, allowing a generic complexity equation to be established. This generic equation applies to algorithms developed with the proposed method. A correctness proof is presented for the abstract program, and an example code is analyzed against the established axioms. The purpose of establishing axioms is to define a semantics for the algorithm, allowing a developer to analyze the correctness of code specified for an algorithm in light of these axioms. The work was based on an introductory study of combinatorial optimization and problem-solving techniques, a historical survey of the use of Simulated Annealing, the variations around the model, and documented mathematical foundations. This made it possible to identify the essential characteristics of algorithms based on the paradigm, to analyze the aspects related to these characteristics, such as the different ways of realizing a cooling schedule and traversing a solution space, and to build the proposed generic theoretical foundation.
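As a minimal sketch of the sequential scheme formalized here, assuming a Metropolis acceptance rule and a geometric cooling prescription (one of the several prescriptions discussed), with cost and neighbor as problem-specific placeholders:

    import math
    import random

    def simulated_annealing(x0, cost, neighbor, t0=1.0, alpha=0.95,
                            steps_per_t=100, t_min=1e-3):
        # Local search plus randomization: a worse neighbour is accepted with
        # probability exp(-delta / t), which shrinks as the temperature cools.
        x, fx = x0, cost(x0)
        best, fbest = x, fx
        t = t0
        while t > t_min:
            for _ in range(steps_per_t):
                y = neighbor(x)
                fy = cost(y)
                if fy <= fx or random.random() < math.exp((fx - fy) / t):
                    x, fx = y, fy
                    if fx < fbest:
                        best, fbest = x, fx
            t *= alpha  # geometric cooling prescription (one of many)
        return best, fbest

    # Toy usage: minimize x**2 over the integers by unit steps.
    x, fx = simulated_annealing(10, lambda x: x * x,
                                lambda x: x + random.choice((-1, 1)))
    print(x, fx)  # typically 0, 0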
Abstract:
Audio coding is used to compress digital audio signals, thereby reducing the number of bits needed to transmit or store an audio signal. This is useful when network bandwidth or storage capacity is very limited. Audio compression algorithms are based on an encoding and decoding process. In the encoding step, the uncompressed audio signal is transformed into a coded representation, thereby compressing the audio signal. Thereafter, the coded audio signal eventually needs to be restored (e.g. for playback) by decoding: the decoder receives the bitstream and reconverts it into an uncompressed signal. ISO-MPEG is a standard for high-quality, low-bit-rate video and audio coding. The audio part of the standard is composed of algorithms for high-quality, low-bit-rate audio coding, i.e. algorithms that reduce the original bit rate while guaranteeing high quality of the audio signal. The audio coding algorithms consist of MPEG-1 (with three different layers), MPEG-2, MPEG-2 AAC, and MPEG-4. This work presents a study of the MPEG-4 AAC audio coding algorithm. In addition, it presents implementations of the AAC algorithm on different platforms and comparisons among those implementations. The implementations are in C, in Intel Pentium assembly, in C on a DSP processor, and in HDL. Since each implementation has its own application niche, each one is valid as a final solution. Moreover, another purpose of this work is the comparison among these implementations, considering estimated costs, execution time, and the advantages and disadvantages of each one.
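The encode/decode structure described above can be illustrated by a toy transform-coding round trip; this is not the AAC algorithm (which adds a filter bank, a psychoacoustic model, and entropy coding), and the transform, quantization step, and test signal below are illustrative only:

    import numpy as np
    from scipy.fft import dct, idct

    STEP = 0.05  # illustrative quantization step

    def encode(signal):
        # Transform to a frequency-domain representation, then quantize:
        # the integer coefficients are the (toy) coded representation.
        return np.round(dct(signal, norm="ortho") / STEP).astype(int)

    def decode(coeffs):
        # Dequantize and invert the transform to restore the signal.
        return idct(coeffs * STEP, norm="ortho")

    t = np.linspace(0.0, 1.0, 1024, endpoint=False)
    tone = np.sin(2 * np.pi * 100 * t)                   # toy 100 Hz test tone
    print(np.max(np.abs(tone - decode(encode(tone)))))   # small error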
Abstract:
LEÃO, Adriano de Castro; DÓRIA NETO, Adrião Duarte; SOUSA, Maria Bernardete Cordeiro de. New developmental stages for common marmosets (Callithrix jacchus) using mass and age variables obtained by K-means algorithm and self-organizing maps (SOM). Computers in Biology and Medicine, v. 39, p. 853-859, 2009.
Abstract:
The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios or with non-stationary noise. However, such methods have high computational complexity, which directly raises the power consumption of devices that often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption. The strategy is based on the principle that p processors working at lower clock frequencies consume less power than a single processor completing the same work in the same execution time. We derive a strict relation between the energy savings and common parallel system metrics. Simulation results show that our strategy promises very significant savings in actual devices.
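The relation itself is not reproduced in the abstract; the sketch below illustrates the principle under the common assumption that dynamic power scales as C * V**2 * f with supply voltage scaled in proportion to frequency, expressed in terms of the usual speedup and efficiency metrics:

    def energy_ratio(p, efficiency=1.0):
        # Dynamic power ~ C * V**2 * f; with voltage scaled in proportion to
        # frequency, power ~ f**3. Running p processors at frequency f / S,
        # where S = p * efficiency is the speedup, keeps execution time
        # constant, so:
        #   E_parallel / E_serial = p * (f / S)**3 / f**3 = p / S**3
        s = p * efficiency
        return p / s ** 3

    print(energy_ratio(4))       # 0.0625: 1/16 of the serial energy
    print(energy_ratio(4, 0.8))  # ~0.122: savings shrink as efficiency drops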
Abstract:
This paper presents a method for the automatic identification of dust devil tracks in MOC NA and HiRISE images of Mars. The method is based on Mathematical Morphology and is able to successfully process those images despite their differences in spatial resolution and scene size. A dataset of 200 images from the surface of Mars, representative of the diversity of those track features, was used for developing, testing and evaluating the method, comparing the outputs with manually produced reference images. Analysis showed a mean accuracy of about 92%. We also give some examples of how to use the results to obtain information about dust devils, namely mean track width, main direction of movement, and coverage per scene.
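The abstract does not detail the morphological pipeline; as an illustration of the kind of operation involved, the sketch below uses a black top-hat to enhance features darker than their surroundings, with a structuring-element size and threshold factor that would need tuning to each image's resolution:

    import numpy as np
    from scipy import ndimage

    def dark_track_mask(image, size=15, k=2.0):
        # Black top-hat (morphological closing minus the image) enhances
        # features darker than their surroundings, such as dust devil
        # tracks; a simple global threshold then yields a binary mask.
        tophat = ndimage.black_tophat(image.astype(float), size=size)
        return tophat > tophat.mean() + k * tophat.std()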
Abstract:
This paper proposes a methodology for the automatic extraction of building roof contours from a Digital Elevation Model (DEM), which is generated through the regularization of an available laser point cloud. The methodology is based on two steps. First, in order to detect high objects (buildings, trees etc.), the DEM is segmented through a recursive splitting technique and a Bayesian merging technique. The recursive splitting technique uses the quadtree structure to subdivide the DEM into homogeneous regions. In order to minimize the fragmentation commonly observed in the results of recursive splitting segmentation, a region merging technique based on the Bayesian framework is applied to the previously segmented data. The high-object polygons are extracted using vectorization and polygonization techniques. Second, the building roof contours are identified among all high objects extracted previously. Taking into account some roof properties and some feature measurements (e.g., area, rectangularity, and angles between principal axes of the roofs), an energy function was developed based on the Markov Random Field (MRF) model. The solution of this function is a polygon set corresponding to building roof contours and is found using a minimization technique, such as the Simulated Annealing (SA) algorithm. Experiments carried out with laser scanning DEMs showed that the methodology works properly, delivering roof contours with approximately 90% shape accuracy and no false positives.
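The exact energy terms are not given in the abstract; the following sketch shows the general shape of such an MRF labelling energy and its minimization by a Simulated Annealing loop, with made-up feature scores and a binary roof / non-roof label per polygon:

    import math
    import random

    def mrf_energy(labels, polys, beta=0.2):
        # Unary term: labelling a polygon "roof" (1) costs little when its
        # feature score (area, rectangularity, axis angles...) is high;
        # pairwise term: neighbouring polygons are encouraged to agree.
        e = 0.0
        for i, (score, nbrs) in enumerate(polys):
            e += (1.0 - score) if labels[i] else score
            e += beta * sum(1 for j in nbrs if labels[j] != labels[i])
        return e

    def anneal(polys, t=2.0, alpha=0.999, iters=5000):
        labels = [random.randint(0, 1) for _ in polys]
        for _ in range(iters):
            i = random.randrange(len(labels))
            trial = labels[:]
            trial[i] ^= 1                        # flip one polygon's label
            d = mrf_energy(trial, polys) - mrf_energy(labels, polys)
            if d <= 0 or random.random() < math.exp(-d / t):
                labels = trial
            t = max(t * alpha, 1e-6)             # geometric cooling
        return labels

    # Toy input: (feature score in [0, 1], indices of neighbouring polygons).
    polys = [(0.9, [1]), (0.8, [0, 2]), (0.1, [1])]
    print(anneal(polys))  # most runs converge to [1, 1, 0]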
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)