954 results for tabu search algorithm
Abstract:
Heuristics for stochastic and dynamic vehicle routing problems are often kept relatively simple, in part due to the high computational burden resulting from having to consider stochastic information in some form. In this work, three existing heuristics are extended by three different local search variations: a first improvement descent using stochastic information, a tabu search using stochastic information when updating the incumbent solution, and a tabu search using stochastic information when selecting moves based on a list of moves determined through a proxy evaluation. In particular, the three local search variations are designed to utilize stochastic information in the form of sampled scenarios. The results indicate that adding local search using stochastic information to the existing heuristics can further reduce operating costs for shipping companies by 0.5–2%. While the existing heuristics could produce structurally different solutions even when using similar stochastic information in the search, the appended local search methods seem able to make the final solutions more similar in structure.
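As an illustration of the general idea of using sampled scenarios inside a tabu search, the sketch below averages a solution's cost over a fixed set of scenarios both when ranking moves and when updating the incumbent. It is a generic outline under assumed helper callables (`neighbors`, `route_cost`, `sample_scenarios`), not the heuristics evaluated in the paper.

```python
def scenario_cost(solution, scenarios, route_cost):
    """Average cost of a solution over the sampled scenarios."""
    return sum(route_cost(solution, s) for s in scenarios) / len(scenarios)


def tabu_search(initial, neighbors, route_cost, sample_scenarios,
                iterations=100, tenure=10, n_scenarios=30):
    scenarios = sample_scenarios(n_scenarios)   # one fixed sample keeps comparisons consistent
    best = current = initial
    best_cost = scenario_cost(best, scenarios, route_cost)
    tabu = {}                                   # move -> iteration until which it stays tabu

    for it in range(iterations):
        candidates = [(move, sol) for move, sol in neighbors(current)
                      if tabu.get(move, -1) < it]
        if not candidates:
            break
        # rank moves by their sampled-scenario cost and take the best non-tabu one
        move, current = min(candidates,
                            key=lambda ms: scenario_cost(ms[1], scenarios, route_cost))
        tabu[move] = it + tenure
        cost = scenario_cost(current, scenarios, route_cost)
        if cost < best_cost:                    # incumbent update also uses the scenarios
            best, best_cost = current, cost
    return best, best_cost
```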
Abstract:
Hardware/software partitioning (HSP) is a key task in embedded system co-design. Its main goal is to decide which components of an application are to be executed on a general-purpose processor (software) and which on specific hardware, taking into account a set of restrictions expressed as metrics. In recent years, several approaches driven by metaheuristic algorithms have been proposed for solving the HSP problem. However, due to the diversity of models and metrics used, the choice of the best-suited algorithm remains an open problem. This article presents the results of applying a fuzzy approach to the HSP problem. This approach is more flexible than many others because it can accept reasonably good solutions and reject those that do not seem good. In this work we compare six metaheuristic algorithms: Random Search, Tabu Search, Simulated Annealing, Hill Climbing, Genetic Algorithm and Evolutionary Strategy. The presented model aims to simultaneously minimize the hardware area and the execution time. The obtained results show that Restart Hill Climbing is the best-performing algorithm in most cases.
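A minimal sketch of the restart hill climbing idea applied to a binary hardware/software partition, minimizing a weighted sum of area and execution time; the weights, the area/time tables and the single-bit-flip neighbourhood are illustrative assumptions, not the paper's fuzzy model.

```python
import random


def cost(partition, hw_area, hw_time, sw_time, w_area=0.5, w_time=0.5):
    """Weighted sum of hardware area and execution time for a 0/1 partition
    vector (1 = implemented in hardware, 0 = software)."""
    area = sum(a for a, p in zip(hw_area, partition) if p == 1)
    time = sum(ht if p == 1 else st
               for ht, st, p in zip(hw_time, sw_time, partition))
    return w_area * area + w_time * time


def restart_hill_climbing(hw_area, hw_time, sw_time, restarts=20, steps=500):
    n = len(hw_area)
    best, best_cost = None, float("inf")
    for _ in range(restarts):                       # independent random restarts
        x = [random.randint(0, 1) for _ in range(n)]
        for _ in range(steps):                      # flip one component HW <-> SW
            y = x[:]
            y[random.randrange(n)] ^= 1
            if cost(y, hw_area, hw_time, sw_time) <= cost(x, hw_area, hw_time, sw_time):
                x = y
        c = cost(x, hw_area, hw_time, sw_time)
        if c < best_cost:
            best, best_cost = x, c
    return best, best_cost
```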
Abstract:
Hardware/software partitioning is a fundamental task in embedded system co-design. Taking the design metrics into account, it decides which components will be executed on a general-purpose processor (software) and which on specific hardware. In recent years, several solutions to the partitioning problem driven by metaheuristic algorithms have been proposed. However, due to the diversity of models and metrics used, the choice of the most appropriate algorithm remains an open problem. This work presents a comparison of six metaheuristic algorithms: Random Search, Tabu Search, Simulated Annealing, Stochastic Hill Climbing, Genetic Algorithm and Evolution Strategy. The model used in the comparison aims to minimize the occupied area and the execution time; the model's constraints are treated as penalties so that other solutions are included in the search space. The results show that Stochastic Hill Climbing and Evolution Strategy obtain the best results overall, followed by the Genetic Algorithm.
Abstract:
The field of linear optical quantum computation (LOQC) will soon need a repertoire of experimental milestones. We make progress in this direction by describing several experiments based on Grover's algorithm. These experiments range from a relatively simple implementation using only a single nonscalable controlled-NOT (CNOT) gate to the most complex, requiring two concatenated scalable CNOT gates, and thus form a useful set of early milestones for LOQC. We also give a complete description of basic LOQC using polarization-encoded qubits, making use of many simplifications to the original scheme of Knill, Laflamme, and Milburn [E. Knill, R. Laflamme, and G. J. Milburn, Nature (London) 409, 46 (2001)].
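For reference, a tiny numerical sketch of the two-qubit Grover search that such experiments implement: with N = 4 and ideal gates, a single oracle-plus-diffusion iteration concentrates all probability on the marked item. The marked index and the ideal-gate assumption are illustrative, not the optical setup of the paper.

```python
import numpy as np

N = 4                                    # two qubits -> search space of size 4
marked = 2                               # hypothetical marked index

state = np.full(N, 1 / np.sqrt(N))       # uniform superposition after Hadamards

oracle = np.eye(N)                       # oracle flips the phase of the marked item
oracle[marked, marked] = -1

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean

state = diffusion @ (oracle @ state)
print(np.abs(state) ** 2)                # ~[0, 0, 1, 0]: marked item found with certainty
```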
Abstract:
This paper considers the problem of concept generalization in decision-making systems, taking into account such features of real-world databases as large size, incompleteness and inconsistency of the stored information. Methods from rough set theory (lower and upper approximations, positive regions and reducts) are used to solve this problem. A new discretization algorithm for continuous attributes is proposed; it substantially improves the overall performance of generalization algorithms and can be applied to the processing of real-valued attributes in large data tables. A search algorithm for significant attributes, combined with the discretization stage, is also developed; it avoids splitting the continuous domains of insignificant attributes into intervals.
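A minimal sketch of the lower and upper approximations mentioned above, computed from the indiscernibility classes induced by a chosen attribute subset; the toy decision table is a made-up example, not data from the paper.

```python
from collections import defaultdict


def approximations(table, attrs, target_decision):
    """Lower/upper approximation of the concept `decision == target_decision`
    w.r.t. the indiscernibility relation induced by the attribute subset `attrs`."""
    classes = defaultdict(set)                   # indiscernibility classes
    for i, (values, _) in enumerate(table):
        classes[tuple(values[a] for a in attrs)].add(i)

    concept = {i for i, (_, d) in enumerate(table) if d == target_decision}
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= concept:                       # class entirely inside the concept
            lower |= cls
        if cls & concept:                        # class overlaps the concept
            upper |= cls
    return lower, upper


# Toy decision table: (attribute values, decision)
table = [({"a": 1, "b": 0}, "yes"),
         ({"a": 1, "b": 0}, "no"),
         ({"a": 0, "b": 1}, "yes")]
print(approximations(table, ["a", "b"], "yes"))   # ({2}, {0, 1, 2})
```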
Abstract:
ACM Computing Classification System (1998): I.2.8, G.1.6.
Abstract:
In recent years genetic algorithms have emerged as a useful tool for the heuristic solution of complex discrete optimisation problems. In particular there has been considerable interest in their use in tackling problems arising in the areas of scheduling and timetabling. However, the classical genetic algorithm paradigm is not well equipped to handle constraints and successful implementations usually require some sort of modification to enable the search to exploit problem specific knowledge in order to overcome this shortcoming. This paper is concerned with the development of a family of genetic algorithms for the solution of a nurse rostering problem at a major UK hospital. The hospital is made up of wards of up to 30 nurses. Each ward has its own group of nurses whose shifts have to be scheduled on a weekly basis. In addition to fulfilling the minimum demand for staff over three daily shifts, nurses’ wishes and qualifications have to be taken into account. The schedules must also be seen to be fair, in that unpopular shifts have to be spread evenly amongst all nurses, and other restrictions, such as team nursing and special conditions for senior staff, have to be satisfied. The basis of the family of genetic algorithms is a classical genetic algorithm consisting of n-point crossover, single-bit mutation and a rank-based selection. The solution space consists of all schedules in which each nurse works the required number of shifts, but the remaining constraints, both hard and soft, are relaxed and penalised in the fitness function. The talk will start with a detailed description of the problem and the initial implementation and will go on to highlight the shortcomings of such an approach, in terms of the key element of balancing feasibility, i.e. covering the demand and work regulations, and quality, as measured by the nurses’ preferences. A series of experiments involving parameter adaptation, niching, intelligent weights, delta coding, local hill climbing, migration and special selection rules will then be outlined and it will be shown how a series of these enhancements were able to eradicate these difficulties. Results based on several months’ real data will be used to measure the impact of each modification, and to show that the final algorithm is able to compete with a tabu search approach currently employed at the hospital. The talk will conclude with some observations as to the overall quality of this approach to this and similar problems.
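A schematic outline of the classical genetic algorithm described above, with n-point crossover, single-bit mutation and rank-based selection over a penalized fitness. The bit-string chromosome and `penalised_fitness` are placeholders standing in for the roster encoding and the coverage/preference penalties of the talk, and lower fitness values are assumed to mean better rosters.

```python
import random


def n_point_crossover(a, b, n=2):
    points = sorted(random.sample(range(1, len(a)), n))
    child, parents, start = [], (a, b), 0
    for i, p in enumerate(points + [len(a)]):
        child += parents[i % 2][start:p]         # alternate parents between cut points
        start = p
    return child


def single_bit_mutation(chrom, rate=0.01):
    return [g ^ 1 if random.random() < rate else g for g in chrom]


def rank_select(population, fitness):
    ranked = sorted(population, key=fitness)     # lower penalty = better roster
    weights = range(len(ranked), 0, -1)          # best rank gets the highest weight
    return random.choices(ranked, weights=weights, k=1)[0]


def evolve(population, penalised_fitness, generations=200):
    for _ in range(generations):
        population = [single_bit_mutation(
                          n_point_crossover(rank_select(population, penalised_fitness),
                                            rank_select(population, penalised_fitness)))
                      for _ in range(len(population))]
    return min(population, key=penalised_fitness)
```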
Abstract:
Non-orthogonal multiple access (NOMA) is emerging as a promising multiple access technology for fifth-generation cellular networks to address the fast-growing mobile data traffic. It applies superposition coding at the transmitters, allowing simultaneous allocation of the same frequency resource to multiple intra-cell users. Successive interference cancellation is used at the receivers to cancel intra-cell interference. User pairing and power allocation (UPPA) is a key design aspect of NOMA. Existing UPPA algorithms are mainly based on exhaustive search, whose extensive computational complexity can severely affect NOMA performance. A fast proportional fairness (PF) scheduling based UPPA algorithm is proposed to address this problem. The novel idea is to form user pairs around the users with the highest PF metrics, with pre-configured fixed power allocation. System-level simulation results show that the proposed algorithm is significantly faster than the existing exhaustive search algorithm (seven times faster for the scenario with 20 users), with negligible throughput loss.
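An illustrative sketch of the pairing idea: rank users by a proportional-fairness metric, anchor each pair on the highest-PF user remaining and match it with the candidate that maximizes the pair's PF sum under a fixed, pre-configured power split. The channel model, the power split and the data layout are assumptions, not the paper's exact algorithm.

```python
import math


def pf_metric(user):
    """Proportional-fairness metric: instantaneous rate over average throughput."""
    return user["rate"] / max(user["avg_throughput"], 1e-9)


def pair_rates(strong, weak, p_strong=0.2, p_weak=0.8, noise=1.0):
    """Downlink NOMA rates for a fixed power split: the weak user decodes while
    treating the strong user's signal as noise; the strong user cancels the
    weak user's signal via SIC."""
    sinr_weak = p_weak * weak["gain"] / (p_strong * weak["gain"] + noise)
    sinr_strong = p_strong * strong["gain"] / noise
    return math.log2(1 + sinr_strong), math.log2(1 + sinr_weak)


def pf_pairing(users):
    pool = sorted(users, key=pf_metric, reverse=True)
    pairs = []
    while len(pool) >= 2:
        anchor = pool.pop(0)                     # highest remaining PF metric
        partner = max(pool, key=lambda u: sum(
            r / max(v["avg_throughput"], 1e-9)
            for r, v in zip(pair_rates(anchor, u), (anchor, u))))
        pool.remove(partner)
        pairs.append((anchor, partner))
    return pairs
```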
Abstract:
This thesis addresses the problem of production scheduling and optimization in a multi-machine environment with constraints on material resources in a plastic extrusion plant. Minimizing the weighted sum of tardiness is the economic criterion around which the study is built, as it is a very important criterion for meeting deadlines. We propose an exact approach, via a mathematical formulation capable of producing optimal solutions, and a heuristic approach based on two solution-construction methods (serial and parallel) and a set of neighborhood search methods (simulated annealing, tabu search, GRASP and a genetic algorithm) with five neighborhood variants. To remain fully consistent with the reality of the plastics industry, we took into account certain very common characteristics, such as tool changeover times on the machines when one production order follows another on a given machine. The availability of the extruders and extrusion dies is the bottleneck in this scheduling problem. Series of experiments based on test problems were carried out to evaluate the quality of the solutions obtained with the different proposed algorithms. The analysis of the results showed that the construction methods alone are not sufficient to ensure good results and that the neighborhood search methods produce solutions of very high quality. The choice of neighborhood is important for refining the quality of the solution obtained. Keywords: scheduling, optimization, extrusion, mathematical formulation, heuristic, simulated annealing, tabu search, GRASP, genetic algorithm
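A minimal sketch of a serial construction rule of the kind described above for parallel machines with sequence-dependent setup times: jobs are taken in due-date order and placed on the machine that can finish them earliest, and the total weighted tardiness is accumulated. The data layout and the earliest-due-date ordering are assumptions, not the thesis' exact heuristic.

```python
def serial_construction(jobs, n_machines, setup):
    """jobs: dict id -> {'proc': p, 'due': d, 'weight': w};
    setup(prev_job_or_None, job_id) -> sequence-dependent changeover time."""
    finish = [0.0] * n_machines          # current completion time per machine
    last = [None] * n_machines           # last job scheduled on each machine
    schedule = [[] for _ in range(n_machines)]
    tardiness = 0.0

    for j in sorted(jobs, key=lambda j: jobs[j]["due"]):     # earliest-due-date order
        # choose the machine on which job j would complete earliest
        m = min(range(n_machines),
                key=lambda k: finish[k] + setup(last[k], j) + jobs[j]["proc"])
        finish[m] += setup(last[m], j) + jobs[j]["proc"]
        last[m] = j
        schedule[m].append(j)
        tardiness += jobs[j]["weight"] * max(0.0, finish[m] - jobs[j]["due"])
    return schedule, tardiness
```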
Abstract:
This paper presents a strategy for solving the WDM optical network planning problem, specifically the Routing and Wavelength Allocation (RWA) problem with the objective of minimizing the number of wavelengths used; in this case, the problem is known as Min-RWA. Two meta-heuristics (Tabu Search and Simulated Annealing) are applied to obtain solutions of good quality with high performance. The key point is to allow degradation of the maximum load on the virtual links in favor of minimizing the number of wavelengths used; the objective is to find a good compromise between the virtual topology metric (load in Gb/s) and the physical topology metric (number of wavelengths). The simulations suggest good results when compared to some existing in the literature.
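To make the Min-RWA objective concrete, the sketch below shows a first-fit wavelength assignment over fixed routes, a common baseline rather than the paper's meta-heuristics: two lightpaths that share a physical link may not reuse the same wavelength, and the number of distinct wavelengths is the quantity being minimized.

```python
def first_fit_wavelengths(lightpaths):
    """lightpaths: list of sets of physical links used by each lightpath.
    Returns one wavelength index per lightpath and the total number used."""
    assignment = []
    for links in lightpaths:
        # wavelengths already taken by previously assigned, link-sharing lightpaths
        used = {w for w, other in zip(assignment, lightpaths) if links & other}
        w = 0
        while w in used:
            w += 1
        assignment.append(w)
    return assignment, (max(assignment) + 1 if assignment else 0)


# Example: three lightpaths, the first two share link (1, 2)
print(first_fit_wavelengths([{(1, 2), (2, 3)}, {(1, 2)}, {(4, 5)}]))
# -> ([0, 1, 0], 2)
```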
Abstract:
Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions to be made: the optimal number of hub nodes, their locations and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP) hub nodes have no capacity constraints and non-hub nodes must be assigned to only one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic as well as a two-stage integrated tabu search heuristic to solve this problem. With multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the locational and allocational part of the problem. Computational experiments using typical benchmark problems (Civil Aeronautics Board (CAB) and Australian Post (AP) data sets) as well as new and modified instances show that our approaches consistently return the optimal or best-known results in very short CPU times, thus allowing the possibility of efficiently solving larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and the 12 AP instances up to 100 nodes, as well as for the corresponding newly generated AP instances with reduced fixed costs.
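A schematic skeleton of the multi-start strategy named above: several random initial hub configurations are each improved by tabu search and the cheapest result is kept. The helper callables are placeholders, not the authors' implementation; in the two-stage integrated variant described in the abstract, the inner tabu search would also improve the allocational part, not only the locational one.

```python
def multi_start_tabu(build_random_solution, tabu_search, cost, starts=10):
    """Run tabu search from several random starting solutions and keep the best."""
    best, best_cost = None, float("inf")
    for _ in range(starts):
        candidate = tabu_search(build_random_solution())   # improve each random start
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost
```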
Abstract:
Read-only-memory-based (ROM-based) quantum computation (QC) is an alternative to oracle-based QC. It has the advantages of being less magical, and being more suited to implementing space-efficient computation (i.e., computation using the minimum number of writable qubits). Here we consider a number of small (one- and two-qubit) quantum algorithms illustrating different aspects of ROM-based QC. They are: (a) a one-qubit algorithm to solve the Deutsch problem; (b) a one-qubit binary multiplication algorithm; (c) a two-qubit controlled binary multiplication algorithm; and (d) a two-qubit ROM-based version of the Deutsch-Jozsa algorithm. For each algorithm we present experimental verification using nuclear magnetic resonance ensemble QC. The average fidelities for the implementation were in the ranges 0.9-0.97 for the one-qubit algorithms, and 0.84-0.94 for the two-qubit algorithms. We conclude with a discussion of future prospects for ROM-based quantum computation. We propose a four-qubit algorithm, using Grover's iterate, for solving a miniature real-world problem relating to the lengths of paths in a network.
Abstract:
A manufacturing system is inherently dynamic, as shown by the many kinds of random occurrences and perturbations in working conditions and requirements over time. In this kind of environment it is important to be able to adapt existing schedules efficiently and effectively, on a continuous basis, in response to such disturbances, while keeping performance levels. The application of meta-heuristics and multi-agent systems to the resolution of this class of real-world scheduling problems seems very promising. This paper presents a prototype for MASDScheGATS (Multi-Agent System for Distributed Manufacturing Scheduling with Genetic Algorithms and Tabu Search).