990 results for Heuristic Methods


Relevance:

70.00%

Publisher:

Abstract:

The present paper solves the multi-level capacitated lot sizing problem with backlogging (MLCLSPB) by combining a genetic algorithm with the solution of mixed-integer programming models and the fix-and-optimize improvement heuristic. This approach is evaluated over sets of benchmark instances and compared to methods from the literature. Computational results indicate that the proposed method is competitive with other approaches from the literature. © 2013 IEEE.
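
As a rough illustration of the fix-and-optimize idea mentioned above, the sketch below applies it to a toy single-item uncapacitated lot-sizing instance rather than the authors' MLCLSPB model: setup decisions outside a small rolling window are frozen while the window is re-optimised by enumeration. All data and parameters are illustrative.

```python
from itertools import product

demand = [20, 0, 35, 10, 0, 25, 15, 5]      # toy demand per period
setup_cost, holding_cost = 50.0, 1.0         # illustrative cost parameters

def plan_cost(setups):
    """Total cost when each demand is produced at the latest open setup before it."""
    cost = setup_cost * sum(setups)
    for t, d in enumerate(demand):
        if d == 0:
            continue
        opens = [u for u in range(t + 1) if setups[u]]
        if not opens:
            return float("inf")              # demand occurs before any open setup
        cost += holding_cost * (t - opens[-1]) * d
    return cost

def fix_and_optimize(setups, window=3, sweeps=3):
    setups, best = list(setups), plan_cost(setups)
    for _ in range(sweeps):
        for start in range(0, len(demand), window):
            idx = list(range(start, min(start + window, len(demand))))
            for pattern in product([0, 1], repeat=len(idx)):   # re-optimise only the window
                cand = list(setups)
                for k, t in enumerate(idx):
                    cand[t] = pattern[k]
                c = plan_cost(cand)
                if c < best:
                    best, setups = c, cand
    return setups, best

print(fix_and_optimize([1] * len(demand)))   # start from "produce every period"
```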

Relevance:

70.00%

Publisher:

Abstract:

This paper presents a new hyper-heuristic method using Case-Based Reasoning (CBR) for solving course timetabling problems. The term hyper-heuristics has recently been employed to refer to 'heuristics that choose heuristics' rather than heuristics that operate directly on given problems. One of the overriding motivations of hyper-heuristic methods is the attempt to develop techniques that can operate with greater generality than is currently possible. The basic idea is to maintain a case base of information about the most successful heuristics for a range of previous timetabling problems, and to use this previous knowledge to predict the best heuristic for the new problem at hand. Knowledge discovery techniques are used to train the CBR system in order to improve its prediction performance. Initial results presented in this paper are good, and we conclude by discussing the considerable promise for future work in this area.
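
A minimal sketch of the case-based selection idea described above, assuming a toy feature vector per timetabling instance; the features, similarity weights, case base and heuristic names are illustrative, not the paper's actual representation.

```python
import math

# case base: (feature vector describing a past problem, heuristic that worked best on it)
CASE_BASE = [
    ((120, 15, 0.35), "largest_degree_first"),
    ((300, 40, 0.10), "saturation_degree"),
    ((80,  10, 0.60), "largest_enrolment_first"),
]

def similarity(a, b, weights=(1.0, 1.0, 1.0)):
    # negated weighted Euclidean distance; knowledge discovery could tune the weights
    return -math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))

def choose_heuristic(new_problem):
    """Return the heuristic stored with the most similar past case."""
    best_case = max(CASE_BASE, key=lambda case: similarity(case[0], new_problem))
    return best_case[1]

print(choose_heuristic((150, 18, 0.30)))   # -> heuristic of the nearest stored case
```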

Relevance:

70.00%

Publisher:

Abstract:

The Kidney Exchange Problem (KEP) is an optimisation problem that was first discussed in Rapaport (1986) but has only more recently been the subject of much work by combinatorial optimisation researchers. This has been in parallel with its increased prevalence in the medical community. In the basic formulation of a KEP, each instance of the problem features a directed graph D = (V, A). Each node i ∈ V represents an incompatible pair, in which the patient needs to trade kidneys with the patient of another incompatible pair. The goal is to find an optimal set of cycles such that as many patients as possible receive a transplant. The problem is further complicated by the imposition of a cycle-size constraint, usually taken to be 3 or 4. Kidney exchange programs around the world implement different algorithms to solve the allocation problem by matching up kidneys from potential donors to patients. In some systems all transplants are considered equally desirable, whereas in others ranking criteria, such as the age of the patient or the distance they will need to travel, are applied; hence the multi-criteria nature of the KEP. To address this multi-criteria aspect, in this paper we propose a two-stage approach for the kidney exchange optimisation problem. In the first stage the goal is to find the optimal number of exchanges, and in the second stage the goal is to maximise the weighted sum of the kidney matches, subject to the added constraint that the number of exchanges must remain optimal. The idea can potentially be extended to multiple objectives by repeating the process in multiple runs. In our preliminary numerical experiments, we first find the maximum number of kidney matches using an existing open-source exact algorithm of Anderson et al. (2015). This solution is then used as an initial solution for the stage-two optimisation problem, in which two heuristic methods, steepest ascent and random ascent, are implemented to obtain good-quality solutions for the objective of maximising the total weight of exchanges. The neighbourhood is obtained by two-swaps. It is our intention in the future to implement a varying neighbourhood scheme within the same two-heuristic framework, or within other meta-heuristic frameworks.
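
A hedged sketch of the two local-search rules named in the abstract, steepest ascent and random ascent with a two-swap neighbourhood, shown on a toy weighted assignment instead of the full KEP model; the cycle-length feasibility checks and exchange weights of the actual problem are omitted.

```python
import itertools, random

W = [[4, 1, 7, 2],
     [3, 6, 2, 5],
     [8, 2, 4, 1],
     [1, 9, 3, 6]]

def value(perm):
    return sum(W[i][p] for i, p in enumerate(perm))

def steepest_ascent(perm):
    perm, improved = list(perm), True
    while improved:
        improved, best_delta, best_swap = False, 0, None
        for i, j in itertools.combinations(range(len(perm)), 2):   # scan all two-swaps
            cand = list(perm); cand[i], cand[j] = cand[j], cand[i]
            delta = value(cand) - value(perm)
            if delta > best_delta:
                best_delta, best_swap = delta, (i, j)
        if best_swap:
            i, j = best_swap
            perm[i], perm[j] = perm[j], perm[i]
            improved = True
    return perm

def random_ascent(perm, iters=500):
    perm = list(perm)
    for _ in range(iters):                                         # accept any improving swap
        i, j = random.sample(range(len(perm)), 2)
        cand = list(perm); cand[i], cand[j] = cand[j], cand[i]
        if value(cand) > value(perm):
            perm = cand
    return perm

start = list(range(4))
print(value(steepest_ascent(start)), value(random_ascent(start)))
```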

Relevance:

60.00%

Publisher:

Abstract:

Train delay is one of the most important indices for evaluating railway service quality. Because of the interactions of movements among trains, a delayed train may conflict with trains scheduled on other lines at a junction area. A train that loses the conflict may be forced to stop or slow down by restrictive signals, which leads to run-time losses and may further enlarge delays. This paper proposes a time-saving train control method to recover delays as soon as possible. In the proposed method, golden section search is adopted to identify the optimal train speed at the expected time of restrictive signal aspect upgrades, which enables the train to depart from the conflicting area as soon as possible. A heuristic method is then developed to attain the advisory train speed profile assisting drivers in train control. A simulation study indicates that the proposed method enables the train to recover delays as soon as possible in case of disturbances at railway junctions, in comparison with the traditional maximum traction strategy and the green wave strategy.
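
The golden section search adopted above is a standard one-dimensional routine; the sketch below shows it on a toy unimodal run-time-loss curve rather than a calibrated train dynamics model, so the objective and bounds are purely illustrative.

```python
import math

GR = (math.sqrt(5) - 1) / 2   # inverse golden ratio, about 0.618

def golden_section_min(f, lo, hi, tol=1e-4):
    """Minimise a unimodal function f on [lo, hi]."""
    a, b = lo, hi
    c, d = b - GR * (b - a), a + GR * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                # minimum lies in [a, d]
            c = b - GR * (b - a)
        else:
            a, c = c, d                # minimum lies in [c, b]
            d = a + GR * (b - a)
    return (a + b) / 2

# toy objective: run-time loss as a function of speed (km/h) at the signal upgrade point
loss = lambda v: (v - 62.0) ** 2 / 40.0 + 3.0
print(round(golden_section_min(loss, 20.0, 120.0), 2))   # -> approximately 62.0
```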

Relevance:

60.00%

Publisher:

Abstract:

Evolutionary computation is an effective tool for solving optimization problems. However, its significant computational demand has limited its real-time and on-line applications, especially in embedded systems with limited computing resources, e.g., mobile robots. Heuristic methods such as genetic algorithm (GA) based approaches have been investigated for robot path planning in dynamic environments. However, research on the simulated annealing (SA) algorithm, another popular evolutionary computation algorithm, for dynamic path planning is still limited, mainly due to its high computational demand. An enhanced SA approach, which integrates two additional mathematical operators and initial path selection heuristics into the standard SA, is developed in this work for robot path planning in dynamic environments with both static and dynamic obstacles. It improves the computing performance of the standard SA significantly while giving an optimal or near-optimal robot path solution, making real-time and on-line applications possible. Using the classic and deterministic Dijkstra algorithm as a benchmark, comprehensive case studies are carried out to demonstrate the performance of the enhanced SA and other SA algorithms in various dynamic path planning scenarios.
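
A compact sketch of the standard simulated annealing loop that the enhanced approach builds on, minimising a toy two-dimensional path cost; the additional mathematical operators and initial-path heuristics of the enhanced SA are not reproduced, and the cost function, obstacle and parameters are assumptions for illustration only.

```python
import math, random

def simulated_annealing(cost, neighbour, x0, t0=10.0, cooling=0.95, iters_per_t=50, t_min=1e-3):
    x, best, t = x0, x0, t0
    while t > t_min:
        for _ in range(iters_per_t):
            y = neighbour(x)
            delta = cost(y) - cost(x)
            if delta < 0 or random.random() < math.exp(-delta / t):   # Metropolis acceptance
                x = y
                if cost(x) < cost(best):
                    best = x
        t *= cooling                                                   # geometric cooling
    return best

# toy 2-D "path cost": distance to goal plus a penalty near a static obstacle at (5, 5)
goal = (10.0, 10.0)
cost = lambda p: math.dist(p, goal) + 4.0 * math.exp(-((p[0] - 5) ** 2 + (p[1] - 5) ** 2))
neighbour = lambda p: (p[0] + random.uniform(-0.5, 0.5), p[1] + random.uniform(-0.5, 0.5))

print(simulated_annealing(cost, neighbour, (0.0, 0.0)))
```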

Relevance:

60.00%

Publisher:

Abstract:

Increasing global competition, rapid technological changes, advances in manufacturing and information technology and discerning customers are forcing supply chains to adopt improvement practices that enable them to deliver high-quality products at lower cost and in a shorter period of time. A lean initiative is one of the most effective approaches toward achieving this goal. In the lean improvement process, it is critical to measure current and desired performance levels in order to clearly evaluate lean implementation efforts. Many attempts have been made to measure supply chain performance incorporating both quantitative and qualitative measures, but they have failed to provide an effective method of measuring performance improvements in dynamic lean supply chain situations. Appropriate measurement of lean supply chain performance has therefore become imperative. Many lean tools are available for supply chains; however, the effectiveness of a lean tool depends on the type of product and supply chain. One tool may be highly effective for a supply chain involved in high-volume products but may not be effective for low-volume products. There is currently no systematic methodology available for selecting appropriate lean strategies based on the type of supply chain and market strategy. This thesis develops an effective method to measure supply chain performance consisting of both quantitative and qualitative metrics, and investigates the effects of product types and lean tool selection on supply chain performance. Supply chain performance metrics and the effects of various lean tools on the performance metrics mentioned in the SCOR framework have been investigated. A lean supply chain model based on the SCOR metric framework is then developed, in which non-lean and lean, as well as quantitative and qualitative, metrics are incorporated as appropriate. The values of the metrics are converted into triangular fuzzy numbers using similarity rules and heuristic methods. Data have been collected from an apparel manufacturing company for multiple supply chain products, and a fuzzy-based method is then applied to measure performance improvements in the supply chains. Using the fuzzy TOPSIS method, which chooses an optimum alternative that maximises similarity to the positive ideal solution and minimises similarity to the negative ideal solution, the performances of lean and non-lean supply chain situations for three different apparel products have been evaluated. To address the research questions related to an effective performance evaluation method and the effects of lean tools on different types of supply chains, a conceptual framework and two hypotheses are investigated. Empirical results show that implementation of lean tools has significant effects on performance improvements in terms of time, quality and flexibility. The fuzzy TOPSIS based method developed is able to integrate multiple supply chain metrics into a single performance measure, while the lean supply chain model incorporates qualitative and quantitative metrics; it can therefore effectively measure the improvements in a supply chain after implementing lean tools. It is demonstrated that the product types involved in the supply chain and the ability to select the right lean tools have a significant effect on lean supply chain performance. Future research could conduct multiple case studies in different contexts.
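
For reference, the sketch below shows the crisp TOPSIS ranking step described above (closeness to the positive ideal, distance from the negative ideal) on a small illustrative decision matrix; the triangular-fuzzy extension and the SCOR-based metrics used in the thesis are omitted, and all numbers are invented for illustration.

```python
import numpy as np

# rows: supply chain alternatives, columns: criteria (time, quality, flexibility)
X = np.array([[12.0, 0.85, 3.0],
              [ 9.0, 0.90, 4.0],
              [11.0, 0.80, 5.0]])
weights = np.array([0.4, 0.4, 0.2])
benefit = np.array([False, True, True])      # time is a cost criterion, the rest are benefits

V = weights * X / np.linalg.norm(X, axis=0)              # weighted, vector-normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))  # positive ideal solution
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))  # negative ideal solution

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)          # relative closeness coefficient
print(closeness.argsort()[::-1])             # alternatives ranked best-first
```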

Relevance:

60.00%

Publisher:

Abstract:

Whole-image descriptors have recently been shown to be remarkably robust to perceptual change, especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements and the lack of a meaningful interpretation of these arbitrary thresholds limit the general applicability of these systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for a FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph's functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
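
A minimal sketch of the general idea of converting a whole-image descriptor distance into a match probability via Bayes' rule, assuming Gaussian likelihoods for "same place" and "different place" distances and an illustrative prior; this is an assumption-laden stand-in, not CAT-Graph's or the paper's actual probability model.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def match_probability(distance, prior=0.01,
                      mu_match=0.2, sigma_match=0.1,
                      mu_diff=0.8, sigma_diff=0.2):
    """P(same place | descriptor distance) under the assumed likelihoods and prior."""
    l_match = gauss(distance, mu_match, sigma_match)
    l_diff = gauss(distance, mu_diff, sigma_diff)
    evidence = prior * l_match + (1 - prior) * l_diff
    return prior * l_match / evidence if evidence > 0 else 0.0

print(match_probability(0.25))   # a small distance lifts the probability well above the prior
```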

Relevance:

60.00%

Publisher:

Abstract:

Heat exchangers are widely used in the process industries to modify the temperature and/or the physical state of material streams. A heat exchanger network can be defined as a group of interconnected heat exchangers, arranged so as to reduce the energy requirements of a system. During the operation of a network, however, the thermal efficiency of the heat exchangers decreases due to fouling. This effect increases fuel costs and carbon emissions. One alternative to mitigate the problem is to establish a schedule for cleaning the heat exchangers during network operation. This kind of approach creates a trade-off: cleaning a heat exchanger can restore its thermal efficiency, but it implies additional costs, such as labour, chemicals, etc. Furthermore, during cleaning the heat exchanger has to be bypassed by a by-pass stream, which temporarily increases energy consumption. In this context, the present work aims to explore different optimisation techniques involving stochastic and heuristic methods. For this purpose, an integrated set of computer codes was developed, combining a pseudo-stationary simulation of the fouling-related behaviour of the network with the optimisation of the cleaning schedule for this type of system. The solution of the problem indicates the time periods in which each heat exchanger should be cleaned. In the stochastic approach employed, the parameters of the genetic algorithm, such as the crossover probability and the mutation probability, were calibrated for the present problem. The heuristic approach developed is based on the sequential application of the move sets zero, one and two. Alternatively, a recursive heuristic methodology was developed in which move sets one and two are applied recursively. A hybrid approach was also developed, consisting of different combinations of the stochastic and heuristic methodologies. A comparative analysis of the methodologies was carried out to assess the most suitable approach to the present cleaning scheduling problem in terms of objective function value and computational effort. The performance of the proposed approach was explored through a series of examples, including a real Brazilian refinery. The results were promising, indicating that the optimisation techniques analysed in this work can be interesting approaches for operations involving heat exchanger networks. Among the optimisation approaches analysed, the heuristic methodology developed in the present work achieved the best results, proving competitive against the approaches from the literature with which it was compared.
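
A minimal sketch of the kind of binary genetic algorithm described above, applied to a toy cleaning-schedule chromosome (exchanger x period 0/1 matrix, flattened) with a placeholder objective; the pseudo-stationary fouling simulation, the calibrated parameter values and the move-set heuristics are not reproduced here.

```python
import random

N_EXCHANGERS, N_PERIODS = 4, 6
GENES = N_EXCHANGERS * N_PERIODS
P_CROSSOVER, P_MUTATION = 0.8, 0.02          # illustrative values, not the calibrated ones

def fitness(chrom):
    # placeholder objective: reward spreading cleanings over periods, penalise their number
    cleanings = sum(chrom)
    spread = len({i % N_PERIODS for i, g in enumerate(chrom) if g})
    return spread - 0.3 * cleanings

def evolve(pop_size=40, generations=100):
    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                         # elitism: keep the two best schedules
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)   # parents from the better half
            child = list(a)
            if random.random() < P_CROSSOVER:              # one-point crossover
                cut = random.randrange(1, GENES)
                child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < P_MUTATION else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

print(fitness(evolve()))
```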

Relevance:

60.00%

Publisher:

Abstract:

This thesis describes some aspects of a computer system for doing medical diagnosis in the specialized field of kidney disease. Because such a system faces the spectre of combinatorial explosion, this discussion concentrates on heuristics which control the number of concurrent hypotheses and on efficient "compiled" representations of medical knowledge. In particular, the differential diagnosis of hematuria (blood in the urine) is discussed in detail. A protocol of a simulated doctor/patient interaction is presented and analyzed to determine the crucial structures and processes involved in the diagnosis procedure. The data structure proposed for representing medical information revolves around elementary hypotheses which are activated when certain findings appear; the procedure of disposing of findings, activating hypotheses, evaluating hypotheses locally and combining hypotheses globally is examined for its heuristic implications. The thesis attempts to fit the problem of medical diagnosis into the framework of other Artificial Intelligence problems and paradigms, and in particular explores the notions of pure search vs. heuristic methods, linearity and interaction, local vs. global knowledge, and the structure of hypotheses within the world of kidney disease.

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we study a problem of scheduling and batching on two machines in a flow-shop and open-shop environment. Each machine processes operations in batches, and the processing time of a batch is the sum of the processing times of the operations in that batch. A setup time, which depends only on the machine, is required before a batch is processed on a machine, and all jobs in a batch remain at the machine until the entire batch is processed. The aim is to make batching and sequencing decisions, which specify a partition of the jobs into batches on each machine, and a processing order of the batches on each machine, respectively, so that the makespan is minimized. The flow-shop problem is shown to be strongly NP-hard. We demonstrate that there is an optimal solution with the same batches on the two machines; we refer to these as consistent batches. A heuristic is developed that selects the best schedule among several with one, two, or three consistent batches, and is shown to have a worst-case performance ratio of 4/3. For the open-shop, we show that the problem is NP-hard in the ordinary sense. By proving the existence of an optimal solution with one, two or three consistent batches, a close relationship is established with the problem of scheduling two or three identical parallel machines to minimize the makespan. This allows a pseudo-polynomial algorithm to be derived, and various heuristic methods to be suggested.
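
To make the batching model concrete, the sketch below evaluates the two-machine flow-shop makespan with machine-dependent batch setups and, as a simplification of the heuristic described above, enumerates contiguous splits of a fixed job sequence into at most three consistent batches; the processing and setup times are illustrative, and the authors' actual batch-selection rules are not reproduced.

```python
from itertools import combinations

p1 = [3, 5, 2, 6, 4]          # processing times on machine 1
p2 = [4, 2, 5, 3, 6]          # processing times on machine 2
s1, s2 = 2, 3                 # machine-dependent batch setup times

def makespan(batches):
    """batches: list of lists of job indices, with the same (consistent) batches on both machines."""
    c1 = c2 = 0
    for b in batches:
        c1 += s1 + sum(p1[j] for j in b)               # whole batch completes on machine 1
        c2 = max(c1, c2) + s2 + sum(p2[j] for j in b)  # then the whole batch runs on machine 2
    return c2

def best_consistent_batches(max_batches=3):
    n, best = len(p1), None
    for k in range(1, max_batches + 1):
        for cuts in combinations(range(1, n), k - 1):   # contiguous partition into k batches
            bounds = (0,) + cuts + (n,)
            batches = [list(range(bounds[i], bounds[i + 1])) for i in range(k)]
            cand = (makespan(batches), batches)
            if best is None or cand < best:
                best = cand
    return best

print(best_consistent_batches())
```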

Relevance:

60.00%

Publisher:

Abstract:

This thesis addresses several formulations and different methods for solving the Weight-constrained Minimum Spanning Tree Problem (WMST). This problem, with applications in the design of communication and telecommunication networks, is an NP-hard combinatorial optimisation problem. The WMST consists of determining, in a network with costs and weights associated with the edges, a minimum-cost spanning tree whose total weight does not exceed a given specified limit. Several formulations for the problem are presented and compared. One of them is used to develop a cutting-plane procedure based on separation, which proved very useful in obtaining solutions to the problem. In order to strengthen the formulations presented, new classes of valid inequalities are introduced, adapted from the well-known cover inequalities, extended cover inequalities and lifted cover inequalities. The new inequalities incorporate information from two sets of solutions: the set of spanning trees and the knapsack set. Several heuristic separation algorithms are presented that allow the proposed valid inequalities to be used efficiently. Based on Lagrangian decomposition, simple but efficient algorithms are presented and compared that can be used to compute lower and upper bounds on the optimal value of the WMST. Among them are two new algorithms: one based on the convexity of the Lagrangian function and another that makes use of the inclusion of valid inequalities. In order to obtain approximate solutions to the WMST, heuristic methods are used to find a feasible integer solution; the heuristic methods presented are based on the Feasibility Pump and Local Branching strategies. Computational results using all of the methods presented are reported. The results show that the different methods are quite efficient at finding solutions to the WMST.
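
As a rough illustration of the Lagrangian bounds discussed above, the sketch below dualises the weight constraint and solves an ordinary minimum spanning tree (Kruskal) with modified costs c_e + λ·w_e on a toy graph, adjusting the single multiplier by bisection; this is only a generic Lagrangian relaxation sketch, not the thesis' algorithms, and the graph data are invented.

```python
# edges: (u, v, cost, weight)
EDGES = [(0, 1, 4, 3), (0, 2, 3, 5), (1, 2, 2, 4),
         (1, 3, 6, 1), (2, 3, 5, 2), (0, 3, 7, 1)]
N_NODES, W_MAX = 4, 7

def kruskal(costs):
    """Minimum spanning tree for the modified edge costs, via union-find."""
    parent = list(range(N_NODES))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for _, e in sorted(zip(costs, EDGES)):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def lagrangian_bound(iterations=60):
    lo, hi, best = 0.0, 10.0, float("-inf")
    for _ in range(iterations):                            # bisection on the multiplier
        lam = (lo + hi) / 2
        tree = kruskal([c + lam * w for _, _, c, w in EDGES])
        cost = sum(c for _, _, c, _ in tree)
        weight = sum(w for _, _, _, w in tree)
        best = max(best, cost + lam * (weight - W_MAX))    # valid lower bound on the WMST
        if weight > W_MAX:
            lo = lam                                       # weight constraint violated: raise lambda
        else:
            hi = lam
    return best

print(round(lagrangian_bound(), 3))
```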

Relevance:

60.00%

Publisher:

Abstract:

Network design problems have received particular attention and have been widely studied because of their many applications in different fields, such as transportation and telecommunications. In this thesis we are interested in the network design problem with capacity-expansion costs. The problem is to install a set of facilities on a network in order to satisfy demand while respecting capacity constraints, where each arc may accommodate several facilities. The objective is to minimise the variable costs of transporting the commodities and the fixed costs of installing or expanding the capacity of the facilities. The method we propose for solving this problem is based on mixed-integer linear programming techniques, in particular column generation and cutting planes. These methods are embedded in a general branch-and-bound algorithm based on the linear relaxation. We tested our method on four groups of instances of different sizes and compared it with CPLEX, one of the best solvers available for optimisation problems, as well as with an existing method from the literature that combines exact and heuristic methods. Our method outperformed both of these methods, particularly on the very large instances.

Relevance:

60.00%

Publisher:

Abstract:

Radial basis functions can be combined into a network structure that has several advantages over conventional neural network solutions. However, to operate effectively the number and positions of the basis function centres must be carefully selected. Although no rigorous algorithm exists for this purpose, several heuristic methods have been suggested. In this paper a new method is proposed in which radial basis function centres are selected by the mean-tracking clustering algorithm. The mean-tracking algorithm is compared with k-means clustering and it is shown that it achieves significantly better results in terms of radial basis function performance. As well as being computationally simpler, the mean-tracking algorithm in general selects better centre positions, thus providing the radial basis functions with better modelling accuracy.
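
A minimal sketch of the conventional baseline mentioned above: k-means selected centres feeding a Gaussian RBF network whose output weights are fitted by least squares. The mean-tracking clustering algorithm itself is not reproduced, and the data are a toy one-dimensional regression problem with an assumed basis width.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return centres

def rbf_design(X, centres, width=1.0):
    d2 = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))          # Gaussian basis functions

# toy 1-D regression target
X = np.linspace(0, 2 * np.pi, 80)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(80)

centres = kmeans(X, k=8)                           # centre selection by k-means
Phi = rbf_design(X, centres)
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output layer by least squares
print(float(np.mean((Phi @ weights - y) ** 2)))    # training mean-squared error
```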

Relevance:

60.00%

Publisher:

Abstract:

This paper deals with the classical one-dimensional integer cutting stock problem, which consists of cutting a set of available stock lengths in order to produce smaller ordered items. This process is carried out in order to optimize a given objective function (e.g., minimizing waste). Our study deals with a case in which there are several stock lengths available in limited quantities. Moreover, we have focused on problems of low demand. Some heuristic methods for obtaining an integer solution are proposed and compared with others. The heuristic methods are empirically analyzed by solving a set of randomly generated instances and a set of instances from the literature. Concerning the latter, most of the optimal solutions of these instances are known; it was therefore possible to compare the solutions. The proposed methods presented very small objective function value gaps. (C) 2008 Elsevier Ltd. All rights reserved.
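
As a simple point of reference for constructive heuristics in this setting (several stock lengths available in limited quantities), the sketch below applies a first-fit-decreasing rule; it is only a generic baseline under illustrative data, not the heuristic methods proposed by the authors.

```python
def first_fit_decreasing(items, stocks):
    """items: ordered piece lengths; stocks: list of [stock length, available quantity]."""
    items = sorted(items, reverse=True)
    open_objects = []                        # each entry: [stock_length, remaining_length]
    for item in items:
        for obj in open_objects:             # first open object with enough leftover length
            if obj[1] >= item:
                obj[1] -= item
                break
        else:                                # otherwise open the smallest fitting stock left
            for stock in sorted(stocks):
                if stock[1] > 0 and stock[0] >= item:
                    stock[1] -= 1
                    open_objects.append([stock[0], stock[0] - item])
                    break
            else:
                raise ValueError(f"no remaining stock length can accommodate item {item}")
    waste = sum(rem for _, rem in open_objects)
    return open_objects, waste

items = [45, 30, 30, 25, 20, 20, 15, 10]
stocks = [[50, 3], [80, 2]]                  # [length, available quantity]
print(first_fit_decreasing(items, stocks))
```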

Relevance:

60.00%

Publisher:

Abstract:

An important production programming problem arising in the paper industry couples multi-machine scheduling with cutting stock. Concerning machine scheduling, the question is how to determine the production quantities of large rolls of paper of different types; these rolls are then cut to meet the demand for items. A schedule that minimizes setup and production costs may produce rolls that increase waste in the cutting process. On the other hand, the best number of rolls from the point of view of minimizing waste may lead to high setup costs. In this paper, coupled modeling and heuristic methods are proposed. Computational experiments are presented.