964 results for Job Shop, Train Scheduling, Meta-Heuristics


Relevance:

100.00%

Publisher:

Abstract:

Graduate programme in Mechanical Engineering - FEG

Relevance:

100.00%

Publisher:

Abstract:

This work aims to define and measure technological complexity, in order to build tools that support everyone involved in the development and manufacture of an industrial product, such as product designers and production managers. The research was developed through the following phases. A state-of-the-art analysis of definitions and measures of complexity in industry, based on the identification and study of over a hundred publications on the subject. Classification of the complexity-measurement methods proposed in the literature into five categories, with a critical analysis of the strengths and weaknesses of each, to guide the development of a new method; the main Artificial Intelligence methods were also analysed as potential tools for computing complexity. An investigation of topics related to complexity, such as indicators, technology transfer and innovation. Complexity is measured by an index belonging to the category of indicators, which are used in many industrial fields, in particular for measuring production performance. Special attention was given to the meaning and use of OEE (Overall Equipment Effectiveness), which is widespread among small and medium-sized enterprises in Emilia-Romagna and, more generally, among companies running job-shop production systems; an effective OEE calculation system was implemented at a local mechanical company. One of the most interesting applications of the complexity index lies in technology-transfer operations: introducing an innovation generally increases the complexity of the system, so the two concepts are connected. Several industrial cases of technology transfer and production-performance measurement were examined, highlighting the links between technological complexity and companies' choices. A new method for computing a product's technological-complexity index was developed, starting from the hybrid, entropy-based methodology proposed by Profs. ElMaraghy and Urbanic in 2003; the focus was on replacing the subjective values in the original formula, obtained through operator interviews, with objective ones. The new methodology was validated experimentally by applying the formula to several mechanical components, in collaboration with a mechanical manufacturing company. The work closes with considerations and conclusions on the results obtained, the proposed methodology and the applications of the new index, outlining the goals of future research. Throughout the work, connections and convergences among the various sources were highlighted, and concepts and theories from several fields were identified that provide important insights into the subject of complexity. Particular attention was devoted to the complete bibliography of Profs. ElMaraghy, currently recognized internationally as the most authoritative scholars of complexity in industry.
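
Since the abstract above centres on OEE as a production-performance indicator, a minimal sketch of the standard OEE decomposition may help; the function and the shift figures below are illustrative assumptions, not taken from the company system described in the thesis.

```python
# Minimal OEE (Overall Equipment Effectiveness) calculation.
# OEE = Availability x Performance x Quality; all inputs are illustrative.

def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """Return (availability, performance, quality, oee) as fractions."""
    run_time = planned_time - downtime
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability, performance, quality, availability * performance * quality

# Example: one 480-minute shift with 45 minutes of stops, a 0.8 min/part
# ideal cycle time, 500 parts produced, 470 of them good.
a, p, q, e = oee(480, 45, 0.8, 500, 470)
print(f"Availability {a:.1%}, Performance {p:.1%}, Quality {q:.1%}, OEE {e:.1%}")
```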

Relevance:

100.00%

Publisher:

Abstract:

This paper applies the Ant Colony System (ACS) algorithm to the sequencing of cross-distribution shuttles in a warehouse. We extend the basic Ant Colony Optimization (ACO) algorithm to minimize the total processing time of a set of transport orders for the shuttles. Compared with a greedy algorithm, the ACO algorithm is competitive and fast. In many warehouse management systems, transport orders are executed according to the FIFO (first-in-first-out) principle; in this paper, the ACO algorithm is used instead to build an optimal sequence of transport orders.
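
As a rough illustration of the approach described above, the following sketch applies a basic ACO construction-and-reinforcement loop to order sequencing; the cost matrix, parameters, and pheromone-update rule are generic assumptions and do not reproduce the paper's ACS variant.

```python
# A minimal ant-colony sketch for sequencing transport orders so that total
# travel cost is small. All parameters are illustrative.
import random

def aco_sequence(cost, n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.1):
    n = len(cost)
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_seq, best_len = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            seq = [random.randrange(n)]
            unvisited = set(range(n)) - {seq[0]}
            while unvisited:
                i = seq[-1]
                cands = list(unvisited)
                weights = [tau[i][j] ** alpha * (1.0 / (cost[i][j] + 1e-9)) ** beta
                           for j in cands]
                j = random.choices(cands, weights=weights)[0]
                seq.append(j)
                unvisited.remove(j)
            length = sum(cost[a][b] for a, b in zip(seq, seq[1:]))
            if length < best_len:
                best_seq, best_len = seq, length
        for i in range(n):                        # evaporation ...
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for a, b in zip(best_seq, best_seq[1:]):  # ... then reinforce the best
            tau[a][b] += 1.0 / best_len
    return best_seq, best_len

# Example with an invented 4-order travel-cost matrix:
c = [[0, 3, 6, 2], [3, 0, 4, 5], [6, 4, 0, 3], [2, 5, 3, 0]]
print(aco_sequence(c))
```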

Relevance:

100.00%

Publisher:

Abstract:

Natural Computing arose as an alternative to classical computation for problems that cannot be solved efficiently in time polynomial in the input size. The discipline either uses nature itself as the computing substrate or simulates natural behaviour to obtain better solutions than classical computing provides. Within Natural Computing, working at the cellular level of representation, lies Membrane Computing. The first abstraction of the membranes found in cells yields Transition P systems. These systems, which could be implemented in biological or electronic media, are the subject of study of this thesis. First, the implementations developed so far are surveyed, focusing on distributed implementations, which are the ones able to exploit the model's intrinsic parallelism and non-determinism. A study of the current state of the stages involved in the evolution of the system leads to the conclusion that the best results are obtained by distributions that balance the two stages, rule application and communication. Defining such distributions requires fully defining the system and every element that influences its transition. Building on the work of other researchers, and alongside it, variations are made to the proxies and distribution architectures so that the dynamic behaviour of Transition P systems is completely defined. Starting from the static knowledge of the P system, its initial configuration, membranes can be distributed over the processors of a cluster to obtain good evolution times, so that the P system's computation completes in the shortest possible time. These distributions must take the cluster architecture, that is, the way its processors are connected, into account. Because four architectures exist, the distribution process depends on the architecture in use, and therefore, despite significant similarities, the distribution algorithms must also be implemented four times. Although the proposers of the architectures studied the optimal time of each one, no distributions existed for them, so this thesis tests all four in order to establish that practice matches the theoretical studies. No deterministic algorithm exists that produces a distribution satisfying the architecture's requirements for an arbitrary P system. Given the complexity of the problem, the use of Natural Computing meta-heuristics is therefore proposed. Genetic Algorithms are proposed first, since some distribution can always be constructed and, on the premise that individuals improve with evolution, the distributions also improve as the algorithm evolves, yielding times close to the theoretical optimum. For the architectures that preserve the tree topology of the P system, new representations and new crossover and mutation operators had to be designed.
A more detailed study of the membranes and of inter-processor communication showed that the total times used for the distribution can be improved and individualized per membrane. The same algorithms were therefore run again, producing distributions with better times. Likewise, Particle Swarm Optimization and Grammatical Evolution with grammar rewriting (a Grammatical Evolution variant introduced in this thesis) were applied to the same task, yielding further kinds of distributions and enabling a comparison of the architectures. Finally, because estimators are used for the application and communication times, and because the membrane tree topology can change non-deterministically as the P system evolves, the system must be monitored and, when necessary, membranes must be redistributed over the processors to keep evolution times reasonable. The thesis explains how, when and where these modifications and redistributions should be performed, and how this recalculation can be carried out.
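
To make the meta-heuristic distribution step concrete, here is a hedged sketch of a genetic algorithm assigning membranes to processors; the fitness function (load balance plus a flat communication penalty for separated parent-child membranes) is an invented stand-in for the thesis's architecture-specific time estimators.

```python
# A sketch GA that assigns membranes to cluster processors. The chromosome is
# a processor index per membrane; fitness is the maximum per-processor
# application load plus a penalty for parent-child membranes split across
# processors. All parameters and the cost model are illustrative.
import random

def evolve(n_membranes, n_procs, app_time, parent, comm_cost,
           pop=40, gens=200, pmut=0.1):
    def fitness(assign):
        load = [0.0] * n_procs
        for m, p in enumerate(assign):
            load[p] += app_time[m]
        comm = sum(comm_cost for m, par in parent.items()
                   if assign[m] != assign[par])
        return max(load) + comm                      # lower is better

    population = [[random.randrange(n_procs) for _ in range(n_membranes)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        survivors = population[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_membranes)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pmut:               # point mutation
                child[random.randrange(n_membranes)] = random.randrange(n_procs)
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)

# Example: 5 membranes (membrane 0 is the skin), 2 processors.
print(evolve(5, 2, app_time=[4, 2, 3, 1, 2],
             parent={1: 0, 2: 0, 3: 1, 4: 1}, comm_cost=1.5))
```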

Relevance:

100.00%

Publisher:

Abstract:

Owing to its wide applicability and ease of use, the analytic hierarchy process (AHP) has been studied extensively over the last 20 years. Recently, the focus has shifted from the stand-alone AHP to applications of integrated AHPs. The five tools most commonly combined with the AHP are mathematical programming, quality function deployment (QFD), meta-heuristics, SWOT analysis, and data envelopment analysis (DEA). This paper reviews the literature on applications of the integrated AHPs. Related articles appearing in international journals from 1997 to 2006 are gathered and analysed to answer three questions: (i) which type of integrated AHP has received the most attention? (ii) in which areas have the integrated AHPs been most widely applied? (iii) are there any inadequacies in the approaches? Where inadequacies are found, improvements and possible future work are recommended. This research not only provides evidence that the integrated AHPs outperform the stand-alone AHP, but also helps researchers and decision makers apply the integrated AHPs effectively.
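
For readers unfamiliar with the stand-alone AHP step that these integrations build on, a minimal sketch follows: priorities are derived from a pairwise comparison matrix via the geometric-mean approximation, with a consistency check. The 3x3 matrix is an invented example, not taken from the review.

```python
# Derive AHP priority weights from a pairwise comparison matrix and check
# consistency. Geometric-mean method approximates the principal eigenvector.
import math

A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]

n = len(A)
gm = [math.prod(row) ** (1 / n) for row in A]        # row geometric means
total = sum(gm)
w = [g / total for g in gm]                          # priority vector

# Consistency: lambda_max from A*w, then CI and CR (random index for n=3 is 0.58).
lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
CI = (lam - n) / (n - 1)
CR = CI / 0.58
print("weights:", [round(x, 3) for x in w], " CR:", round(CR, 3))
```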

Relevance:

100.00%

Publisher:

Abstract:

This thesis reviews the existing manufacturing control techniques and identifies their practical drawbacks when applied in a high-variety, low- and medium-volume environment. It argues that the significant drawbacks inherent in such systems can impair their application in such a manufacturing environment. The key weaknesses identified were: the capacity-insensitive nature of Material Requirements Planning (MRP); the centralised approach to planning and control applied in Manufacturing Resources Planning (MRP II); the fact that Kanban can only be used in repetitive environments; and the inability of Optimised Production Technology (OPT) to deal with transient bottlenecks. Cellular systems, on the other hand, offer advantages in simplifying the control problems of manufacturing, and the thesis reviews systems designed for cellular manufacturing, including Distributed Manufacturing Resources Planning (DMRP) and Flexible Manufacturing System (FMS) controllers. It argues that a newly developed cellular manufacturing control methodology, which is fully automatic, capacity-sensitive and responsive, has the potential to resolve the core manufacturing control problems discussed above. Its development is envisaged within the framework of a DMRP environment, in which each cell is provided with its own MRP II system and decision-making capability. It is a cellular, closed-loop control system that revolves around a single-level Bill-Of-Materials (BOM) structure and hence provides a better linkage between shop-level scheduling activities and the relevant entries in the MPS. This improves the prospect of responding rapidly to changes in the status of manufacturing resources and to incoming enquiries. Moreover, it also permits automatic evaluation of capacity and due-date constraints and hence facilitates the automation of the MPS within such a system. A prototype cellular manufacturing control model was developed to demonstrate the underlying principles and operational logic of the methodology, and was shown to offer significant advantages from the perspective of operational planning and control. Tests proved that the model is capable of producing reasonable due dates and of automating the MPS. The overall performance of the model proved satisfactory and acceptable.
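
The single-level netting step that each cell's MRP II system would perform can be sketched briefly; the lot-for-lot policy, data, and function below are illustrative assumptions, not the thesis's control logic.

```python
# A hedged sketch of single-level MRP netting: gross requirements are netted
# against on-hand stock and scheduled receipts to give planned order releases,
# offset by the cell lead time. Data and lot policy (lot-for-lot) are invented.

def net_requirements(gross, on_hand, receipts, lead_time):
    """gross/receipts: per-period lists; returns planned order releases."""
    planned = [0] * len(gross)
    stock = on_hand
    for t, need in enumerate(gross):
        stock += receipts[t]
        short = need - stock
        if short > 0:
            release = max(0, t - lead_time)   # release early enough to arrive
            planned[release] += short
            stock += short                    # assume the order arrives at t
        stock -= need
    return planned

print(net_requirements(gross=[0, 40, 0, 60, 20],
                       on_hand=30, receipts=[0, 20, 0, 0, 0], lead_time=1))
```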

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes the author's investigation into the spares operation of CompAir BroomWade Ltd. Although the complete system, including the warehousing and distribution functions, was investigated, the thesis concentrates on the provisioning aspect of the spares supply problem. Analysis of the historical data showed significant fluctuations in all measures of system performance. Two Industrial Dynamics simulation models were developed to study this phenomenon. The models showed that any fluctuation in end-customer demand would be amplified as it passed through the distributor and warehouse stock control systems, and the available historical data supported this view of the system's operation. The models were used to determine which parts of the total system could be expected to exert a critical influence on its performance. The lead-time parameters of the supply sector were found to be critical, and further study showed that the manner in which the lead time changed with work-in-progress levels was also an important factor. The problem therefore resolved into the design of a spares manufacturing system that exhibited the appropriate dynamic performance characteristics. The gross level of entity representation inherent in the Industrial Dynamics methodology was found to limit the value of these models in the development of detailed design proposals. Accordingly, an interacting job shop simulation package was developed to allow detailed evaluation of the effect of organisational factors on the performance characteristics of a manufacturing system. The package was used to develop a design for a pilot spares production unit. The need for a manufacturing system to perform successfully under fluctuating demand is not limited to the spares field. Thus, although the spares exercise provides an example of the approach, the concepts and techniques developed have broad application throughout the batch manufacturing industry.
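
The amplification effect the models revealed can be reproduced with a toy two-echelon simulation; the order-up-to policy, smoothing constant and stock levels below are illustrative assumptions, not the thesis's Industrial Dynamics models.

```python
# A toy two-echelon stock-control simulation in the spirit of Industrial
# Dynamics: a step in end-customer demand produces amplified order swings
# upstream (distributor, then warehouse). Parameters are invented.

def simulate(periods=30, target_cover=2.0):
    demand = [100] * 5 + [120] * (periods - 5)       # 20% demand step at t=5
    dist_stock, wh_stock = 200.0, 200.0
    dist_orders, wh_orders = [], []
    forecast = 100.0
    for t in range(periods):
        forecast += 0.3 * (demand[t] - forecast)     # exponential smoothing
        # Distributor: order up to a target cover of the forecast.
        order_d = max(0.0, forecast * target_cover - dist_stock + demand[t])
        dist_stock += order_d - demand[t]
        dist_orders.append(order_d)
        # Warehouse sees the distributor's orders as its own demand.
        order_w = max(0.0, forecast * target_cover - wh_stock + order_d)
        wh_stock += order_w - order_d
        wh_orders.append(order_w)
    return demand, dist_orders, wh_orders

d, do, wo = simulate()
print("peak customer demand:   ", max(d))
print("peak distributor order: ", round(max(do), 1))
print("peak warehouse order:   ", round(max(wo), 1))
```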

Relevance:

100.00%

Publisher:

Abstract:

This work sets out to show how the PHC Manufactor system fits the company under study, Ciclo Fapril, presenting the planning options the system offers, the difficulties the company will face and, where possible, how to overcome the obstacles the system imposes. In a second part, several dispatching heuristics are studied, namely FIFO, Processing Time, EDD, MOR and LOR, to determine which one best suits the company and allows it to meet the agreed due dates. The heuristic with the best results was then used, and some changes were made to the processing times of the work centres to improve their ability to respond to orders. The study concluded that EDD scheduling was the best fit for the company. It also showed that the AS and AT work centres have the lowest productivity and that their productivity should therefore be increased in order to raise overall productivity.
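
The gap between FIFO and EDD that the study reports is easy to demonstrate on a single work centre; the sketch below uses invented job data and is not tied to the PHC Manufactor system.

```python
# Comparing two of the dispatching rules named above on one work centre:
# EDD (earliest due date) minimizes maximum lateness; FIFO follows arrival
# order. Job data are invented.

jobs = [  # (name, arrival order, processing time, due date)
    ("J1", 0, 4, 10), ("J2", 1, 2, 6), ("J3", 2, 6, 18), ("J4", 3, 3, 7),
]

def max_lateness(sequence):
    t, worst = 0, float("-inf")
    for _, _, proc, due in sequence:
        t += proc
        worst = max(worst, t - due)
    return worst

fifo = sorted(jobs, key=lambda j: j[1])
edd = sorted(jobs, key=lambda j: j[3])
print("FIFO max lateness:", max_lateness(fifo))  # completions 4,6,12,15 -> 8
print("EDD  max lateness:", max_lateness(edd))   # completions 2,5,9,15 -> -1
```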

Relevance:

100.00%

Publisher:

Abstract:

Part 4: Transition Towards Product-Service Systems

Relevance:

100.00%

Publisher:

Abstract:

Master's dissertation in Systems and Computing Engineering - Control Systems area, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2001

Relevance:

60.00%

Publisher:

Abstract:

This research studies a hybrid flow shop problem with parallel batch-processing machines in one stage and discrete-processing machines in the other stages, processing jobs of arbitrary sizes. The objective is to minimize the makespan for a set of jobs. The problem is denoted FF|batch1, sj|Cmax. It is formulated as a mixed-integer linear program, and the commercial solver AMPL/CPLEX is used to solve problem instances to optimality. Experimental results show that AMPL/CPLEX requires considerable time to find the optimal solution even for small problems; a 6-job instance takes 2 hours on average. A bottleneck-first-decomposition (BFD) heuristic is proposed in this study to overcome the computational time encountered with the commercial solver. The proposed BFD heuristic is inspired by the shifting bottleneck heuristic: it decomposes the entire problem into three sub-problems and schedules them one by one. The heuristic consists of four major steps: formulating sub-problems, prioritizing sub-problems, solving sub-problems, and re-scheduling. For solving the sub-problems, two heuristic algorithms are proposed: one for scheduling a hybrid flow shop with discrete processing machines, and the other for scheduling parallel batching machines (single stage); both consider job arrival and delivery times. A designed experiment evaluates the effectiveness of the proposed BFD against a set of common heuristics, including a randomized greedy heuristic and five dispatching rules. The results show that the BFD heuristic outperforms all of these algorithms. To assess the quality of the heuristic solutions, a procedure is developed to calculate a lower bound on the makespan for the problem under study; the bound obtained is tighter than bounds developed for related problems in the literature. A meta-search approach based on the Genetic Algorithm concept is developed to evaluate the benefit of further improving the solution obtained from the BFD heuristic. The experiment indicates that it reduces the makespan by 1.93% on average, in negligible time, when the problem size is under 50 jobs.
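
The batching sub-problem at the heart of this decomposition can be illustrated with a simple greedy, shown below; this longest-processing-time-first first-fit is an assumption for illustration, not the paper's BFD heuristic or its sub-problem algorithms.

```python
# Jobs with sizes and processing times are grouped into capacity-feasible
# batches; a batch runs for the longest processing time of its members.
# Sorting longest-first means the first job in each batch fixes its time.

def greedy_batches(jobs, capacity):
    """jobs: list of (size, proc_time); returns list of batch dicts."""
    batches = []
    for size, proc in sorted(jobs, key=lambda j: -j[1]):  # longest first
        for batch in batches:
            if batch["size"] + size <= capacity:
                batch["size"] += size
                batch["jobs"].append((size, proc))        # proc <= batch time
                break
        else:
            batches.append({"size": size, "time": proc, "jobs": [(size, proc)]})
    return batches

jobs = [(3, 8), (2, 5), (4, 7), (1, 2), (2, 6)]           # invented data
bs = greedy_batches(jobs, capacity=5)
print("makespan on one batch machine:", sum(b["time"] for b in bs))
```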

Relevance:

50.00%

Publisher:

Abstract:

Minimizing the makespan of a flow-shop no-wait (FSNW) schedule in which the processing times are randomly distributed is an important NP-complete combinatorial optimization problem; despite this, it appears in only a few papers in the literature. Using the start-interval concept, the problem can be formulated, in a practical way, as a function of the probability of successfully preserving the FSNW constraints throughout the execution of all tasks. With this formulation, for the particular case of 3 machines, this paper presents different heuristic solutions: integrating local optimization steps with insertion procedures, and using genetic algorithms to search the solution space. Computational results and performance evaluations are discussed. Copyright (C) 1998 IFAC.
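
The no-wait constraint is what makes the makespan evaluation distinctive: each job's start offset relative to its predecessor must let it flow through all machines without waiting. The sketch below computes this standard evaluation for an invented 3-machine instance; it is not the paper's start-interval formulation.

```python
# Makespan of a permutation in a no-wait flow shop. The minimum start offset
# between consecutive jobs follows from requiring zero waiting on every
# machine. Data are invented.

def nowait_makespan(seq, p):
    """seq: job order; p[j][k]: processing time of job j on machine k."""
    m = len(p[0])
    def delay(i, j):   # offset between the starts of job i and successor j
        return max(sum(p[i][:k + 1]) - sum(p[j][:k]) for k in range(m))
    return (sum(delay(a, b) for a, b in zip(seq, seq[1:]))
            + sum(p[seq[-1]]))

p = [(2, 3, 2), (4, 1, 3), (1, 2, 2)]   # 3 jobs x 3 machines
print(nowait_makespan([0, 1, 2], p))    # -> 12 for this instance
```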

Relevance:

50.00%

Publisher:

Abstract:

A non-preemptive two-machine flow-shop scheduling problem with uncertain processing times of n jobs is studied. In an uncertain version of a scheduling problem, there may not exist a unique schedule that remains optimal for all possible realizations of the job processing times. We find necessary and sufficient conditions (Theorem 1) for the existence of a dominant permutation that is optimal for all possible realizations of the job processing times. Our computational studies show the percentage of problems solvable under these conditions for randomly generated instances with n ≤ 100. We also show how to use additional information about the processing times of completed jobs during the optimal realization of a schedule (Theorems 2–4). Computational studies for randomly generated instances with n ≤ 50 show the percentage of two-machine flow-shop scheduling problems solvable under the sufficient conditions given in Theorems 2–4.
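
For context, in the deterministic two-machine case Johnson's rule yields an optimal permutation, and a dominant permutation under uncertainty is one that stays optimal for every realization; the sketch below is the textbook deterministic rule with invented data, not the paper's conditions.

```python
# Johnson's rule for the deterministic two-machine flow shop: jobs with
# p1 <= p2 go first in increasing p1; the rest go last in decreasing p2.

def johnson(jobs):
    """jobs: list of (p1, p2). Returns an optimal processing order (indices)."""
    front = sorted((i for i, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda i: jobs[i][0])          # short machine-1 first
    back = sorted((i for i, (a, b) in enumerate(jobs) if a > b),
                  key=lambda i: -jobs[i][1])          # short machine-2 last
    return front + back

def makespan2(order, jobs):
    t1 = t2 = 0
    for i in order:
        t1 += jobs[i][0]
        t2 = max(t2, t1) + jobs[i][1]
    return t2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6)]               # invented data
order = johnson(jobs)
print(order, makespan2(order, jobs))
```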

Relevance:

50.00%

Publisher:

Abstract:

This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch-processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of the PCBs in the batch does not exceed the chamber capacity. PCBs from different production lines arrive dynamically at a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB that arrives last. ESS chambers are expensive and constitute a bottleneck; consequently, the makespan has to be minimized. A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, the number of linear constraints, and run time. A procedure to compute a lower bound is proposed; for sparse problems (i.e. when job ready times are widely dispersed), the lower bounds are close to the optimum. The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, especially to solve problem instances that require prohibitively long run times with a commercial solver. An extensive experimental study was conducted to evaluate the different solution approaches in terms of solution quality and run time. The decomposition approach improved the lower bounds (linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature. For sparse problems, almost all the heuristics report a solution close to the optimum. GRASP outperforms SA, at a higher computational cost. The proposed approaches are viable to implement, as the run time is very short.
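
The cost structure of the ESS batching setting, where a batch runs for the longest PCB time in it and cannot start before its last PCB arrives, can be shown with a simple ready-time-ordered first-fit greedy; this sketch is illustrative only and is not one of the paper's proposed heuristics.

```python
# Greedy sketch: form capacity-feasible batches in ready-time order, then
# assign batches to the earliest-available identical chamber. Data invented.

def schedule(pcbs, capacity, n_chambers):
    """pcbs: list of (ready, size, proc). Returns the makespan."""
    batches = []   # each batch: [ready, total_size, proc_time]
    for ready, size, proc in sorted(pcbs):            # earliest ready first
        for b in batches:
            if b[1] + size <= capacity:
                b[0] = max(b[0], ready)               # batch waits for last PCB
                b[1] += size
                b[2] = max(b[2], proc)                # longest time governs
                break
        else:
            batches.append([ready, size, proc])
    free = [0.0] * n_chambers                         # chamber availability
    for ready, _, proc in sorted(batches):
        k = min(range(n_chambers), key=lambda c: free[c])
        free[k] = max(free[k], ready) + proc
    return max(free)

pcbs = [(0, 2, 4), (1, 3, 6), (2, 2, 3), (5, 4, 5)]
print(schedule(pcbs, capacity=5, n_chambers=2))       # -> 10 for this instance
```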

Relevance:

40.00%

Publisher:

Abstract:

The general flowshop scheduling problem is a production problem in which a set of n jobs must be processed with an identical flow pattern on m machines. In permutation flowshops the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop so as to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, particularly for evaluating schemata directly. The population, initially formed only by schemata, evolves under recombination into a population of well-adapted structures (schemata instantiation). The CGA implemented is based on the classic NEH heuristic, with a local search heuristic used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
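
Since the CGA above builds on the NEH heuristic, a short sketch of plain NEH may be useful; the job data are invented, and the schemata and local-search machinery of the CGA are not reproduced.

```python
# NEH heuristic for permutation flowshop makespan: order jobs by decreasing
# total processing time, then insert each at its best position so far.

def makespan(seq, p):
    """p[j][k]: time of job j on machine k; permutation flowshop recursion."""
    m = len(p[0])
    done = [0] * m                     # completion time on each machine
    for j in seq:
        done[0] += p[j][0]
        for k in range(1, m):
            done[k] = max(done[k], done[k - 1]) + p[j][k]
    return done[-1]

def neh(p):
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:                # best insertion position for each job
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

p = [(5, 9, 8), (9, 3, 10), (9, 4, 5), (4, 8, 8), (3, 5, 6)]  # invented data
s = neh(p)
print(s, makespan(s, p))
```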