45 results for MIP Mathematical Programming Job Shop Scheduling


Relevance: 100.00%

Abstract:

The study identifies the main cost factors of sheet-metal bending methods and the economical application range of each method. The compared methods are manual press-brake bending, robotized press-brake bending, an automated bending machine (panel bender), and a folding machine. The results are applied to the kitchen equipment production of Hackman Metos Oy. The research methods were interviews, a literature review, use of work-study results, application of group technology, and experimental work. The most important cost factor of a bending robot is programming time, which decisively affects its suitability for small-batch production. The economical range of current robotized bending cells is an annual volume of thousands of pieces with batch sizes in the hundreds. The programming and set-up times of a panel bender are very short, and its most important cost factor is the utilization rate. If the utilization rate is high, a panel bender is profitable in small-batch production even at low annual volumes. When introducing a panel bender, product design is an important factor, because parts designed for press-brake bending are not necessarily suitable for bending on a panel bender. The investment cost of a folding machine is lower than that of a press brake, but it imposes many manufacturability constraints on the product. A folding machine is a profitable investment if production includes many sheets whose bends are all in the same direction and which would otherwise require two press-brake operators. Based on the study, the most techno-economical bending method at Hackman Metos Oy is manual press-brake bending. As production volume grows, a panel bender will become more economical than a bending robot. The folding machine has so many manufacturability constraints that it is not suitable for the company's production.
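The batch-size/volume economics described in the abstract can be illustrated with a toy break-even calculation. All cost figures below are invented for illustration and are not values from the study; each method is modelled with a per-batch setup (programming) cost and a per-piece cost.

```python
# Hypothetical break-even comparison of two bending methods.
# Setup and unit costs are illustrative assumptions, not study data.

def annual_cost(setup_cost, unit_cost, annual_volume, batch_size):
    """Total yearly bending cost: one setup per batch plus per-piece work."""
    batches = annual_volume / batch_size
    return batches * setup_cost + annual_volume * unit_cost

# Robotized press brake: high programming (setup) cost, low unit cost.
robot = annual_cost(setup_cost=400.0, unit_cost=0.5,
                    annual_volume=5000, batch_size=250)
# Manual press brake: negligible setup, higher unit cost.
manual = annual_cost(setup_cost=20.0, unit_cost=1.5,
                     annual_volume=5000, batch_size=250)

print(f"robot:  {robot:.0f}")
print(f"manual: {manual:.0f}")
```

With these made-up numbers manual bending wins at this volume, mirroring the abstract's conclusion that robotized bending pays off only at higher annual volumes.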

Relevance: 100.00%

Abstract:

This literature and theory review, carried out between 2006 and 2010 on commission from a machine-shop industry system supplier operating in Central Finland, aimed to form an overall picture of the broad field of production planning and control. The basic research questions concerned the so-called MPC system, meaning that production planning and control must always take into account the whole formed by people, the organization, technologies, and processes. The task of operations management is to balance supply and demand for the company's products so that as few resources as possible are used and needed to meet demand while satisfying customer requirements. Based on the production strategy, it must be possible to build an MPC system with which, and by developing which, production achieves its performance targets with respect to, among other things, cost, quality, speed, reliability, and productivity development. Using a general three-level framework, the work examined, among the "traditional basic MPC solutions", hierarchical, planning- and computation-intensive MRP-based methods as well as JIT/Lean methods based on simplification and speed. This framework comprises: 1) demand and resource management, 2) more detailed capacity and materials management, and 3) more detailed production and procurement control together with the shop-floor level. As "new waves and perspectives" in management and MPC system development, the report also discussed different schools of management as well as the information systems required by the framework above. The most essential conclusion was that, in addition to MRP-based solutions, companies in the discrete manufacturing industries that make complex products to order may also need to exploit more advanced planning and control systems.
In addition, it was observed that alongside "traditional strategies" companies must also adopt information and communication technology strategies. It is important to understand that the perfect MPC system has not yet been invented: it remains each company's task and responsibility to form "its own truth" and build its system on that basis.
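The MRP-based planning logic mentioned in the abstract can be sketched with a minimal single-item netting calculation. The periods, demands, and lot size below are made-up example values, not data from the review.

```python
# A minimal MRP netting sketch for one item: net each period's gross
# demand against projected inventory and order in fixed lot multiples.

def mrp_net_requirements(gross, on_hand, lot_size):
    """Return planned order quantities per period."""
    planned_orders = []
    inventory = on_hand
    for demand in gross:
        net = demand - inventory
        if net > 0:
            # order enough whole lots to cover the shortfall
            lots = -(-net // lot_size)  # ceiling division
            order = lots * lot_size
        else:
            order = 0
        planned_orders.append(order)
        inventory = inventory + order - demand
    return planned_orders

orders = mrp_net_requirements(gross=[20, 40, 0, 55], on_hand=30, lot_size=25)
print(orders)
```

Real MRP systems add lead-time offsetting and multi-level bill-of-material explosion on top of this per-item netting step.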

Relevance: 100.00%

Abstract:

The purpose of this thesis is to develop the production control of a small and medium-sized machine shop and to carry out a controlled deployment of a production control system. The goal is to improve the company's capacity management and delivery reliability and to develop day-to-day production control and coordination. Process-management modelling methods and process development tools are applied to developing the production control process and to deploying the production control system. In addition, the suitability of different production control principles for customer-driven, flexible machine-shop production is studied with the help of the literature. The work was carried out as a qualitative action study. Deploying a production control system makes it possible to improve the control of production capacity and the coordination of production. This, however, requires modelling the production control process and clearly defining the roles and responsibilities of the people involved in control. It is especially critical to get the whole organization to take a positive attitude toward the change.

Relevance: 100.00%

Abstract:

The ultimate goal of any research in the mechanism/kinematics/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by adopting closed-form classical or modern algebraic solution methods or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints.
Based on practical design needs, mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature survey it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the mathematical sense that all parameter values, including the degenerate cases, for which the system is solvable are considered). By adopting the developed solution method to solve the dyadic equations in direct polynomial form with two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved. The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design.
Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on the combination of the two-precision-point formulation and the optimisation (with mathematical programming techniques, or by adopting optimisation methods based on probability and statistics) of substructures, using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
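As an editorial illustration of the exact-synthesis formulation above (n equations in n unknowns), the toy system below is solved with a plain Newton iteration. The two equations are invented stand-ins, not actual dyadic synthesis equations; real synthesis problems lead to larger polynomial systems typically solved with continuation or homotopy methods.

```python
# Toy illustration: n nonlinear equations in n unknowns (here n = 2)
# solved numerically, the computational core of exact synthesis.

def newton_2d(f, jac, x0, tol=1e-10, max_iter=50):
    """Plain Newton iteration for a 2x2 system f(x, y) = (0, 0)."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        a, b, c, d = jac(x, y)        # Jacobian entries, row-major
        det = a * d - b * c
        dx = (d * f1 - b * f2) / det  # solve J * delta = f by Cramer
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
        if abs(f1) + abs(f2) < tol:
            break
    return x, y

# Made-up "loop closure" style equations: x^2 + y^2 = 5 and x*y = 2.
f = lambda x, y: (x * x + y * y - 5.0, x * y - 2.0)
jac = lambda x, y: (2 * x, 2 * y, y, x)
x, y = newton_2d(f, jac, (2.5, 0.5))
print(round(x, 6), round(y, 6))
```

Note that Newton's method finds only the one solution its starting point leads to; continuation/homotopy methods, as discussed in the abstract, are used precisely to enumerate all solutions of such systems.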

Relevance: 100.00%

Abstract:

In this work, mathematical programming models for the structural and operational optimisation of energy systems are developed and applied to a selection of energy technology problems. The studied cases are taken from industrial processes and from large regional energy distribution systems. The models are based on Mixed Integer Linear Programming (MILP), Mixed Integer Non-Linear Programming (MINLP) and on a hybrid approach combining Non-Linear Programming (NLP) and Genetic Algorithms (GA). The optimisation of the structure and operation of energy systems in urban regions is treated in the work. Firstly, distributed energy systems (DES) with different energy conversion units and annual variations of consumer heating and electricity demands are considered. Secondly, district cooling systems (DCS) with cooling demands for a large number of consumers are studied from a long-term planning perspective, based on given predictions of how consumer cooling demand will develop in a region. The work also comprises the development of applications for heat recovery systems (HRS), with the paper machine dryer section HRS taken as an illustrative example. The heat sources in these systems are moist air streams. Models are developed for different types of equipment price functions. The approach is based on partitioning the overall temperature range of the system into a number of temperature intervals in order to take into account the strong nonlinearities due to condensation in the heat recovery exchangers. The influence of parameter variations on the solutions for heat recovery systems is analysed, firstly by varying cost factors and secondly by varying process parameters. Point-optimal solutions obtained with a fixed-parameter approach are compared to robust solutions with given parameter variation ranges.
The work also studies enhanced utilisation of excess heat in heat recovery systems with impingement drying, electricity generation from low-grade excess heat, and the use of absorption heat transformers to raise a stream temperature above the excess heat temperature.
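A miniature version of the structural/operational optimisation idea can be sketched as follows: binary decisions choose which conversion units to build, and continuous dispatch decides how to run them to meet a heat demand at minimum cost. The unit data and demand are invented illustration values, and the brute-force enumeration stands in for what would, in the work's setting, be a proper MILP handed to a solver.

```python
# Structural (build/no-build) plus operational (dispatch) optimisation
# of a tiny energy system by enumeration. All numbers are illustrative.
from itertools import product

units = [  # (name, investment cost, unit operating cost, capacity)
    ("boiler", 100.0, 4.0, 60.0),
    ("chp",    250.0, 2.0, 50.0),
    ("pump",   180.0, 1.0, 30.0),
]
demand = 70.0

best = None
for build in product([0, 1], repeat=len(units)):  # structural decisions
    cap = sum(u[3] for u, b in zip(units, build) if b)
    if cap < demand:
        continue  # this structure cannot meet the demand
    # cheapest dispatch: fill demand from lowest operating cost upward
    remaining, op_cost = demand, 0.0
    for u, b in sorted(zip(units, build), key=lambda t: t[0][2]):
        if not b or remaining <= 0:
            continue
        q = min(u[3], remaining)
        op_cost += q * u[2]
        remaining -= q
    total = sum(u[1] for u, b in zip(units, build) if b) + op_cost
    if best is None or total < best[0]:
        best = (total, build)

print(best)  # (minimum total cost, chosen build vector)
```

An MILP formulation expresses the same structure with binary build variables and continuous dispatch variables, which scales far beyond what enumeration can handle.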

Relevance: 30.00%

Abstract:

The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure, and then incrementally extends the program in steps of adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses.
Our hypothesis is that verification could be introduced early in CS education and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
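The invariant-based workflow described above can be illustrated in miniature: the invariant is written down first and the code is checked against it after every step. This is a runtime-assertion sketch of our own making; Socos itself discharges such conditions statically with a theorem prover, not with assertions.

```python
# Invariant-based programming in miniature: a summation loop whose
# invariant  s == i*(i+1)//2  is stated up front and checked at the
# loop head after every addition of code/state change.

def sum_upto(n):
    """Compute 0 + 1 + ... + n while maintaining the loop invariant."""
    i, s = 0, 0
    assert s == i * (i + 1) // 2      # invariant established
    while i < n:
        i += 1
        s += i
        assert s == i * (i + 1) // 2  # invariant maintained
    assert i == n                     # exit condition
    return s                          # postcondition: n*(n+1)//2

print(sum_upto(10))
```

A static tool would turn each of these assertions into a verification condition and prove it once for all inputs, rather than checking it per execution.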

Relevance: 30.00%

Abstract:

Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues: Structured derivations is a logic-based approach to teaching mathematics, where formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind, and has a simple syntax compared to many other popular languages. 
The aim of using it in instruction is to address algorithms and their implementation in a way that allows focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.

Relevance: 30.00%

Abstract:

The maintenance of electric distribution networks is a topical question for distribution system operators because of the increasing significance of failure costs. In this dissertation the maintenance practices of distribution system operators are analyzed, and a theory for scheduling maintenance activities and reinvestments in distribution components is created. The scheduling is based on the deterioration of components and on failure rates that increase with aging. A dynamic programming algorithm is used to solve the maintenance problem caused by the increasing failure rates of the network. Other drivers of network maintenance, such as environmental and regulatory reasons, are outside the scope of this thesis. Likewise, tree trimming of line corridors and major network disturbances are excluded from the problem optimized here. Four dynamic programming models are presented and tested; they are implemented in VBA. Two different kinds of test networks are used for testing. Because distribution system operators want to operate on larger component groups, the optimal timing for component groups is also analyzed. A maintenance software package is created to apply the presented theories in practice, and an overview of the program is given.
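The replacement-timing idea behind the dissertation's dynamic programming models can be sketched as follows. A component's expected yearly failure cost grows with its age, and each year we choose between keeping it and reinvesting; all cost numbers and the cost function are illustrative assumptions, not values from the work.

```python
# Dynamic programming over (year, component age): minimise expected
# failure cost plus reinvestment cost over a planning horizon.
# fail_cost(age) models failure rates that increase with aging.

def plan(horizon, fail_cost, replace_cost, max_age):
    """Return (cost-to-go at year 0 per age, optimal action table)."""
    INF = float("inf")
    # value[t][a]: minimal cost from year t onward, component age a
    value = [[0.0] * (max_age + 1) for _ in range(horizon + 1)]
    action = [[None] * (max_age + 1) for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):          # backward recursion
        for a in range(max_age + 1):
            keep = INF
            if a + 1 <= max_age:                  # component survives a year
                keep = fail_cost(a) + value[t + 1][a + 1]
            replace = replace_cost + fail_cost(0) + value[t + 1][1]
            if keep <= replace:
                value[t][a], action[t][a] = keep, "keep"
            else:
                value[t][a], action[t][a] = replace, "replace"
    return value[0], action

value0, action = plan(horizon=10, fail_cost=lambda a: 2.0 * a,
                      replace_cost=15.0, max_age=12)
print(action[0][0], action[0][12])  # new vs. aged component in year 0
```

The same recursion extends to component groups by letting the state track the ages of several components at once, which is the direction the dissertation's group-timing analysis takes.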

Relevance: 30.00%

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult because the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel: it describes an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an as small as possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined that can produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
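The node-and-queue model described above can be sketched as a minimal dataflow network: nodes communicate only through FIFO queues, and a node may fire once its input queues hold enough tokens. The three-node pipeline and its static schedule are our own toy example, not RVC-CAL; with fixed token rates (one token in, one out per firing), the firing order can be decided entirely ahead of time, which is the essence of static scheduling.

```python
# A minimal dataflow pipeline: source -> scale -> sink, where the
# queues q1 and q2 are the only communication between the nodes.
from collections import deque

q1, q2 = deque(), deque()
result = []

def source(values):        # produces one token per firing
    q1.append(values.pop(0))

def scale():               # consumes one token from q1, produces to q2
    q2.append(2 * q1.popleft())

def sink():                # consumes one token from q2
    result.append(q2.popleft())

inputs = [1, 2, 3]
# Static schedule: fixed token rates make the firing order computable
# at "compile time"; no run-time firing rules need to be evaluated.
schedule = [source, scale, sink] * len(inputs)
for node in schedule:
    node(inputs) if node is source else node()

print(result)
```

Quasi-static scheduling generalizes this: where token rates are data-dependent, the few remaining run-time decisions select between pre-computed static schedules like the one above.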

Relevance: 20.00%

Abstract:

Summary: Long-term field experiments on fertilization: a comparison of three mathematical models
