25 results for Production lot-scheduling models
Abstract:
Methane-rich landfill gas is generated when biodegradable organic wastes disposed of in landfills decompose under anaerobic conditions. Methane is a significant greenhouse gas, and landfills are its major source in Finland. Methane production in a landfill depends on many factors, such as the composition of the waste and landfill conditions, and it can vary considerably in time and space. Methane generation from waste can be estimated with various models. In this thesis, three spreadsheet applications, a reaction equation and a triangular model for estimating gas generation were introduced. The spreadsheet models introduced are the IPCC Waste Model (2006), Metaanilaskentamalli by Jouko Petäjä of the Finnish Environment Institute, and LandGEM (3.02) of the U.S. Environmental Protection Agency. All of these are based on the first order decay (FOD) method. Gas recovery methods and gas emission measurements were also examined. Vertical wells and horizontal trenches are the most commonly used gas collection systems. Of emission measurement techniques, the chamber method, the tracer method, soil core and isotope measurements, micrometeorological mass-balance and eddy covariance methods, and gas-measuring FID technology were discussed. Methane production at the Ämmässuo landfill of HSY Helsinki Region Environmental Services Authority was estimated with the methane generation models, and the results were compared with the volumes of collected gas. All spreadsheet models underestimated the methane generation at some point. LandGEM with default parameters and Metaanilaskentamalli with modified parameters corresponded best with the gas recovery numbers. One reason for the differences between the estimated and collected volumes could be that the parameter values for degradable organic carbon (DOC) and the fraction of decomposable degradable organic carbon (DOCf) do not represent the real values well enough. Notable uncertainty is associated with the modelling results and model parameters. However, no simple explanation for the discovered differences can be given within this thesis.
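The first order decay (FOD) approach shared by these spreadsheet models can be sketched in a few lines of Python; the parameter names and values below are illustrative assumptions and do not correspond to any of the models mentioned in the abstract.

```python
import math

def fod_methane(waste_per_year, doc, doc_f, mcf, f, k, years):
    """Estimate annual CH4 generation (Gg) with a first order decay (FOD) model.

    waste_per_year : waste mass deposited in each year (Gg)
    doc   : degradable organic carbon fraction of the waste
    doc_f : fraction of DOC that actually decomposes (DOCf)
    mcf   : methane correction factor for the landfill type
    f     : fraction of CH4 in the generated landfill gas
    k     : first order decay rate constant (1/year)
    """
    ch4 = [0.0] * years
    for i, mass in enumerate(waste_per_year):
        # carbon available for anaerobic decomposition from this year's deposit
        ddocm = mass * doc * doc_f * mcf
        for t in range(i + 1, years):
            # share of this deposit that decays during year t
            decayed = ddocm * (math.exp(-k * (t - i - 1)) - math.exp(-k * (t - i)))
            ch4[t] += decayed * f * 16.0 / 12.0  # convert decomposed C to CH4 mass
    return ch4

# Illustrative run: 10 years of deposits of 100 Gg/year, evaluated over 15 years
print(fod_methane([100.0] * 10, doc=0.15, doc_f=0.5, mcf=1.0, f=0.5, k=0.05, years=15))
```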
Abstract:
Electricity distribution network operation (NO) models are challenged, as they are expected to continue to undergo changes during the coming decades in the fairly developed and regulated Nordic electricity market. Network asset managers have to adapt to competitive techno-economical business models for the operation of increasingly intelligent distribution networks. Factors driving the change towards new business models within network operation include: increased investments in distributed automation (DA), regulative frameworks for annual profit limits and quality through outage cost, increasing end-customer demands, climatic changes and the increasing use of data system tools, such as the Distribution Management System (DMS). The doctoral thesis addresses the questions of a) whether conditions and qualifications for competitive markets exist within electricity distribution network operation and b) if so, what the limitations and required business mechanisms are. This doctoral thesis aims to provide an analytical business framework, primarily for electric utilities, for the evaluation and development of dedicated network operation models to meet future market dynamics within network operation. In the thesis, the generic build-up of a business model has been addressed through the strategic business hierarchy levels of mission, vision and strategy, used to define the strategic direction of the business, followed by the planning, management and process execution levels of enterprise strategy execution. Research questions within electricity distribution network operation are addressed at the specified hierarchy levels. The results of the research represent interdisciplinary findings in the areas of electrical engineering and production economics. The main scientific contributions include further development of extended transaction cost economics (TCE) for governance decisions within electricity networks and validation of the usability of the methodology for the electricity distribution industry. Moreover, the DMS benefit evaluations in the thesis, based on outage cost calculations, propose theoretical maximum benefits of DMS applications equalling roughly 25% of the annual outage costs and 10% of the respective operative costs in the case electric utility. Hence, the annual measurable theoretical benefits from the use of DMS applications are considerable. The theoretical results of the thesis are generally validated by surveys and questionnaires.
Researching Manufacturing Planning and Control system and Master Scheduling in a manufacturing firm.
Abstract:
The objective of this thesis is to study the Manufacturing Planning and Control (MPC) system and Master Scheduling (MS) in a manufacturing firm. The study was conducted at Ensto Finland Corporation, which operates in the field of electrical systems and supplies. The paper consists of theoretical and empirical parts. The empirical part is based on weekly work at Ensto and includes inter-firm material analysis, learning and meetings. Master Scheduling is an important module of an MPC system, since it helps transform strategic production plans based on demand forecasts into operational schedules. Furthermore, capacity planning tools can contribute remarkably to production planning: with a Rough-Cut Capacity Planning (RCCP) tool, an MS plan can be critically analyzed in terms of the available key resources in the real manufacturing environment. Currently, there are notable inefficiencies in Ensto's practices: the system cannot take seasonal demand into account or react to market changes in time, which can cause significant lost sales. These inefficiencies could, however, be eliminated through appropriate use of the MS and RCCP tools. To utilize the MS and RCCP tools in Ensto's production environment, further testing in the real production environment is required. Moreover, data accuracy, appropriate commitment to adopting and learning the new tools, and continuous development of functions closely related to MS, such as sales forecasting, need to be ensured.
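A rough-cut capacity check of the kind described above can be sketched as follows; the items, resource profiles and capacities are hypothetical and are not Ensto data.

```python
# Minimal Rough-Cut Capacity Planning (RCCP) sketch: master schedule quantities
# are translated into load on key resources via a bill-of-resources, then
# compared against available capacity per period. All figures are hypothetical.

master_schedule = {  # planned end-item quantities per week
    "week_1": {"enclosure_A": 400, "enclosure_B": 250},
    "week_2": {"enclosure_A": 500, "enclosure_B": 300},
}
bill_of_resources = {  # hours of key resource needed per unit produced
    "enclosure_A": {"assembly_line": 0.05, "molding": 0.08},
    "enclosure_B": {"assembly_line": 0.07, "molding": 0.04},
}
available_capacity = {"assembly_line": 45.0, "molding": 50.0}  # hours per week

for week, quantities in master_schedule.items():
    load = {resource: 0.0 for resource in available_capacity}
    for item, qty in quantities.items():
        for resource, hours_per_unit in bill_of_resources[item].items():
            load[resource] += qty * hours_per_unit
    for resource, hours in load.items():
        status = "OK" if hours <= available_capacity[resource] else "OVERLOAD"
        print(f"{week} {resource}: {hours:.1f} h of {available_capacity[resource]:.1f} h -> {status}")
```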
Abstract:
In any decision making under uncertainty, the goal is usually to minimize the expected cost. The minimization of cost under uncertainty is usually done by optimization. For simple models, the optimization can easily be done using deterministic methods. However, many practical models contain complex and varying parameters that cannot easily be taken into account with the usual deterministic optimization methods. Thus, it is very important to look for other methods that can give insight into such models. The MCMC method is one of the practical methods that can be used for the optimization of stochastic models under uncertainty. The method is based on simulation and provides a general methodology that can be applied to nonlinear and non-Gaussian state models. The MCMC method is very important for practical applications because it is a unified estimation procedure which simultaneously estimates both parameters and state variables. MCMC computes the distribution of the state variables and parameters given the data measurements. The MCMC method is also faster in terms of computing time when compared to other optimization methods. This thesis discusses the use of Markov chain Monte Carlo (MCMC) methods for the optimization of stochastic models under uncertainty. The thesis begins with a short discussion of Bayesian inference, MCMC and stochastic optimization methods. Then an example is given of how MCMC can be applied to maximize production at minimum cost in a chemical reaction process. It is observed that this method performs well in optimizing the given cost function with very high certainty.
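A minimal random-walk Metropolis sketch of the kind of sampling discussed above; the cost surface below is an arbitrary stand-in for the chemical reaction model, not the one used in the thesis.

```python
import math
import random

def cost(x):
    # Stand-in cost surface for an uncertain process model (assumed, for illustration).
    return (x - 2.0) ** 2 + 0.5 * math.sin(5.0 * x)

def metropolis(n_samples=10_000, step=0.5, temperature=0.2, x0=0.0):
    """Random-walk Metropolis chain that concentrates samples in low-cost regions."""
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        # Accept with probability min(1, exp(-(cost increase) / temperature))
        if random.random() < math.exp(min(0.0, -(cost(proposal) - cost(x)) / temperature)):
            x = proposal
        samples.append(x)
    return samples

chain = metropolis()
best = min(chain, key=cost)
print(f"Approximate minimizer: x = {best:.3f}, cost = {cost(best):.3f}")
```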
Abstract:
The maintenance of the electric distribution network is a topical question for distribution system operators because of the increasing significance of failure costs. In this dissertation the maintenance practices of distribution system operators are analyzed, and a theory for scheduling maintenance activities and reinvestments of distribution components is created. The scheduling is based on the deterioration of components and the failure rates that increase with aging. A dynamic programming algorithm is used as the solution method for the maintenance problem caused by the increasing failure rates of the network. Other impacts of network maintenance, such as environmental and regulatory reasons, are outside the scope of this thesis. Furthermore, tree trimming of line corridors and major disturbances of the network are not included in the problem optimized in this thesis. For the optimization, four dynamic programming models are presented and tested. The models are programmed in VBA. Two different kinds of test networks are used for testing. Because electric distribution system operators want to operate with bigger component groups, the optimal timing for component groups is also analyzed. A maintenance software package is created to apply the presented theories in practice, and an overview of the program is presented.
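A compact dynamic programming sketch in the spirit of the keep-or-replace scheduling problem described above; the failure-rate growth, costs and horizon are invented for illustration and are not the thesis models.

```python
# Dynamic programming over a planning horizon: in each year either keep the
# aging component (paying an expected failure cost that grows with age) or
# replace it (paying a reinvestment cost and resetting the age).
# All numbers are illustrative assumptions.
from functools import lru_cache

HORIZON = 15            # planning years
REPLACEMENT_COST = 10.0
FAILURE_COST = 4.0      # cost per expected failure
BASE_RATE = 0.05        # failures/year for a new component
RATE_GROWTH = 1.25      # failure-rate multiplier per year of age

def failure_rate(age):
    return BASE_RATE * RATE_GROWTH ** age

@lru_cache(maxsize=None)
def best_cost(year, age):
    """Minimum expected cost from this year onward, given the component age."""
    if year == HORIZON:
        return 0.0
    keep = FAILURE_COST * failure_rate(age) + best_cost(year + 1, age + 1)
    replace = REPLACEMENT_COST + FAILURE_COST * failure_rate(0) + best_cost(year + 1, 1)
    return min(keep, replace)

print(f"Minimum expected cost over the horizon: {best_cost(0, 0):.2f}")
```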
Abstract:
In just-in-time, assemble-to-order production environments, the scheduling of material requirements and production tasks, though difficult, is of paramount importance. Different enterprise resource planning solutions with master scheduling functionality have been created to ease this problem and work as expected unless there is a problem in the material flow. This case-based candidate's thesis introduces a tool for the Microsoft Dynamics AX multisite environment that can be used by site managers and production coordinators to get an overview of the current open sales order base and to prioritize production in the event of material shortages in order to avoid part-deliveries.
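A toy sketch of the kind of prioritization logic such a tool could apply; the order data, field names and scoring rule below are invented for illustration and are not part of the Dynamics AX tool itself.

```python
from datetime import date

# Hypothetical open sales order lines; in the actual tool this data would come
# from the open sales order base in Dynamics AX.
open_orders = [
    {"order": "SO-1001", "due": date(2024, 5, 3), "qty": 20, "on_hand_covered": 20},
    {"order": "SO-1002", "due": date(2024, 5, 1), "qty": 50, "on_hand_covered": 10},
    {"order": "SO-1003", "due": date(2024, 5, 2), "qty": 15, "on_hand_covered": 0},
]

def priority(order):
    # Earliest due date first; among equal dates, prefer orders that can be
    # shipped complete so part-deliveries are avoided (assumed rule).
    fully_covered = order["on_hand_covered"] >= order["qty"]
    return (order["due"], not fully_covered)

for o in sorted(open_orders, key=priority):
    coverage = min(1.0, o["on_hand_covered"] / o["qty"])
    print(f'{o["order"]} due {o["due"]} material coverage {coverage:.0%}')
```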
Abstract:
Third party logistics, and third party logistics providers and the services they offer, have grown substantially in the last twenty years. Even though there has been extensive research on third party logistics providers, and regular industry reviews within the logistics industry, closer research in the area of partner selection and network models in the third party logistics industry is missing. The perspective taken in this study was to expand network research to logistics service providers as the focal firm in the network. The purpose of the study is to analyze partnerships and networks in the third party logistics industry in order to define how networks are utilized in third party logistics markets, what the reasons for the partnerships have been, and whether there are benefits for the third party logistics provider that can be achieved through building networks and partnerships. The theoretical framework of this study was formed from common theories for studying networks and partnerships, in accordance with models of horizontal and vertical partnerships. The theories applied to the framework and context of this study included the strategic network view and the resource-based view. Applying these two network theories to the position and networks of third party logistics providers in an industrial supply chain, a theoretical model for analyzing horizontal and vertical partnerships with the TPL provider in focus was structured. The empirical analysis of TPL partnerships consisted of a qualitative document analysis of 33 partnership examples involving companies present in the Finnish TPL markets. For the research, existing documents providing secondary data on the types, reasons and outcomes of the partnerships were searched from available online sources. The findings of the study revealed that third party logistics providers are evident in horizontal and vertical interactions varying in geographical coverage and in the depth and nature of the relationship. Partnership decisions were found to be made for resource-based reasons as well as from strategic aspects. The results of the partnerships discovered in this study included cost reduction and improved effectiveness in partnerships aimed at improving existing services. In addition, in partnerships created for innovative service extension, differentiation and the creation of additional value were found to emerge as results of the cooperation. It can be concluded that benefits and competitive advantage can be created by building partnerships in order to expand the service offering and seek synergies.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an as small as possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which can produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
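A minimal dataflow-style sketch: two nodes connected by a FIFO queue, each firing only when its firing rule is satisfied, driven by a simple static schedule. This is a generic Python illustration, not RVC-CAL code.

```python
from collections import deque

# Minimal dataflow network: a producer node feeds a consumer node through a
# FIFO queue; a node may fire only when its firing rule (enough input tokens)
# is satisfied.
queue = deque()

def producer_fire(value):
    queue.append(value)          # produce one token

def consumer_can_fire():
    return len(queue) >= 2       # firing rule: needs two tokens

def consumer_fire():
    a, b = queue.popleft(), queue.popleft()
    print(f"consumed {a} + {b} = {a + b}")

# A simple static schedule for this pair: fire the producer, then fire the
# consumer whenever its rule holds. Quasi-static scheduling pre-computes such
# firing sequences and leaves only data-dependent choices to run time.
for i in range(6):
    producer_fire(i)
    if consumer_can_fire():
        consumer_fire()
```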
Abstract:
A business model is a structural frame of an organization that can bring significant benefits and competitive advantage when structured properly. The aim of this paper was to observe and describe the development of business models and to identify the factors and elements of a business model that play a key role from the perspective of organizational sustainability. This thesis strives to show what a truly sustainable business model should look like and what its main characteristics are. Additionally, some recommendations that could help a company build a sustainable and balanced business model are presented in this work. The intention was to become theoretically, and to some extent practically, acquainted with such new business models as the open business model and the sustainable business model. Achieving long-term sustainability in a company was in a central role and used as the main criterion when constructing the sustainable business model structure. The main research question in this study is: what should a firm consider in order to develop a profitable and sustainable business model? This study is qualitative in nature and was conducted using content analysis as the main research method. The perspective of the target data in this study is the outlook of its producers on how sustainability is reached in an organization through the business model and which practices are important and have to be taken into account. The material was gathered mainly from secondary sources, and the theoretical framework was built entirely on secondary data. The secondary data, consisting mostly of dissertations, academic writings, cases, academic journals and academic books, have been analyzed from the sustainability perspective. As a result, it became evident that the structure of a business model and its implementation along with a strategy are often what lead companies to success. However, for the most part, the overall business environment decides and delimits how the most optimal business model should be constructed in order to be effective and sustainable. The key factors and elements of a business model leading an organization to sustainability should be examined through a triple bottom line perspective, where the key dimensions are environmental, social and economic. It was concluded that these dimensions should be valued as equal in order to attain lasting overall sustainability, contradicting the traditional business perspective in which profit production is seen as the only main goal of a business.
Abstract:
The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on one hand, it sheds light on how nature works and how it processes information and, on the other hand, it provides guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second one studies reaction systems, a modeling framework whose rationale is built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic in the past 15 years. Two main modelling frameworks were initially proposed at the end of the 1990s to capture ciliates' gene assembly process, namely the intermolecular model and the intramolecular model. They were followed by other model proposals such as template-based assembly and DNA rearrangement pathways recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs. We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) are another nature-inspired modeling framework studied in this thesis. The reaction systems rationale is based upon two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems are a complementary modeling framework to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of the facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than the dynamics of concentrations), makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism, based on a novel concept of dominance graph that captures the competition for resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady state, and periodicity, to do model checking of reaction system based models. We prove that the complexity of the decision problems related to these properties varies from P to NP-complete and coNP-complete to PSPACE-complete. We further focus on the mass conservation relation in an RS, introduce the conservation dependency graph to capture the relation between the species, and propose an algorithm to list the conserved sets of a given reaction system.
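The basic reaction system semantics referred to above can be illustrated in a few lines; the reactions below are abstract placeholders rather than the heat shock response model.

```python
# Minimal reaction system semantics: a reaction (R, I, P) is enabled in a state
# if all reactants R are present and no inhibitor in I is; the successor state
# is the union of the products of all enabled reactions. The reactions below
# are abstract placeholders, not the heat shock response model.
reactions = [
    (frozenset({"a"}), frozenset({"c"}), frozenset({"b"})),       # a -> b unless c is present
    (frozenset({"b"}), frozenset(),      frozenset({"c"})),       # b -> c
    (frozenset({"c"}), frozenset({"a"}), frozenset({"a", "b"})),  # c -> a, b unless a is present
]

def step(state):
    """Compute the successor state of a reaction system."""
    result = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            result |= products
    return frozenset(result)

# Iterating the step function reveals properties such as periodicity.
state = frozenset({"a"})
for i in range(6):
    print(f"step {i}: {sorted(state)}")
    state = step(state)
```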