104 results for MIP Mathematical Programming Job Shop Scheduling

at Indian Institute of Science - Bangalore - India


Relevance: 100.00%

Abstract:

Plywood manufacture includes two fundamental stages. The first is to peel or separate logs into veneer sheets of different thicknesses. The second is to assemble veneer sheets into finished plywood products. At the first stage a decision must be made as to the number of different veneer thicknesses to be peeled and what these thicknesses should be. At the second stage, choices must be made as to how these veneers will be assembled into final products to meet certain constraints while minimizing wood loss. These decisions present a fundamental management dilemma. Costs of peeling, drying, storage, handling, etc. can be reduced by decreasing the number of veneer thicknesses peeled. However, a reduced set of thickness options may make it infeasible to produce the variety of products demanded by the market or increase wood loss by requiring less efficient selection of thicknesses for assembly. In this paper the joint problem of veneer choice and plywood construction is formulated as a nonlinear integer programming problem. A relatively simple optimal solution procedure is developed that exploits special problem structure. This procedure is examined on data from a British Columbia plywood mill. Restricted to the existing set of veneer thicknesses and plywood designs used by that mill, the procedure generated a solution that reduced wood loss by 79 percent, thereby increasing net revenue by 6.86 percent. Additional experiments were performed that examined the consequences of changing the number of veneer thicknesses used. Extensions are discussed that permit the consideration of more than one wood species.
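
For illustration only, the following is a much-simplified, linearized sketch of the kind of veneer-selection and assembly model the abstract describes, with the candidate thicknesses fixed in advance (the paper itself treats the thickness values as decisions, which is what makes its formulation nonlinear). All symbols are illustrative, not the paper's.

```latex
% y_t = 1 if candidate thickness t is peeled; x_{pt} = sheets of thickness t in product p
% h_p = required panel thickness, d_p = demand, K = allowed number of distinct thicknesses
\min \; \sum_{p} d_p \Big( \sum_{t} t\, x_{pt} - h_p \Big)            % demand-weighted wood loss
\;\;\text{s.t.}\;\; \sum_{t} t\, x_{pt} \ge h_p \;\; \forall p,        % each panel meets its target thickness
\quad x_{pt} \le M\, y_t \;\; \forall p, t,                            % only peeled thicknesses may be used
\quad \sum_{t} y_t \le K, \qquad x_{pt} \in \mathbb{Z}_{\ge 0},\; y_t \in \{0,1\}.
```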

Relevance: 100.00%

Abstract:

Motivated by certain situations in manufacturing systems and communication networks, we look into the problem of maximizing the profit in a queueing system with a linear reward and cost structure and having a choice of selecting the streams of Poisson arrivals according to an independent Markov chain. We view the system as an MMPP/GI/1 queue and seek to maximize the profits by optimally choosing the stationary probabilities of the modulating Markov chain. We consider two formulations of the optimization problem. The first one (which we call the PUT problem) seeks to maximize the profit per unit time, whereas the second one considers the maximization of the profit per accepted customer (the PAC problem). In each of these formulations, we explore three separate problems. In the first one, the constraints come from bounding the utilization of an infinite capacity server; in the second one the constraints arise from bounding the mean queue length of the same queue; and in the third one the finite capacity of the buffer is reflected as a set of constraints. In the problems bounding the utilization factor of the queue, the solutions are given by essentially linear programs, while the problems with mean queue length constraints are linear programs if the service is exponentially distributed. The problems modeling the finite capacity queue are non-convex programs for which global maxima can be found. There is a rich relationship between the solutions of the PUT and PAC problems. In particular, the PUT solutions always make the server work at a utilization factor that is no less than that of the PAC solutions.
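
As a rough illustration of the "essentially linear program" case, the PUT problem under a utilization bound might be sketched as below. The symbols (lambda_i for the phase arrival rate, E[S] for the mean service time, r for the reward per accepted customer, c_i for the cost rate) are illustrative rather than the paper's notation, and the sketch treats the stationary probabilities pi_i as free design variables.

```latex
\max_{\pi} \;\; \sum_{i} \pi_i \big( r\,\lambda_i - c_i \big)                         % profit per unit time
\;\;\text{s.t.}\;\; \sum_{i} \pi_i\, \lambda_i\, \mathrm{E}[S] \;\le\; \rho_{\max},   % utilization bound
\quad \sum_{i} \pi_i = 1, \qquad \pi_i \ge 0.
```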

Relevance: 100.00%

Abstract:

This article considers a semi-infinite mathematical programming problem with equilibrium constraints (SIMPEC), defined as a semi-infinite mathematical programming problem with complementarity constraints. We establish necessary and sufficient optimality conditions for (SIMPEC). We also formulate Wolfe- and Mond-Weir-type dual models for (SIMPEC) and establish weak, strong and strict converse duality theorems for (SIMPEC) and the corresponding dual problems under invexity assumptions.
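
For readers unfamiliar with the acronym, a schematic form of such a problem (notation illustrative, not the article's) is:

```latex
\min_{x} \; f(x)
\;\;\text{s.t.}\;\; g(x, t) \le 0 \;\; \forall\, t \in T \quad (T \text{ an infinite index set}),
\quad G_i(x) \ge 0,\;\; H_i(x) \ge 0,\;\; G_i(x)\, H_i(x) = 0, \quad i = 1, \dots, m.
```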

Relevance: 100.00%

Abstract:

The study deals with the irrigation planning of the Cauvery river basin in peninsular India, which is extensively developed in the downstream reaches and has a high potential for development in the upper reaches. A four-reservoir system is modelled on a monthly basis using a linear programming (LP) formulation to find optimum cropping patterns, subject to land, water and downstream release constraints, and applied to the Cauvery basin. Two objectives considered in the model, maximizing net economic benefits and maximizing irrigated cropped area, are analysed in the context of multiobjective planning, and the trade-offs are discussed.
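
A heavily condensed sketch of such a cropping-pattern LP for a single reservoir might look like the following, with illustrative symbols only (A_k area under crop k, b_k net benefit per unit area, w_{km} monthly crop water requirement, I_m inflow, S_m storage, R_m release, L culturable land, K reservoir capacity); the paper's four-reservoir, two-objective model is considerably richer.

```latex
\max \; \sum_{k} b_k A_k \quad \text{(or } \max \sum_{k} A_k \text{ for the irrigated-area objective)}
\;\;\text{s.t.}\;\; \sum_{k} A_k \le L,
\quad S_m = S_{m-1} + I_m - \sum_{k} w_{km} A_k - R_m, \quad 0 \le S_m \le K, \quad R_m \ge R_m^{\min}, \quad m = 1,\dots,12.
```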

Relevance: 100.00%

Abstract:

This paper addresses the problem of minimizing the number of columns with superdiagonal nonzeroes (viz., spiked columns) in a square, nonsingular linear system of equations which is to be solved by Gaussian elimination. The exact focus is on a class of min-spike heuristics in which the rows and columns of the coefficient matrix are first permuted to block lower-triangular form. Subsequently, the number of spiked columns in each irreducible block and their heights above the diagonal are minimized heuristically. We show that if every column in an irreducible block has exactly two nonzeroes, i.e., is a doubleton, then there is exactly one spiked column. Further, if there is at least one non-doubleton column, there is always an optimal permutation of rows and columns under which none of the doubleton columns are spiked. An analysis of a few benchmark linear programs suggests that singleton and doubleton columns can abound in practice. Hence, it appears that the results of this paper can be practically useful. In the rest of the paper, we develop a polynomial-time min-spike heuristic based on the above results and on a graph-theoretic interpretation of doubleton columns.
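
A minimal sketch of the column statistics this analysis relies on, assuming the coefficient matrix is available in (or convertible to) SciPy CSC format; the toy matrix is hypothetical. In the graph-theoretic view mentioned above, each doubleton column can be read as an edge joining its two nonzero rows.

```python
import numpy as np
from scipy.sparse import csc_matrix

def column_profile(A):
    """Count singleton and doubleton columns of a sparse coefficient matrix."""
    A = csc_matrix(A)
    nnz_per_col = np.diff(A.indptr)          # nonzeroes in each column (CSC column pointers)
    singletons = int(np.sum(nnz_per_col == 1))
    doubletons = int(np.sum(nnz_per_col == 2))
    return singletons, doubletons

# Toy matrix with one singleton and two doubleton columns (hypothetical data).
A = csc_matrix(np.array([[1.0, 2.0, 0.0],
                         [0.0, 3.0, 4.0],
                         [0.0, 0.0, 5.0]]))
print(column_profile(A))   # -> (1, 2)
```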

Relevance: 100.00%

Abstract:

This paper studies the problem of constructing robust classifiers when the training data are plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP), which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP as a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations were exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Due to this efficient modeling of uncertainty, the resulting classifiers achieve higher classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state of the art in many cases.
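
To make the "convex second-order cone program" concrete, here is a minimal CVXPY sketch of a robust margin classifier in which each uncertain point's constraint is tightened by a norm term. It uses the simpler Chebyshev-style tightening rather than the paper's Bernstein relaxation, and all data and parameter values are hypothetical.

```python
import cvxpy as cp
import numpy as np

# Toy data: means mu[i], per-point covariance square roots S[i], labels y[i] in {-1, +1}.
rng = np.random.default_rng(0)
n, d = 20, 2
mu = np.vstack([rng.normal(+1.0, 0.5, (n // 2, d)), rng.normal(-1.0, 0.5, (n // 2, d))])
y = np.array([+1.0] * (n // 2) + [-1.0] * (n // 2))
S = [0.2 * np.eye(d) for _ in range(n)]

kappa, C = 1.5, 1.0                 # kappa grows with the required confidence level
w, b = cp.Variable(d), cp.Variable()
xi = cp.Variable(n, nonneg=True)    # slack variables

# Each margin constraint is tightened by a norm term, giving second-order cone constraints.
cons = [y[i] * (mu[i] @ w + b) >= 1 - xi[i] + kappa * cp.norm(S[i] @ w, 2) for i in range(n)]
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)), cons)
prob.solve()
print(w.value, b.value)
```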

Relevance: 100.00%

Abstract:

In a detailed model for reservoir irrigation taking into account the soil moisture dynamics in the root zone of the crops, the data set for reservoir inflow and rainfall in the command area will usually be of sufficient length to enable their variations to be described by probability distributions. However, the potential evapotranspiration of the crop itself depends on the characteristics of the crop and the reference evaporation, the quantification of both being associated with a high degree of uncertainty. The main purpose of this paper is to propose a mathematical programming model to determine the annual relative yield of crops and its reliability, for a single reservoir meant for irrigation of multiple crops, incorporating variations in inflow, rainfall in the command area, and crop consumptive use. The inflow to the reservoir and rainfall in the reservoir command area are treated as random variables, whereas potential evapotranspiration is modeled as a fuzzy set. The model's application is illustrated with reference to an existing single-reservoir system in Southern India.
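
A minimal sketch of representing an uncertain potential evapotranspiration value as a triangular fuzzy number, together with the alpha-cut interval that such fuzzy optimization models typically work with; the fuzzy shape and the numbers below are hypothetical, not taken from the paper.

```python
def triangular_membership(x, low, mode, high):
    """Membership grade of x in the triangular fuzzy number (low, mode, high)."""
    if x <= low or x >= high:
        return 0.0
    if x <= mode:
        return (x - low) / (mode - low)
    return (high - x) / (high - mode)

def alpha_cut(alpha, low, mode, high):
    """Interval of values whose membership is at least alpha (0 < alpha <= 1)."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

# e.g. monthly potential evapotranspiration in mm/day (hypothetical values)
print(triangular_membership(5.2, 4.0, 5.0, 6.5))
print(alpha_cut(0.8, 4.0, 5.0, 6.5))
```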

Relevance: 50.00%

Abstract:

A Batch Processing Machine (BPM) is one which processes a number of jobs simultaneously as a batch with common beginning and ending times. Also, a BPM, once started, cannot be interrupted (pre-emption is not allowed). This research is motivated by a BPM in the steel casting industry. There are three main stages in any steel casting industry, viz., the pre-casting stage, the casting stage and the post-casting stage. A quick overview of the entire process is shown in Figure 1. There are two BPMs: (1) the melting furnace in the pre-casting stage and (2) the Heat Treatment Furnace (HTF) in the post-casting stage of the steel casting manufacturing process. This study focuses on scheduling the latter, namely the HTF. The heat-treatment operation is one of the most important stages of steel casting; it determines the final properties that enable components to perform under demanding service conditions such as large mechanical loads, high temperatures and anti-corrosive processing. In general, different types of castings have to undergo more than one type of heat-treatment operation, so the total heat-treatment processing times vary. To have better control, castings are primarily classified into a number of job families based on the alloy type, such as low-alloy and high-alloy castings. For technical reasons such as the type of alloy, the temperature level and the expected combination of heat-treatment operations, castings from different families cannot be processed together in the same batch.
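
A minimal first-fit sketch of forming heat-treatment batches that never mix job families, which is the family-incompatibility constraint described above; the job data and the single-dimension capacity model are illustrative, not the paper's.

```python
from collections import defaultdict

def form_batches(jobs, capacity):
    """Group castings into batches, never mixing families.

    jobs: list of (job_id, family, size) tuples, where 'size' is the furnace
    space a casting needs; 'capacity' is the furnace capacity per batch.
    """
    by_family = defaultdict(list)
    for job in jobs:
        by_family[job[1]].append(job)

    batches = []
    for family, members in by_family.items():
        current, used = [], 0
        for job_id, _, size in members:
            if used + size > capacity:           # furnace full: close this batch
                batches.append((family, current))
                current, used = [], 0
            current.append(job_id)
            used += size
        if current:
            batches.append((family, current))
    return batches

jobs = [(1, "low-alloy", 3), (2, "low-alloy", 4), (3, "high-alloy", 5), (4, "low-alloy", 2)]
print(form_batches(jobs, capacity=6))
```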

Relevance: 40.00%

Abstract:

In this paper we propose a general Linear Programming (LP) based formulation and solution methodology for obtaining optimal solutions to the load distribution problem in divisible load scheduling. We exploit the power of the versatile LP formulation to propose algorithms that yield exact solutions to several very general load distribution problems for which either no solutions or only heuristic solutions were available. We consider both star (single-level tree) networks and linear daisy chain networks, having processors equipped with front-ends, that form the generic models for several important network topologies. We consider arbitrary processing node availability or release times and general models for communication delays and computation time that account for constant overheads such as start-up times in communication and computation. The optimality of the LP based algorithms is proved rigorously.
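
A minimal sketch of the basic star-network divisible-load LP, solved with scipy.optimize.linprog: load fractions alpha_i are sent to the workers in a fixed order and the makespan T is minimized. Release times and the fixed overheads of the paper's more general model are omitted, and all numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

z = np.array([0.2, 0.3, 0.1, 0.4])   # per-unit communication times (hypothetical)
w = np.array([1.0, 0.8, 1.2, 0.9])   # per-unit computation times (hypothetical)
n = len(z)

# Variables: load fractions alpha_1..alpha_n followed by the makespan T.
c = np.zeros(n + 1)
c[-1] = 1.0                                    # minimize T
A_ub = np.zeros((n, n + 1))
for i in range(n):
    A_ub[i, :i + 1] = z[:i + 1]                # time spent sending fractions 1..i
    A_ub[i, i] += w[i]                         # worker i's own computation time
    A_ub[i, -1] = -1.0                         # ... must finish by T
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])   # fractions sum to 1
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
print(res.x[:n], res.x[-1])                    # optimal fractions and makespan
```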

Relevance: 40.00%

Abstract:

The present work concerns the static scheduling of jobs on parallel identical batch processors with incompatible job families to minimize the total weighted tardiness. This scheduling problem arises in burn-in operations and wafer fabrication in semiconductor manufacturing. We decompose the problem into two stages, batch formation and batch scheduling, as in the literature. An Ant Colony Optimization (ACO) based algorithm, called the ATC-BACO algorithm, is developed in which ACO is used to solve the batch scheduling problem. Our computational experiments show that the proposed ATC-BACO algorithm performs better than the best available traditional dispatching rule, the ATC-BATC rule.
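
For context, this is a sketch of the standard Apparent Tardiness Cost (ATC) index that dispatching rules of the ATC-BATC family build on: jobs are ranked by the index and batches are filled from the top of the ranking. The parameter values below are illustrative only.

```python
import math

def atc_index(t, due, weight, proc, k, p_bar):
    """ATC priority of a job at decision time t.

    k is a look-ahead parameter and p_bar the average processing time of the
    waiting jobs; a higher index means a more urgent job.
    """
    slack = max(due - proc - t, 0.0)
    return (weight / proc) * math.exp(-slack / (k * p_bar))

jobs = [dict(due=10.0, weight=2.0, proc=3.0), dict(due=6.0, weight=1.0, proc=4.0)]
p_bar = sum(j["proc"] for j in jobs) / len(jobs)
ranking = sorted(jobs, reverse=True,
                 key=lambda j: atc_index(0.0, j["due"], j["weight"], j["proc"], 2.0, p_bar))
print(ranking)
```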

Relevance: 40.00%

Abstract:

We consider the problem of minimizing the total completion time on a single batch processing machine. The set of jobs to be scheduled can be partitioned into a number of families, where all jobs in the same family have the same processing time. The machine can process at most B jobs simultaneously as a batch, and the processing time of a batch is equal to the processing time of the longest job in the batch. We analyze the properties of an optimal schedule and develop a dynamic programming algorithm of polynomial time complexity when the number of job families is fixed. The research is motivated by the problem of scheduling burn-in ovens in the semiconductor industry.
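
A much-reduced sketch of such a dynamic program for the special case of a single family of identical jobs (the paper handles several families): the recursion uses the fact that starting a batch delays every job still outstanding by its processing time.

```python
def min_total_completion_time(n, p, B):
    """DP sketch: n identical jobs of length p on a batch machine of capacity B.

    f[j] = minimum total completion time over the j jobs still to be scheduled.
    """
    INF = float("inf")
    f = [0.0] + [INF] * n
    for j in range(1, n + 1):
        for b in range(1, min(B, j) + 1):
            # Starting a batch of size b adds p to the completion time of all
            # j outstanding jobs (the b in this batch plus the j - b after it).
            f[j] = min(f[j], j * p + f[j - b])
    return f[n]

print(min_total_completion_time(n=7, p=2.0, B=3))   # toy instance
```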

Relevance: 40.00%

Abstract:

In this paper we address a scheduling problem for minimising total weighted tardiness. The motivation for the paper comes from the automobile gear manufacturing process. We consider the bottleneck operation of the heat treatment stage of gear manufacturing. Real-life scenarios such as unequal release times, incompatible job families, non-identical job sizes and allowance for job splitting have been considered. A mathematical model taking into account dynamic starting conditions has been developed. Due to the NP-hard nature of the problem, a few heuristic algorithms have been proposed. The performance of the proposed heuristic algorithms is evaluated: (a) in comparison with the optimal solution for small problem instances, and (b) in comparison with an 'estimated optimal solution' for large problem instances. Extensive computational analyses reveal that the proposed heuristic algorithms are capable of consistently obtaining near-optimal solutions (that is, close to the statistically estimated optimum) in very reasonable computational time.

Relevance: 40.00%

Abstract:

In achieving higher instruction level parallelism, software pipelining increases the register pressure in the loop. The usefulness of the generated schedule may be restricted to cases where the register pressure is less than the number of available registers; otherwise, spill instructions need to be introduced. But scheduling these spill instructions in the compact schedule is a difficult task. Several heuristics have been proposed to schedule spill code. These heuristics may generate more spill code than necessary, and scheduling them may necessitate increasing the initiation interval. We model the problem of register allocation with spill code generation and scheduling in software pipelined loops as a 0-1 integer linear program. The formulation minimizes the increase in initiation interval (II) by optimally placing spill code and simultaneously minimizes the amount of spill code produced. To the best of our knowledge, this is the first integrated formulation for register allocation, optimal spill code generation and scheduling for software pipelined loops. The proposed formulation performs better than the existing heuristics by preventing an increase in II in 11.11% of the loops and generating 18.48% less spill code on average among the loops extracted from the Perfect Club and SPEC benchmarks, with a moderate increase in compilation time.
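
As a schematic of the shape such a 0-1 formulation can take (illustrative notation only, not the paper's variables): a large weight M makes the increase in II the dominant term, the second term counts spill operations s_{v,c} for value v in kernel cycle c, and register-capacity constraints must hold in every cycle of the kernel, alongside the usual dependence and resource constraints for the inserted loads and stores.

```latex
\min \;\; M \cdot \Delta II \;+\; \sum_{v}\sum_{c} s_{v,c}
\quad\text{s.t.}\quad \big(\text{register pressure in cycle } c\text{, after spills}\big) \;\le\; R \quad \forall c,
\qquad s_{v,c} \in \{0,1\}, \;\; \Delta II \in \mathbb{Z}_{\ge 0}.
```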