77 results for Mixed-integer nonlinear programming (MINLP)
at Indian Institute of Science - Bangalore - India
Abstract:
The optimal design of a multiproduct batch chemical plant is formulated as a multiobjective optimization problem, and the resulting constrained mixed-integer nonlinear program (MINLP) is solved by the nondominated sorting genetic algorithm approach (NSGA-II). By putting bounds on the objective function values, the constrained MINLP problem can be solved efficiently by NSGA-II to generate a set of feasible nondominated solutions in the range desired by the decision-maker in a single run of the algorithm. The evolution of the entire set of nondominated solutions helps the decision-maker to make a better choice of the appropriate design from among several alternatives. The large set of solutions also provides a rich source of excellent initial guesses for solving the same problem by alternative approaches to achieve any specific target for the objective functions.
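The core idea of bounding the objective values and then extracting nondominated points can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's NSGA-II implementation; the candidate objective vectors and the bounds are hypothetical placeholders.

    # Minimal sketch (not the paper's NSGA-II code): keep only candidates whose
    # objective vectors fall within the decision-maker's bounds, then extract
    # the nondominated set. Objective values here are placeholders.
    def within_bounds(objs, lower, upper):
        return all(lo <= f <= up for f, lo, up in zip(objs, lower, upper))

    def dominates(a, b):
        # Minimization: a dominates b if it is no worse in every objective
        # and strictly better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def nondominated(points):
        return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

    candidates = [(3.0, 9.1), (2.5, 10.4), (4.2, 8.0), (6.0, 7.5)]  # (cost, idle time), illustrative
    bounded = [p for p in candidates if within_bounds(p, (0.0, 0.0), (5.0, 12.0))]
    print(nondominated(bounded))
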
Abstract:
The problem of controlling the vibration pattern of a driven string is considered. The basic question dealt with here is to find the control forces which reduce the energy of vibration of the string over a prescribed portion of its length while maintaining the energy outside that length above a desired value. The criterion of keeping the response outside the region of energy reduction as close to the original response as possible is introduced as an additional constraint. The slack unconstrained minimization technique (SLUMT) has been successfully applied to solve the above problem. The effect of varying the phase of the control forces (which results in a six-variable control problem) is then studied. The nonlinear programming techniques that have been effectively used to handle problems involving many variables and constraints therefore offer a powerful tool for the solution of vibration control problems.
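As a rough illustration of the slack-variable idea described above (not the driven-string model itself), the sketch below converts an inequality constraint on the "outside" energy into an equality via a slack variable, folds it into an unconstrained objective with a quadratic penalty, and minimizes it with SciPy; the energy functions and constants are placeholders.

    # Toy illustration (not the string model from the paper): an inequality
    # constraint is rewritten as an equality with a slack variable and folded
    # into an unconstrained objective with a quadratic penalty, in the spirit
    # of the slack unconstrained minimization technique.
    import numpy as np
    from scipy.optimize import minimize

    def energy_inside(f):                 # vibration energy to be reduced (placeholder)
        return np.sum((f - 1.0) ** 2)

    def energy_outside(f):                # energy elsewhere, to be kept >= e_min (placeholder)
        return 2.0 + np.sum(f ** 2)

    e_min, rho = 6.0, 100.0               # required outside energy, penalty weight

    def penalized(z):                     # z = (control forces f, slack s)
        f, s = z[:-1], z[-1]
        eq = energy_outside(f) - e_min - s ** 2   # inequality written as an equality
        return energy_inside(f) + rho * eq ** 2

    res = minimize(penalized, x0=np.append(np.zeros(3), 1.0), method="BFGS")
    f_opt = res.x[:-1]
    print(f_opt, energy_inside(f_opt), energy_outside(f_opt))
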
Abstract:
In achieving higher instruction-level parallelism, software pipelining increases the register pressure in the loop. The usefulness of the generated schedule may be restricted to cases where the register pressure is less than the number of available registers; otherwise, spill instructions need to be introduced. But scheduling these spill instructions in the compact schedule is a difficult task. Several heuristics have been proposed to schedule spill code. These heuristics may generate more spill code than necessary, and scheduling that code may necessitate increasing the initiation interval. We model the problem of register allocation with spill code generation and scheduling in software-pipelined loops as a 0-1 integer linear program. The formulation minimizes the increase in initiation interval (II) by optimally placing spill code and simultaneously minimizes the amount of spill code produced. To the best of our knowledge, this is the first integrated formulation for register allocation, optimal spill code generation, and scheduling for software-pipelined loops. The proposed formulation performs better than the existing heuristics, preventing an increase in II in 11.11% of the loops and generating 18.48% less spill code on average over the loops extracted from the Perfect Club and SPEC benchmarks, with a moderate increase in compilation time.
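A generic 0-1 ILP skeleton in the same spirit (heavily weighting the II-increase term so that spill volume is only a secondary objective) might look as follows; the variables and the single stand-in constraint are hypothetical, not the paper's formulation, and the sketch assumes the PuLP package is available.

    # Generic 0-1 ILP skeleton (illustrative only, not the paper's formulation):
    # the II-increase term is weighted heavily so spill volume is minimized only
    # as a secondary objective. Requires PuLP.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    stages = range(3)                                   # hypothetical extra-II choices
    regs   = range(4)                                   # hypothetical spill decisions
    prob = LpProblem("spill_scheduling_toy", LpMinimize)
    extra_ii = [LpVariable(f"ii_{s}", cat=LpBinary) for s in stages]
    spill    = [LpVariable(f"spill_{r}", cat=LpBinary) for r in regs]

    BIG = 1000                                          # make any II increase dominate
    prob += BIG * lpSum(extra_ii) + lpSum(spill)        # objective
    prob += lpSum(spill) + lpSum(extra_ii) >= 2         # stand-in resource constraint
    prob.solve()
    print([v.value() for v in extra_ii], [v.value() for v in spill])
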
Abstract:
This paper studies the problem of designing a logical topology over a wavelength-routed all-optical network (AON) physical topology. The physical topology consists of the nodes and fiber links in the network. On an AON physical topology, we can set up lightpaths between pairs of nodes, where a lightpath represents a direct optical connection without any intermediate electronics. The set of lightpaths along with the nodes constitutes the logical topology. For a given network physical topology and traffic pattern (relative traffic distribution among the source-destination pairs), our objective is to design the logical topology and the routing algorithm on that topology so as to minimize the network congestion while constraining the average delay seen by a source-destination pair and the amount of processing required at the nodes (degree of the logical topology). We will see that ignoring the delay constraints can result in fairly convoluted logical topologies with very long delays. On the other hand, in all our examples, imposing them results in a minimal increase in congestion. While the number of wavelengths required to embed the resulting logical topology on the physical all-optical topology is also a constraint in general, we find that in many cases of interest this number can be quite small. We formulate the combined logical topology design and routing problem described above (ignoring the constraint on the number of available wavelengths) as a mixed integer linear programming problem, which we then solve for a number of cases of a six-node network. Since this programming problem is computationally intractable for larger networks, we split it into two subproblems: logical topology design, which is computationally hard and will probably require heuristic algorithms, and routing, which can be solved by a linear program. We then compare the performance of several heuristic topology design algorithms (that do take wavelength assignment constraints into account) against that of randomly generated topologies, as well as lower bounds derived in the paper.
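For the routing subproblem, a congestion-minimizing linear program over a fixed logical topology can be set up as in the toy sketch below; the three-node topology, demands, and candidate lightpath routes are hypothetical, and the sketch assumes PuLP is installed.

    # Toy routing LP for a fixed logical topology (illustrative, not the paper's
    # six-node model): split each demand over candidate lightpath routes so the
    # maximum link load (congestion) is minimized. Requires PuLP.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum

    demands = {("A", "C"): 10.0, ("B", "C"): 6.0}        # hypothetical traffic matrix
    paths = {                                            # candidate lightpath routes
        ("A", "C"): [["A-B", "B-C"], ["A-C"]],
        ("B", "C"): [["B-C"]],
    }
    edges = {"A-B", "B-C", "A-C"}

    prob = LpProblem("routing_toy", LpMinimize)
    cong = LpVariable("congestion", lowBound=0)
    x = {(d, i): LpVariable(f"x_{d[0]}{d[1]}_{i}", lowBound=0, upBound=1)
         for d in demands for i in range(len(paths[d]))}

    prob += cong                                         # minimize the maximum link load
    for d in demands:                                    # each demand is fully routed
        prob += lpSum(x[d, i] for i in range(len(paths[d]))) == 1
    for e in edges:                                      # load on every lightpath <= congestion
        prob += lpSum(demands[d] * x[d, i]
                      for d in demands for i in range(len(paths[d]))
                      if e in paths[d][i]) <= cong
    prob.solve()
    print(cong.value(), {k: v.value() for k, v in x.items()})
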
Abstract:
Electronic exchanges are double-sided marketplaces that allow multiple buyers to trade with multiple sellers, with aggregation of demand and supply across the bids to maximize the revenue in the market. Two important issues in the design of exchanges are (1) trade determination (determining the number of goods traded between any buyer-seller pair) and (2) pricing. In this paper we address the trade determination issue for one-shot, multi-attribute exchanges that trade multiple units of the same good. The bids are configurable, with separable additive price functions over the attributes, each of which is continuous and piecewise linear. We model trade determination as mixed integer programming problems for different possible bid structures and show that even in two-attribute exchanges, trade determination is NP-hard for certain bid structures. We also make some observations on the pricing issues that are closely related to the mixed integer formulations.
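A much-simplified trade-determination model (single attribute, constant per-unit prices rather than the piecewise-linear configurable bids of the paper) can be written as a small integer program; all bid data below are hypothetical and the sketch assumes PuLP.

    # Tiny trade-determination example (illustrative only): integer units traded
    # between each buyer-seller pair, maximizing surplus = buyer valuation minus
    # seller ask, subject to each side's quantity limits. Requires PuLP.
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpInteger

    buyers  = {"b1": (12.0, 5), "b2": (10.0, 3)}     # (per-unit valuation, max units)
    sellers = {"s1": (8.0, 4), "s2": (9.5, 6)}       # (per-unit ask, max units)

    prob = LpProblem("trade_determination_toy", LpMaximize)
    q = {(b, s): LpVariable(f"q_{b}_{s}", lowBound=0, cat=LpInteger)
         for b in buyers for s in sellers}

    prob += lpSum((buyers[b][0] - sellers[s][0]) * q[b, s] for b in buyers for s in sellers)
    for b in buyers:                                 # buyer quantity limits
        prob += lpSum(q[b, s] for s in sellers) <= buyers[b][1]
    for s in sellers:                                # seller quantity limits
        prob += lpSum(q[b, s] for b in buyers) <= sellers[s][1]
    prob.solve()
    print({k: v.value() for k, v in q.items()})
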
Abstract:
This paper is concerned with the reliability optimization of a spatially redundant system, subject to various constraints, by using nonlinear programming. The constrained optimization problem is converted into a sequence of unconstrained optimization problems by using a penalty function. The new problem is then solved by the conjugate gradient method. The advantages of this method are highlighted.
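The penalty-plus-conjugate-gradient recipe can be illustrated on a toy redundancy-allocation problem; this is only a sketch under assumed data, not the spatially redundant system studied in the paper, and redundancy levels are treated as continuous so that the conjugate gradient method applies.

    # Illustrative sketch (not the paper's model): a reliability constraint is
    # handled with a quadratic penalty, and the resulting sequence of
    # unconstrained problems is minimized by the conjugate gradient method.
    import numpy as np
    from scipy.optimize import minimize

    comp_rel = np.array([0.8, 0.7, 0.9])        # component reliabilities (assumed)
    unit_cost = np.array([2.0, 3.0, 1.5])       # cost per redundant unit (assumed)
    target = 0.99                               # required system reliability

    def system_rel(x):
        return np.prod(1.0 - (1.0 - comp_rel) ** x)   # parallel redundancy per stage

    def penalized(x, rho):
        gap = max(0.0, target - system_rel(x))
        return unit_cost @ x + rho * gap ** 2

    x = np.ones(3)
    for rho in (1e2, 1e4, 1e6):                 # increasing penalty weights
        x = minimize(lambda v: penalized(v, rho), x, method="CG").x
    print(x, system_rel(x), unit_cost @ x)
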
Abstract:
This article addresses the problem of how to select the optimal combination of sensors and how to determine their optimal placement in a surveillance region in order to meet the given performance requirements at a minimal cost for a multimedia surveillance system. We propose to solve this problem by obtaining a performance vector, with its elements representing the performances of subtasks, for a given input combination of sensors and their placement. Then we show that the optimal sensor selection problem can be converted into an Integer Linear Programming (ILP) problem by using a linear model for computing the optimal performance vector corresponding to a sensor combination, that is, the performance vector corresponding to the optimal placement of that sensor combination. To demonstrate the utility of our technique, we design and build a surveillance system consisting of PTZ (Pan-Tilt-Zoom) cameras and active motion sensors for capturing faces. Finally, we show experimentally that optimal placement of sensors based on the design maximizes the system performance.
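A stripped-down version of the selection step might look like the following ILP sketch, where binary variables choose sensors and a linear model maps the choice to per-subtask performance; the sensor types, costs, and performance numbers are invented for illustration, and PuLP is assumed.

    # Simplified sketch of the selection step (not the paper's model): binary
    # variables pick sensors, a linear model maps the selection to per-subtask
    # performance, and cost is minimized subject to required performance levels.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    cost = {"ptz_cam": 5.0, "fixed_cam": 2.0, "motion": 1.0}    # hypothetical costs
    perf = {                                  # contribution to (detect, capture) subtasks
        "ptz_cam": (0.6, 0.8),
        "fixed_cam": (0.4, 0.3),
        "motion": (0.5, 0.0),
    }
    required = (0.9, 0.8)                     # assumed performance requirements

    prob = LpProblem("sensor_selection_toy", LpMinimize)
    x = {s: LpVariable(f"x_{s}", cat=LpBinary) for s in cost}
    prob += lpSum(cost[s] * x[s] for s in cost)
    for k in range(len(required)):            # each subtask meets its requirement
        prob += lpSum(perf[s][k] * x[s] for s in cost) >= required[k]
    prob.solve()
    print({s: x[s].value() for s in cost})
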
Abstract:
This study considers the scheduling problem observed in the burn-in operation of semiconductor final testing, where jobs are associated with release times, due dates, processing times, sizes, and non-agreeable release times and due dates. The burn-in oven is modeled as a batch-processing machine which can process a batch of several jobs as long as the total size of the jobs does not exceed the machine capacity; the processing time of a batch is equal to the longest processing time among all the jobs in the batch. Due to the importance of on-time delivery in semiconductor manufacturing, the objective of this problem is to minimize total weighted tardiness. We formulate the scheduling problem as an integer linear programming model and empirically show its computational intractability. Due to this intractability, we propose a few simple greedy heuristic algorithms and a meta-heuristic algorithm, simulated annealing (SA). A series of computational experiments is conducted to evaluate the performance of the proposed heuristic algorithms in comparison with the exact solution on various small-size problem instances and with an estimated optimal solution on various real-life, large-size problem instances. The computational results show that the SA algorithm, with an initial solution obtained using our own proposed greedy heuristic algorithm, consistently finds a robust solution in a reasonable amount of computation time.
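A bare-bones simulated annealing skeleton over job sequences is sketched below; the batching rules and the weighted-tardiness evaluation from the paper are replaced by a placeholder cost function, so this only shows the accept/reject and cooling mechanics.

    # Bare-bones simulated annealing skeleton over job sequences (illustrative;
    # the paper's batching and weighted-tardiness evaluation are replaced by a
    # placeholder cost function).
    import math, random

    def cost(seq):                     # placeholder for batch-and-evaluate tardiness
        return sum(i * j for i, j in enumerate(seq))

    def neighbour(seq):
        s = seq[:]
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]        # swap two jobs
        return s

    def anneal(jobs, temp=10.0, cooling=0.995, iters=5000):
        cur, cur_cost = jobs[:], cost(jobs)
        for _ in range(iters):
            cand = neighbour(cur)
            delta = cost(cand) - cur_cost
            if delta < 0 or random.random() < math.exp(-delta / temp):
                cur, cur_cost = cand, cost(cand)
            temp *= cooling
        return cur, cur_cost

    print(anneal(list(range(8))))
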
Abstract:
The overall performance of random early detection (RED) routers in the Internet is determined by the settings of their associated parameters. The non-availability of a functional relationship between RED performance and its parameters makes it difficult to apply optimization techniques directly in order to tune the RED parameters. In this paper, we formulate a generic optimization framework using a stochastically bounded delay metric to dynamically adapt the RED parameters. The constrained optimization problem thus formulated is solved using traditional nonlinear programming techniques; here, we implement the barrier and penalty function approaches. We adopt a second-order nonlinear optimization framework and propose a novel four-timescale stochastic approximation algorithm to estimate the gradient and Hessian of the barrier and penalty objectives and to update the RED parameters. A convergence analysis of the proposed algorithm is briefly sketched. We perform simulations to evaluate the performance of our algorithm with both barrier and penalty objectives and compare these with RED and a variant of it from the literature. We observe an improvement in performance using our proposed algorithm over both RED and the above variant.
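One common way to estimate gradients of such noisy objectives is simultaneous-perturbation stochastic approximation (SPSA); the sketch below applies a single-timescale SPSA update to a synthetic penalized objective (the paper's scheme is a four-timescale, second-order algorithm), with both parameters kept in normalized units.

    # SPSA sketch on a synthetic penalized objective (stand-in for noisy
    # delay/loss measurements); not the paper's four-timescale algorithm.
    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([0.4, 0.1])             # hypothetical "good" normalized settings

    def noisy_cost(theta):                    # synthetic measured penalty objective
        return float(np.sum((theta - target) ** 2) + rng.normal(scale=0.002))

    theta = np.array([0.9, 0.9])
    for k in range(1, 1001):
        a = 0.2 / k ** 0.7                    # step-size schedule
        c = 0.05 / k ** 0.25                  # perturbation-size schedule
        delta = rng.choice([-1.0, 1.0], size=2)
        ghat = (noisy_cost(theta + c * delta) - noisy_cost(theta - c * delta)) / (2 * c) * delta
        theta = np.clip(theta - a * ghat, 0.0, 1.0)
    print(theta)                              # drifts toward the target settings
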
Abstract:
This paper is devoted to the improvement of the measuring range of the inverted V-notch (IVN) weir, a practical linear sharp-crested weir designed earlier by the writers. The range of linearity of the IVN can be considerably enhanced (by more than 200%) by the addition of a rectangular weir of width 0.265W (W = half crest width) at a depth of 0.735d (d = altitude of IVN) above the crest of the weir, which is equivalent to providing at this depth two vertical straight lines to the IVN, resulting in a chimney-shaped profile; hence, the modified weir is named the chimney weir. The design parameters of the weir, that is, the linearity range, base flow depth, and datum constant, which fixes the reference plane of the weir, are estimated by solving the nonlinear programming problem using a numerical optimization procedure. For flows through this weir above a depth of 0.22d, the discharges are proportional to the depth of flow measured above a reference plane situated at 0.08d above the weir crest for all heads in the range 0.22d <= h <= 2.43d, within a maximum percentage deviation of ±1.5 from the theoretical discharge. A significant result of the analysis is that the same linear head-discharge relationship governing the flow through the IVN is also valid for the extended chimney weir. Experiments with three different chimney weirs show excellent agreement with the theory, giving a constant average coefficient of discharge for each weir.
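Reading off the quoted relationship numerically: within 0.22d <= h <= 2.43d the discharge varies linearly with (h - 0.08d). The proportionality constant and the weir altitude in the snippet below are purely illustrative.

    # Numerical reading of the linear head-discharge relation quoted above;
    # the constant k and the altitude d are illustrative, not from the paper.
    d, k = 0.30, 1.0                          # weir altitude (m) and illustrative constant

    def discharge(h):
        if not 0.22 * d <= h <= 2.43 * d:
            raise ValueError("outside the linear range of the chimney weir")
        return k * (h - 0.08 * d)             # linear in head above the reference plane

    for h in (0.22 * d, 0.5 * d, 2.43 * d):
        print(h, discharge(h))
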
Abstract:
Combinatorial exchanges are double-sided marketplaces with multiple sellers and multiple buyers trading with the help of combinatorial bids. The allocation and other associated problems in such exchanges are among the hardest to solve of all economic mechanisms. It has been shown that the problems of surplus maximization and volume maximization in combinatorial exchanges are inapproximable even with free disposal. In this paper, the surplus maximization problem is formulated as an integer linear programming problem and we propose a Lagrangian-relaxation-based heuristic to find a near-optimal solution. We develop computationally efficient tâtonnement mechanisms for clearing combinatorial exchanges in which the Lagrangian multipliers can be interpreted as the prices of the items set by the exchange in each iteration. Our mechanisms satisfy the individual-rationality and budget-nonnegativity properties. The computational experiments performed on representative data sets show that the proposed heuristic produces a feasible solution with a negligible optimality gap.
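The price interpretation of the multipliers can be sketched as a subgradient-style tâtonnement loop: prices move in the direction of excess demand with a diminishing step. The excess-demand routine below is a placeholder for solving the relaxed subproblems, and the numbers are invented.

    # Minimal subgradient-style price update (illustrative): the Lagrange
    # multipliers act as per-item prices and move in the direction of excess
    # demand each iteration.
    def excess_demand(prices):                 # placeholder: demand minus supply per item
        return [4.0 - prices[0], 2.0 - 0.5 * prices[1]]

    prices = [0.0, 0.0]
    for t in range(1, 101):
        step = 1.0 / t                         # diminishing step size
        g = excess_demand(prices)
        prices = [max(0.0, p + step * gi) for p, gi in zip(prices, g)]
    print(prices)                              # prices move toward market-clearing levels
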
Abstract:
Due to developments in semiconductor technology, fault-tolerance is important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, rather than guaranteeing that deadlines are always met, for general-purpose systems it is important to minimize the average execution time (AET) while ensuring fault-tolerance. For a given job and a soft (transient) error probability, we define mathematical formulas for the AET that include bus communication overhead for both voting (active replication) and rollback-recovery with checkpointing (RRC). Further, for a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize the AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC, (2) finding the number of processors and the job-to-processor assignment when using voting, and (3) choosing the fault-tolerance scheme (voting or RRC) per job and defining its usage for each job. Experiments demonstrate significant savings in AET.
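A back-of-the-envelope checkpointing model conveys the kind of trade-off the RRC optimization resolves; the formula below is a standard textbook-style approximation with illustrative constants, not the paper's AET expressions (which also account for bus communication).

    # Simple checkpointing trade-off (illustrative, not the paper's formulas):
    # with n equidistant checkpoints, each segment of a job of length T is
    # retried until it runs without a soft error, and every attempt pays a
    # checkpointing overhead c.
    import math

    T, c, err_per_unit = 100.0, 2.0, 0.01      # job length, checkpoint cost, error rate

    def aet(n):
        seg = T / n
        p_ok = math.exp(-err_per_unit * seg)   # segment finishes without an error
        return n * (seg + c) / p_ok            # geometric number of attempts per segment

    best = min(range(1, 21), key=aet)
    print(best, aet(best))                     # checkpoint count with the lowest AET
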
Abstract:
This paper presents a method for the placement of Phasor Measurement Units (PMUs) that ensures the monitoring of vulnerable buses, which are identified through transient stability analysis of the overall system. Real-time monitoring of phase angles across different nodes indicates the proximity to instability, and this purpose is best served if the PMUs are placed at the more vulnerable buses. The issue is to identify the key buses where the PMUs should be placed when transient stability prediction is taken into account under various disturbances. An Integer Linear Programming technique with equality and inequality constraints is used to find the optimal placement set with the key buses identified from the transient stability analysis. Results on the IEEE 14-bus system are presented to illustrate the proposed approach.
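A textbook covering-style ILP conveys the flavour of the placement step; the five-bus adjacency and the set of vulnerable buses below are hypothetical, the constraint set is not necessarily the one used in the paper, and PuLP is assumed.

    # Covering-style ILP sketch for PMU placement (not necessarily the paper's
    # constraints): every identified vulnerable bus must be observed by a PMU
    # at itself or at an adjacent bus.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    adjacency = {1: [2, 5], 2: [1, 3, 4, 5], 3: [2, 4], 4: [2, 3, 5], 5: [1, 2, 4]}
    vulnerable = [2, 3, 5]                    # hypothetical result of the stability study

    prob = LpProblem("pmu_placement_toy", LpMinimize)
    x = {b: LpVariable(f"x_{b}", cat=LpBinary) for b in adjacency}
    prob += lpSum(x.values())                 # minimize the number of PMUs
    for b in vulnerable:                      # bus b observed by itself or a neighbour
        prob += x[b] + lpSum(x[n] for n in adjacency[b]) >= 1
    prob.solve()
    print([b for b in adjacency if x[b].value() == 1])
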
Abstract:
Most existing WCET estimation methods directly estimate the execution time, ET, in cycles. We propose to study ET as a product of two factors, ET = IC * CPI, where IC is the instruction count and CPI is the cycles per instruction. Directly estimating ET may lead to a highly pessimistic estimate, since implicitly these methods may be using the worst-case IC and the worst-case CPI. We hypothesize that there exists a functional relationship between CPI and IC such that CPI = f(IC). This is ascertained by computing the covariance matrix and studying the scatter plots of CPI versus IC. IC and CPI values are obtained by running benchmarks with a large number of inputs using the cycle-accurate architectural simulator SimpleScalar on two different architectures. It is shown that the benchmarks can be grouped into different classes based on the CPI-versus-IC relationship. For some benchmarks, such as FFT and FIR, both IC and CPI are almost constant irrespective of the input. Other benchmarks exhibit a direct or an inverse relationship between CPI and IC; in such cases, one can predict CPI for a given IC as CPI = f(IC). We derive the theoretical worst-case IC for a program, denoted SWIC, using integer linear programming (ILP) and estimate WCET as SWIC * f(SWIC). However, if CPI decreases sharply with IC, then the measured maximum cycle count is observed to be a better estimate. For certain other benchmarks, the CPI-versus-IC relationship is either random or CPI remains constant with varying IC; in such cases, WCET is estimated as the product of SWIC and the measured maximum CPI. The proposed method is observed to yield tighter WCET estimates than Chronos, a static WCET analyzer, for most benchmarks on the two architectures considered in this paper.
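The estimate SWIC * f(SWIC) can be reproduced on synthetic data in a few lines: fit a linear CPI-versus-IC relationship from profiled runs and evaluate it at a worst-case instruction count. All numbers below, including SWIC, are hypothetical.

    # Small illustration of the WCET estimate SWIC * f(SWIC) on synthetic data.
    import numpy as np

    ic  = np.array([1.0e6, 1.5e6, 2.0e6, 2.5e6, 3.0e6])   # measured instruction counts
    cpi = np.array([1.42, 1.38, 1.35, 1.33, 1.31])         # measured cycles per instruction

    slope, intercept = np.polyfit(ic, cpi, 1)              # linear fit CPI = f(IC)

    def f(n):                                              # fitted CPI model
        return slope * n + intercept

    swic = 3.4e6                                           # hypothetical worst-case IC from an ILP
    print("WCET estimate (cycles):", swic * f(swic))
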
Abstract:
In metropolitan cities, public transportation service plays a vital role in the mobility of people, and new routes have to be introduced frequently due to the fast development of the city in terms of population growth and city size. Whenever a new route is introduced or the frequency of buses is increased, the non-revenue kilometers covered by the buses increase, since the depot and the route starting/ending points are at different places. These non-revenue kilometers, or dead kilometers, depend on the distance between the depot and the route starting/ending points. Dead kilometers not only result in revenue loss but also increase the operating cost because of the extra kilometers covered by buses. Reducing dead kilometers is necessary for the economic growth of the public transportation system. Therefore, in this study, attention is focused on minimizing dead kilometers by optimizing the allocation of buses to depots depending upon the shortest distance between the depot and the route starting/ending points. We also consider depot capacity and the time period of operation during the allocation of buses to ensure parking safety and proper maintenance of buses. A mathematical model, a mixed integer program, is developed considering the aforementioned parameters and applied to the routes presently operated by the Bangalore Metropolitan Transport Corporation (BMTC) in order to obtain an optimal allocation of buses to depots. A database of dead kilometers for all schedules of the BMTC depots is generated using the Form-4 (trip sheet) of each schedule to analyze depot-wise and division-wise dead kilometers. This study also suggests alternative locations where depots can be located to reduce dead kilometers. Copyright (C) 2015 John Wiley & Sons, Ltd.
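A compact sketch of such an allocation model is shown below: binary variables assign each schedule to a depot, total dead kilometres are minimised, and depot capacities are respected. The distances and capacities are invented, and the sketch assumes PuLP.

    # Compact sketch of the allocation model (illustrative data, not BMTC's):
    # assign each schedule to one depot so that total dead kilometres are
    # minimised while respecting depot parking capacities.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    dead_km = {                                # distance from depot to route start/end
        ("r1", "d1"): 3.0, ("r1", "d2"): 7.0,
        ("r2", "d1"): 6.0, ("r2", "d2"): 2.0,
        ("r3", "d1"): 4.0, ("r3", "d2"): 4.5,
    }
    capacity = {"d1": 2, "d2": 1}
    routes = ["r1", "r2", "r3"]
    depots = ["d1", "d2"]

    prob = LpProblem("dead_km_toy", LpMinimize)
    x = {(r, d): LpVariable(f"x_{r}_{d}", cat=LpBinary) for r in routes for d in depots}
    prob += lpSum(dead_km[r, d] * x[r, d] for r in routes for d in depots)
    for r in routes:
        prob += lpSum(x[r, d] for d in depots) == 1      # every schedule gets a depot
    for d in depots:
        prob += lpSum(x[r, d] for r in routes) <= capacity[d]
    prob.solve()
    print({k: v.value() for k, v in x.items() if v.value() == 1})
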