982 results for Mixed-integer quadratically-constrained programming
Abstract:
Optical networks based on passive-star couplers and employing WDM have been proposed for deployment in local and metropolitan areas. These networks suffer from splitting, coupling, and attenuation losses. Since there is an upper bound on transmitter power and a lower bound on receiver sensitivity, optical amplifiers are usually required to compensate for the power losses mentioned above. Due to the high cost of amplifiers, it is desirable to minimize their total number in the network. However, an optical amplifier has constraints on the maximum gain and the maximum output power it can supply; thus, optical amplifier placement becomes a challenging problem. In fact, the general problem of minimizing the total amplifier count is a mixed-integer nonlinear problem. Previous studies have attacked the amplifier-placement problem by adding the “artificial” constraint that all wavelengths present at a particular point in a fiber be at the same power level. This constraint simplifies the problem into a solvable mixed-integer linear program. Unfortunately, this artificial constraint can miss feasible solutions that have a lower amplifier count but do not satisfy the equally-powered-wavelengths constraint. In this paper, we present a method to solve the minimum-amplifier-placement problem while avoiding the equally-powered-wavelengths constraint. We demonstrate that, by allowing signals to operate at different power levels, our method can reduce the number of amplifiers required.
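The power budget underlying this problem can be sketched in a few lines. The toy greedy below is an illustration only (not the paper's MINLP method): it walks a single fiber in the dB domain, where losses and gains are additive, and places an amplifier just before the signal would fall below receiver sensitivity, capping each boost by the device's maximum gain and maximum output power. All parameter values are hypothetical.

```python
def place_amplifiers(length_km, alpha_db_per_km, p_tx_dbm, p_min_dbm,
                     max_gain_db, p_max_dbm, step_km=1.0):
    """Greedy sketch: walk the fiber; amplify just before the signal
    would drop below receiver sensitivity.  All values in dB / dBm,
    so attenuation and gain are simple additions."""
    power = p_tx_dbm
    placements = []
    pos = 0.0
    while pos < length_km:
        if power - alpha_db_per_km * step_km < p_min_dbm:
            # device limits: cannot exceed max gain or max output power
            gain = min(max_gain_db, p_max_dbm - power)
            power += gain
            placements.append(pos)
        power -= alpha_db_per_km * step_km
        pos += step_km
    return placements, power
```

With 120 km of fiber at 0.25 dB/km, 0 dBm launch power, and −20 dBm sensitivity, this places a single amplifier at the 80 km mark; the paper's point is that jointly optimizing per-wavelength power levels can do better than such per-fiber reasoning.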
Abstract:
Routing and wavelength assignment (RWA) is an important problem that arises in wavelength-division-multiplexed (WDM) optical networks. Previous studies have solved many variations of this problem under the assumption of perfect conditions regarding the power of a signal. In this paper, we investigate this problem while allowing for degradation of routed signals by components such as taps, multiplexers, and fiber links. We assume that optical amplifiers are preplaced. We investigate the problem of routing the maximum number of connections while maintaining proper power levels. The problem is formulated as a mixed-integer nonlinear program, and two-phase hybrid solution approaches employing two different heuristics are developed.
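As a point of reference for such heuristics, the classical decomposition of RWA routes first and then assigns wavelengths first-fit. The sketch below is that generic textbook heuristic (not the paper's two-phase method, and it ignores power degradation entirely): it enforces only wavelength continuity along precomputed routes, with hypothetical link names.

```python
def first_fit_rwa(routes, num_wavelengths):
    """First-fit wavelength assignment over precomputed routes.
    routes: list of link-name lists; returns one wavelength index per
    route, or None when the connection is blocked."""
    used = {}                      # link -> set of wavelengths in use
    assignment = []
    for route in routes:
        for w in range(num_wavelengths):
            # wavelength continuity: w must be free on every link
            if all(w not in used.get(link, set()) for link in route):
                for link in route:
                    used.setdefault(link, set()).add(w)
                assignment.append(w)
                break
        else:
            assignment.append(None)   # no wavelength free end-to-end
    return assignment
```

For example, with two wavelengths and routes [["a","b"], ["b","c"], ["a","b"]], the third request is blocked because both wavelengths are taken on link "b".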
Abstract:
Optical networks based on passive-star couplers and employing wavelength-division multiplexing (WDM) have been proposed for deployment in local and metropolitan areas. Amplifiers are required in such networks to compensate for the power losses due to splitting and attenuation. However, an optical amplifier has constraints on the maximum gain and the maximum output power it can supply; thus, optical amplifier placement becomes a challenging problem. The general problem of minimizing the total amplifier count, subject to the device constraints, is a mixed-integer nonlinear problem. Previous studies have attacked the amplifier-placement problem by adding the “artificial” constraint that all wavelengths present at a particular point in a fiber be at the same power level. In this paper, we present a method to solve the minimum-amplifier-placement problem while avoiding the equally-powered-wavelength constraint. We demonstrate that, by allowing signals to operate at different power levels, our method can reduce the number of amplifiers required in several small to medium-sized networks.
Abstract:
In this paper, we propose three novel mathematical models for the two-stage lot-sizing and scheduling problems present in many process industries. The problem combines a continuous or quasi-continuous production feature upstream with a discrete manufacturing feature downstream, which must be synchronized. Different time-scale representations are discussed. The first formulation uses a discrete-time representation; the second is a hybrid continuous-discrete model; the last is based on a continuous-time representation. Computational tests with a state-of-the-art MIP solver show that the discrete-time representation provides better feasible solutions in short running times. On the other hand, the hybrid model achieves better solutions for longer computational times and was able to prove optimality more often. The continuous-time model is the most flexible of the three for incorporating additional operational requirements, at the cost of the worst computational performance. Journal of the Operational Research Society (2012) 63, 1613-1630. doi:10.1057/jors.2011.159, published online 7 March 2012.
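To illustrate what a discrete-time representation buys, note that the single-level, uncapacitated special case of lot sizing admits the classical Wagner-Whitin dynamic program. The sketch below is that textbook recursion (not the paper's two-stage model), with hypothetical demand and cost data.

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Uncapacitated single-item lot sizing over discrete periods.
    best[t] = minimum cost of covering demand for periods 1..t."""
    n = len(demand)
    best = [0.0] + [float("inf")] * n
    for t in range(1, n + 1):
        for j in range(1, t + 1):      # produce in period j for j..t
            carry = sum((i - j) * demand[i - 1] * holding_cost
                        for i in range(j, t + 1))
            cost = best[j - 1] + setup_cost + carry
            if cost < best[t]:
                best[t] = cost
    return best[n]
```

With demands (50, 60, 90), a setup cost of 100, and unit holding cost 1, producing for periods 1-2 together and period 3 alone is optimal at a cost of 260.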
Abstract:
In the day-to-day operation of a less-than-truckload freight terminal, the dispatcher first decides at which doors the vehicles should dock for loading and unloading. In addition, a time window must be assigned to each tour during which it occupies its door. The spatial and temporal vehicle-to-door assignment determines the resources required for the internal transshipment process, measured in travel distances or forklift hours. One goal of the planning task is therefore to assign vehicles to doors so that internal travel distances are minimized, which in turn minimizes the number of transshipment resources required. Beyond that, it can also be useful to dock vehicles at the doors as early as possible. Each tour has an individual timetable specifying its arrival and departure times at the terminal; only within this time window may the dispatcher assign the tour to one of the doors. If the assignment does not happen immediately upon arrival, the vehicle must wait in a parking area. Minimizing waiting times is desirable so that the terminal site is not congested by too many vehicles at once; it can also be sensible, especially with a view to reserving doors for time-critical tours, to process vehicles as early as possible. At the Chair of Transport Systems and Logistics (VSL) at the University of Dortmund, this decision situation was modeled, within a research project funded by the Stiftung Industrieforschung, as a time-discrete multi-commodity flow problem with unsplittable-flow conditions. The two objectives were integrated into a single one-dimensional objective function.
The resulting mixed-integer linear program (MILP) was implemented and, for medium-sized scenarios, solved with the exact branch-and-cut method implemented in the CPLEX optimization solver. In parallel, within a cooperation between the VSL chair and hafa Docking Systems, one of the world's leading door and ramp manufacturers, a heuristic scheduling procedure and a dispatching control station called LoadDock Navigation were developed for the same planning task. The control station supports the optimal control of door assignments in logistics facilities; it combines planning intelligence in the form of the heuristic scheduling procedure, technical innovations in ramp technology in the form of sensors, and the dispatcher's expert knowledge in a single tool. The mathematical model and the prototype with the integrated heuristic are presented in this article.
Abstract:
This paper deals with an event-bus tour booked by Bollywood film fans. During the tour, the participants visit selected locations of famous Bollywood films at various sites in Switzerland. Moreover, the tour includes stops for lunch and shopping. Each day, up to five buses operate the tour; for organizational reasons, two or more buses cannot stay at the same location simultaneously. The planning problem is how to compute a feasible schedule for each bus such that the total waiting time (primary objective) and the total travel time (secondary objective) are minimized. We formulate this problem as a mixed-integer linear program, and we report on computational results obtained with the Gurobi solver.
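The no-two-buses-at-the-same-location constraint can be illustrated on a toy version of the problem: if every bus follows the same route, feasibility reduces to staggering departures. The brute-force sketch below (an illustration, not the paper's MILP formulation) finds the smallest departure offset for a second bus; the dwell times are made up.

```python
def overlaps(a_start, a_end, b_start, b_end):
    """Half-open interval overlap test."""
    return a_start < b_end and b_start < a_end

def min_feasible_offset(dwell):
    """Smallest delay for a second bus on the same route so that the two
    buses never occupy a location at the same time.  dwell[j] is the time
    spent at stop j; arrival times are cumulative."""
    arr = [0]
    for d in dwell[:-1]:
        arr.append(arr[-1] + d)
    delta = 0
    while True:
        conflict = any(overlaps(a, a + d, a + delta, a + d + delta)
                       for a, d in zip(arr, dwell))
        if not conflict:
            return delta
        delta += 1
```

With dwell times (20, 45, 30) the minimum offset is 45, the longest stop, which is also the second bus's waiting time in this simplified setting.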
Abstract:
The Distribution System Expansion Planning (PESD) problem aims to determine guidelines for expanding the network in view of growing consumer demand. In this context, electricity distribution companies must propose actions in the distribution system to bring the energy supply into line with the standards required by regulators. Traditionally, only the minimization of the global investment cost of expansion plans is considered, neglecting the reliability and robustness of the system. As a consequence, the resulting expansion plans lead the distribution system to configurations that are vulnerable to heavy load shedding when contingencies occur in the network. This work develops a methodology that adds reliability and risk considerations to the traditional PESD problem, in order to choose expansion plans that maximize network robustness and thus mitigate the damage caused by contingencies. A multi-objective model of the PESD problem was formulated that minimizes two objectives: the global cost (comprising investment, maintenance, operation, and energy production costs) and the implementation risk of expansion plans. For both objectives, mixed-integer linear models are formulated and solved with the CPLEX solver through the GAMS software. To manage the search for optimal solutions, two evolutionary algorithms were implemented in C++: the Non-dominated Sorting Genetic Algorithm-2 (NSGA2) and the Strength Pareto Evolutionary Algorithm-2 (SPEA2). These algorithms proved effective in this search, as verified through expansion-planning simulations on two test systems adapted from the literature.
The set of solutions found in the simulations contains expansion plans with different levels of global cost and implementation risk, highlighting the diversity of the proposed solutions. Some of these topologies are illustrated to show their differences.
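Both evolutionary algorithms mentioned above rely on Pareto dominance between the two objectives. A minimal dominance filter, applied here to hypothetical (global cost, risk) pairs, looks like this:

```python
def pareto_front(solutions):
    """Keep the non-dominated (cost, risk) pairs.  A solution is dominated
    when another is no worse in both objectives and strictly better in
    at least one (both objectives minimized)."""
    front = []
    for s in solutions:
        dominated = any(
            o[0] <= s[0] and o[1] <= s[1] and (o[0] < s[0] or o[1] < s[1])
            for o in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

For instance, among the pairs (100, 5), (120, 3), (110, 4), (130, 6), only the last is dominated; the other three form the trade-off front a decision maker would inspect.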
Abstract:
The economic design of a distillation column or distillation sequence is a challenging problem that has been addressed by superstructure approaches. However, these methods have not been widely used because they lead to mixed-integer nonlinear programs that are hard to solve and require complex initialization procedures. In this article, we propose to address this challenging problem by substituting the distillation columns with Kriging-based surrogate models generated via state-of-the-art distillation models. We study different columns of increasing difficulty and show that it is possible to obtain accurate Kriging-based surrogate models. The optimization strategy guarantees convergence to a local optimum for numerically noise-free models. For distillation columns (slightly noisy systems), Karush–Kuhn–Tucker optimality conditions cannot be tested directly on the actual model, but we can still guarantee a local minimum in a trust region of the surrogate model that contains the actual local minimum.
Abstract:
This paper investigates a cross-layer design approach for minimizing energy consumption and maximizing network lifetime (NL) of a multiple-source and single-sink (MSSS) WSN with energy constraints. The optimization problem for MSSS WSN can be formulated as a mixed integer convex optimization problem with the adoption of time division multiple access (TDMA) in medium access control (MAC) layer, and it becomes a convex problem by relaxing the integer constraint on time slots. Impacts of data rate, link access and routing are jointly taken into account in the optimization problem formulation. Both linear and planar network topologies are considered for NL maximization (NLM). With linear MSSS and planar single-source and single-sink (SSSS) topologies, we successfully use Karush-Kuhn-Tucker (KKT) optimality conditions to derive analytical expressions of the optimal NL when all nodes are exhausted simultaneously. The problem for planar MSSS topology is more complicated, and a decomposition and combination (D&C) approach is proposed to compute suboptimal solutions. An analytical expression of the suboptimal NL is derived for a small scale planar network. To deal with larger scale planar network, an iterative algorithm is proposed for the D&C approach. Numerical results show that the upper-bounds of the network lifetime obtained by our proposed optimization models are tight. Important insights into the NL and benefits of cross-layer design for WSN NLM are obtained.
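The flavor of the closed-form lifetime result for the linear topology can be seen in a drastically simplified model: if each node must relay all traffic generated farther from the sink and only transmission energy counts, the network lifetime is set by the most loaded node. Everything below (energy figures, a single energy-per-bit cost, no reception or TDMA overhead) is a hypothetical simplification, not the paper's model.

```python
def linear_network_lifetime(energies, rates, e_per_bit):
    """Nodes form a chain toward the sink; node i forwards its own data
    plus everything generated by nodes i+1..n.  Lifetime is the time
    until the first node exhausts its battery."""
    lifetimes = []
    for i in range(len(energies)):
        load = sum(rates[i:])                 # bits/s through node i
        lifetimes.append(energies[i] / (e_per_bit * load))
    return min(lifetimes)
```

Two equal-energy nodes each generating one unit of traffic give a lifetime of 50 (the node next to the sink carries both flows); shrinking the far node's battery to 30 moves the bottleneck outward.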
Abstract:
The deployment of bioenergy technologies is a key part of UK and European renewable energy policy. A key barrier to their deployment is the management of biomass supply chains, including the evaluation of suppliers and the contracting of biomass. In the undeveloped biomass-for-energy market, buyers of biomass face three major challenges during the development of new bioenergy projects: what characteristics a given supply of biomass will have; how to evaluate biomass suppliers; and which suppliers to contract with so as to build a portfolio that best satisfies the needs of the project and its stakeholder group while also meeting crisp and non-crisp technological constraints. The problem description is taken from the situation faced by the industrial partner in this research, Express Energy Ltd. This research tackles these three areas separately and then combines them into a decision framework, BioSS, to assist biomass buyers with the strategic sourcing of biomass. The BioSS framework consists of three modes that mirror the development stages of bioenergy projects: BioSS.2 for early-stage development, BioSS.3 for the financial-close stage, and BioSS.Op for the operational phase of the project. BioSS comprises a fuels library, a supplier evaluation module, and an order allocation module; a Monte-Carlo analysis module is also included to evaluate the accuracy of the recommended portfolios. In each mode, BioSS can recommend which suppliers should be contracted with and how much material should be purchased from each. The recommended blend should have chemical characteristics within the technological constraints of the conversion technology and also best satisfy the stakeholder group. The fuels library is compiled from a wide variety of sources and contains around 100 unique descriptions of potential biomass sources that a developer may encounter.
The library takes a broad data-collection approach, with the aim of allowing biomass characteristics to be estimated without expensive and time-consuming testing. The supplier evaluation part of BioSS uses a QFD-AHP method to assign importance weightings to 27 evaluating criteria. The criteria have been compiled from interviews with stakeholders and from policy and position documents, and the weightings have been assigned using a mixture of workshops and expert interviews. The weighted importance scores allow potential suppliers to better tailor their business offering and provide a robust framework for decision makers to understand the requirements of the bioenergy project stakeholder groups. The order allocation part of BioSS uses a chance-constrained programming approach to allocate orders of material among potential suppliers based on their chemical characteristics and preference scores. The optimisation program finds the portfolio of orders that best satisfies the stakeholder group while complying with technological constraints. A technological constraint can be relaxed, if the decision maker requires, by setting it as a chance constraint. This allows a wider range of biomass sources to be procured and a greater overall performance to be realised than with crisp constraints or deterministic programming approaches. BioSS is demonstrated against two scenarios faced by UK bioenergy developers: a large-scale combustion power project and a small-scale gasification project. BioSS is applied in each mode for both scenarios and is shown to adapt the solution to the stakeholder-group importance and the constraints of the different conversion technologies while finding a globally optimal portfolio for stakeholder satisfaction.
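The chance-constraint mechanism described above has a standard deterministic equivalent when a blended fuel characteristic is modeled as Gaussian: P(Σᵢ cᵢxᵢ ≤ b) ≥ 1−ε becomes Σᵢ μᵢxᵢ + z₁₋ε·√(Σᵢ σᵢ²xᵢ²) ≤ b for independent cᵢ ~ N(μᵢ, σᵢ²). The check below is that textbook conversion, not BioSS itself, and the blend data are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def chance_constraint_ok(x, mu, sigma, limit, eps=0.05):
    """Deterministic equivalent of P(sum_i c_i * x_i <= limit) >= 1 - eps
    for independent Gaussian characteristics c_i ~ N(mu_i, sigma_i^2)."""
    z = NormalDist().inv_cdf(1 - eps)          # standard normal quantile
    mean = sum(m * xi for m, xi in zip(mu, x))
    std = sqrt(sum((s * xi) ** 2 for s, xi in zip(sigma, x)))
    return mean + z * std <= limit
```

A blend that sits exactly at the limit passes when the characteristics are certain but fails once variance is introduced, which is precisely the cushion the chance constraint buys back for the decision maker.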
Abstract:
As microblog services such as Twitter become a fast and convenient communication channel, the identification of trendy topics in microblog services has great academic and business value. However, detecting trendy topics is very challenging due to the huge number of users and short-text posts in microblog diffusion networks. In this paper, we introduce a trendy-topic detection system that operates under computation and communication resource constraints. In stark contrast to retrieving and processing the whole microblog content, we develop the idea of selecting a small set of microblog users and processing only their posts, achieving acceptable overall trendy-topic coverage without exceeding the resource budget for detection. We formulate the selection of this subset of users as mixed-integer optimization problems and develop heuristic algorithms to compute approximate solutions. The proposed system is evaluated with real-time test data retrieved from Sina Weibo, the dominant microblog service provider in China. We show that by monitoring 500 out of 1.6 million microblog users and tracking their microposts (about 15,000 daily), our system can detect nearly 65% of trendy topics, on average 5 hours before they appear in Sina Weibo's official trends.
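The subset-selection idea can be mimicked with the classical greedy heuristic for maximum coverage, which repeatedly picks the user who adds the most not-yet-covered topics. This is a generic stand-in for the paper's heuristics; the user names and topic IDs below are made up.

```python
def greedy_user_selection(user_topics, budget):
    """Pick up to `budget` users whose posts together cover the most
    topics.  user_topics maps a user to the set of topics they surface."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(user_topics, key=lambda u: len(user_topics[u] - covered))
        if not user_topics[best] - covered:
            break                     # no user adds anything new
        chosen.append(best)
        covered |= user_topics[best]
    return chosen, covered
```

With users a → {1, 2, 3}, b → {3, 4}, c → {4, 5} and a budget of 2, the greedy picks a then c and covers all five topics; the (1 − 1/e) approximation guarantee of this greedy is the standard justification for heuristics of this kind.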
Abstract:
We propose a cost-effective hot-event detection system over the Sina Weibo platform, currently the dominant microblogging service provider in China. The problem of finding a proper subset of microbloggers under resource constraints is formulated as a mixed-integer problem, for which heuristic algorithms are developed to compute approximate solutions. Preliminary results show that by tracking about 500 out of 1.6 million candidate microbloggers and processing 15,000 microposts daily, 62% of hot events can be detected, on average five hours earlier than they are published by Weibo.
Abstract:
MSC 2010: 49K05, 26A33
Abstract:
This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of all the PCBs in the batch does not violate the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and constitute a bottleneck; consequently, the makespan has to be minimized.
A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, linear constraints, and run time. A procedure to compute a lower bound is proposed. For sparse problems (i.e., when job ready times are widely dispersed), the lower bounds are close to the optimum.
The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, especially to solve problem instances that require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches based on solution quality and run time. The decomposition approach improved the lower bounds (linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature. For sparse problems, almost all the heuristics report a solution close to the optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as the run time is very short.
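A minimal version of one such list-scheduling heuristic: form batches in ready-time order under the chamber capacity, then dispatch each batch to the earliest-free chamber, with the batch processing time equal to its longest job and its ready time equal to its latest arrival. This is a generic sketch with made-up job data, not one of the paper's five heuristics.

```python
def batch_makespan(jobs, capacity, machines):
    """jobs: list of (size, proc_time, ready_time) tuples.
    Batch in ready-time order, then assign batches to the machine
    that can start them earliest; return the resulting makespan."""
    jobs = sorted(jobs, key=lambda j: j[2])       # by ready time
    batches, cur, used = [], [], 0
    for s, p, r in jobs:
        if used + s > capacity:                   # open a new batch
            batches.append(cur)
            cur, used = [], 0
        cur.append((s, p, r))
        used += s
    if cur:
        batches.append(cur)
    free = [0.0] * machines
    makespan = 0.0
    for b in batches:
        ready = max(r for _, _, r in b)           # last PCB to arrive
        proc = max(p for _, p, _ in b)            # longest test time
        i = min(range(machines), key=lambda k: free[k])
        free[i] = max(free[i], ready) + proc
        makespan = max(makespan, free[i])
    return makespan
```

With one chamber of capacity 10 and jobs (size, time, ready) = (5, 4, 0), (5, 6, 0), (5, 3, 2), the first two jobs form a batch finishing at 6 and the third runs alone, giving a makespan of 9.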
Abstract:
This research studies a hybrid flow shop problem with parallel batch-processing machines in one stage and discrete-processing machines in the other stages, processing jobs of arbitrary sizes. The objective is to minimize the makespan for a set of jobs. The problem is denoted FF|batch1, sj|Cmax. It is formulated as a mixed-integer linear program. The commercial solver AMPL/CPLEX is used to solve problem instances to optimality. Experimental results show that AMPL/CPLEX requires considerable time to find the optimal solution for even a small problem; a 6-job instance requires 2 hours on average. A bottleneck-first-decomposition (BFD) heuristic is proposed in this study to overcome the computational-time problem encountered with the commercial solver. The proposed BFD heuristic is inspired by the shifting-bottleneck heuristic. It decomposes the entire problem into three sub-problems and schedules them one by one. The heuristic consists of four major steps: formulating sub-problems, prioritizing sub-problems, solving sub-problems, and re-scheduling. For solving the sub-problems, two heuristic algorithms are proposed: one for scheduling a hybrid flow shop with discrete-processing machines, and the other for scheduling parallel batching machines (single stage). Both consider job arrival and delivery times. A designed experiment is conducted to evaluate the effectiveness of the proposed BFD, which is further compared against a set of common heuristics including a randomized greedy heuristic and five dispatching rules. The results show that the proposed BFD heuristic outperforms all these algorithms. To evaluate the quality of the heuristic solution, a procedure is developed to calculate a lower bound on the makespan for the problem under study. The lower bound obtained is tighter than other bounds developed for related problems in the literature.
A meta-search approach based on the genetic algorithm concept is developed to evaluate the significance of further improving the solution obtained from the proposed BFD heuristic. The experiment indicates that it reduces the makespan by 1.93% on average, within negligible time, when the problem size is less than 50 jobs.
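Dispatching rules of the kind used as baselines here are typically one-liners. As an example, the Longest-Processing-Time-first rule for identical parallel machines is sketched below; it ignores batching, job sizes, and ready times, so it is purely illustrative, not one of the paper's five rules.

```python
def lpt_makespan(proc_times, machines):
    """LPT rule: assign jobs in decreasing processing time to the
    currently least-loaded machine; return the resulting makespan."""
    loads = [0] * machines
    for p in sorted(proc_times, reverse=True):
        i = min(range(machines), key=lambda k: loads[k])
        loads[i] += p
    return max(loads)
```

On processing times (7, 5, 4, 3, 3) with two machines, LPT yields a makespan of 12 versus the optimum of 11 — the classical illustration of why such rules serve as baselines rather than final answers.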