951 results for Mixed-integer linear programming
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of large-scale optimization problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic that has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life, and the need to solve difficult problems is ever more urgent. Metaheuristic techniques have been developed over the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach has its own peculiarities in designing and guiding the solution process; our work aims at recognizing components that can be extracted from metaheuristic methods and re-used in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming deals with the feasibility issues of optimization problems very effectively. Moreover, CP offers a general paradigm in which any type of problem can be easily modeled and solved with a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem-specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment; CP-specific features are used to ease the search process while maintaining the full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search and incorporates concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism; in particular, we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on problems of practical size, demonstrating the benefit of integrating metaheuristic concepts in CP-based frameworks.
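For readers unfamiliar with Local Branching, the sketch below shows its defining neighborhood cut in the original MIP setting of Fischetti and Lodi (the thesis ports this idea to CP search). It is a minimal illustration using PuLP; the model and variable names are assumptions, not taken from the thesis.

```python
# Minimal sketch of the local branching cut, assuming a PuLP model with
# binary variables x (a dict keyed by index) and an incumbent 0/1 solution.
import pulp

def add_local_branching_cut(prob, x, incumbent, k):
    """Restrict the search to solutions within Hamming distance k of the
    incumbent: sum_{j: xbar_j=1}(1 - x_j) + sum_{j: xbar_j=0} x_j <= k."""
    ones = [j for j in x if incumbent[j] > 0.5]
    zeros = [j for j in x if incumbent[j] <= 0.5]
    prob += (pulp.lpSum(1 - x[j] for j in ones)
             + pulp.lpSum(x[j] for j in zeros)) <= k, "local_branching"
    return prob
```

Re-solving the restricted model explores the k-neighborhood of the incumbent, while the reversed cut (distance >= k + 1) diversifies the search; slicing and interleaving neighborhoods of this kind is the sort of mechanism the thesis embeds in CP-based search.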
Abstract:
This work studies the modelling and optimization of industrial separation processes that use mixtures of ionic liquids as solvents. The solvents commonly employed in absorption or extraction processes are usually highly volatile organic compounds that are harmful to human health, and the innovative properties of ionic liquids make them suitable alternatives for solving these problems. The vapour pressure of these compounds is very low and barely varies with temperature, so they scarcely evaporate even at high temperatures. This is a great advantage for their use as industrial solvents, since it allows the solvent to be continuously recycled at the end of the process without adding fresh solvent to make up for evaporation losses. Moreover, because they do not evaporate, these compounds pose no inhalation hazard, unlike solvents such as benzene. Their only health risk is therefore direct contact or ingestion, and in fact many ionic liquids are innocuous, so they pose no health risk even through these routes. The separation processes studied in this work are governed by phase thermodynamics, specifically vapour-liquid equilibrium. COSMO (COnductor-like Screening MOdel) models, which originate from solvation thermodynamics and quantum mechanics, were chosen for equilibrium prediction. In process and product development, chemists and engineers frequently need to predict phase equilibria. Before COSMO models were developed, group-contribution methods such as UNIFAC or activity-coefficient models such as NRTL were used; their disadvantage is that they require binary interaction parameters that can only be obtained by regression fits to experimental data. They are therefore of little use for compounds with novel functional groups, for which no experimental data are available to perform the corresponding fits. An alternative is to use quantum-chemistry-based solvation models to characterize molecular interactions and account for the non-ideality of the liquid phase. COSMO models predict equilibria without regression fits to experimental data. Given the scarcity of experimental vapour-liquid equilibrium data for mixtures involving ionic liquids, COSMO models are a good alternative for predicting the equilibria of mixtures containing these materials. COSMO models use the polarized surface charge distributions (sigma profiles) of the compounds in the mixture to predict its activity coefficients, the sigma profile of a molecule being defined as the probability distribution of its surface charge density. Two such models are COSMO-RS (Realistic Solvation) and COSMO-SAC (Segment Activity Coefficient).
The COSMO-RS model was the first extension of dielectric-continuum solvation models to liquid-phase thermodynamics, while COSMO-SAC is a variation of it, as explained later. In this work, the COSMO-SAC model was used to compute the activity coefficients of the studied mixtures. The sigma profiles of the ionic liquids were obtained with the computational-chemistry software Turbomole and the quantum-chemistry package COSMOtherm: Turbomole optimizes the molecular geometry to find the most stable configuration, and COSMOtherm derives the compound's sigma profile from the Turbomole output. The sigma profiles of the remaining components were taken from the Virginia Tech-2005 Sigma Profile Database. Equilibrium was predicted from the activity coefficients using the modified Raoult's law: the fraction of each component in the vapour is assumed proportional to its fraction in the liquid, where the proportionality constant is the component's activity coefficient in the mixture multiplied by its vapour pressure and divided by the system pressure. The vapour pressures were obtained from the Antoine equation, which describes the relation between temperature and vapour pressure and is derived from the Clausius-Clapeyron equation. All these data were used to model a flash separation with the Rachford-Rice algorithm, whose value lies in the derivation of a function relating the equilibrium constants, the overall composition and the vapour fraction (a schematic version of this flash calculation is sketched below). The mathematical model was implemented in code written in the numerical-analysis software MATLAB. To verify the reliability of the code, its equilibrium predictions were compared with those obtained with the chemical-process simulator ASPEN PLUS. Because the ASPEN PLUS databases lack data on ionic liquids, these components were introduced as pseudocomponents, entering only the data needed to run the simulations. The COSMO-SAC model is implemented in ASPEN PLUS, so by supplying the sigma profiles, cavity volumes and vapour pressures of the ionic liquids it is possible to predict vapour-liquid equilibria involving these materials. The results obtained with ASPEN PLUS can thus be compared with those of the MATLAB code to verify its reliability.
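The following is a minimal sketch of the flash calculation just described, assuming the activity coefficients are supplied externally (in the thesis they come from COSMO-SAC) and that the mixture is in the two-phase region; it is written in Python rather than the thesis's MATLAB.

```python
# Schematic flash: modified Raoult's law K-values plus the
# Rachford-Rice equation solved for the vapour fraction beta.
import numpy as np
from scipy.optimize import brentq

def antoine_psat(A, B, C, T):
    """Antoine equation: log10(Psat) = A - B / (T + C)."""
    return 10.0 ** (A - B / (T + C))

def flash_rachford_rice(z, gamma, psat, P):
    """z: overall mole fractions; gamma: activity coefficients
    (e.g., from COSMO-SAC); psat: vapour pressures; P: system pressure."""
    K = gamma * psat / P                      # modified Raoult's law
    def rr(beta):                             # Rachford-Rice function
        return np.sum(z * (K - 1.0) / (1.0 + beta * (K - 1.0)))
    # assumes a two-phase state, so rr changes sign on (0, 1)
    beta = brentq(rr, 1e-10, 1.0 - 1e-10)
    x = z / (1.0 + beta * (K - 1.0))          # liquid composition
    y = K * x                                 # vapour composition
    return beta, x, y
```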
The main objective of this Master's Thesis is the optimization of multicomponent ionic-liquid mixtures to maximize the efficiency of separation processes and minimize their costs. This problem has the structure of a non-linear optimization problem with discrete and continuous variables, i.e., an MINLP (Mixed Integer Non-Linear Programming) problem: as shown later, its mathematical model is non-linear, and its variables are both continuous and binary. The continuous variables correspond to the mole fractions of the ionic liquids in the mixtures and to the flow rate of the ionic-liquid mixture. In addition, one binary variable was introduced per ionic liquid in the mixture; each multiplies the mole fraction of its corresponding ionic liquid, so that the liquid is present in the mixture when the variable equals 1 and absent when it equals 0 (this structure is sketched after this paragraph). The use of such variables requires algorithms for MINLP problems; if all variables were continuous, algorithms for NLP (Non-Linear Programming) problems would suffice. Several algorithms from the OPTI Toolbox package for MATLAB were therefore tested to determine which is best suited to this problem. Finally, once the code was validated, various ionic-liquid mixtures were optimized to achieve the maximum recovery of aromatic compounds in an absorption process for organic mixtures. The code was also used to minimize the purchase cost of the ionic liquids in the solvent mixture used in the absorption operation; in this case it was necessary to introduce constraints on the recovery of aromatics in the liquid phase or on the purity of the mixture obtained after separating the ionic-liquid mixture. Both problems (maximizing benzene recovery and minimizing operating cost) were modelled using only continuous variables (the mole fractions or molar amounts of the ionic liquids) as well as using both continuous and binary variables (one per ionic liquid involved in the mixtures).
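The sketch below illustrates the binary on/off structure described above, in the commonly used linearized form (x_i <= y_i instead of the bilinear product of binary and mole fraction). It uses Pyomo with hypothetical ionic-liquid labels and prices; the thesis itself works in MATLAB with the OPTI Toolbox.

```python
# Schematic MINLP-style selection of ionic liquids for a solvent blend.
import pyomo.environ as pyo

ILS = ["IL1", "IL2", "IL3"]                        # hypothetical labels
cost = {"IL1": 120.0, "IL2": 95.0, "IL3": 210.0}   # assumed prices

m = pyo.ConcreteModel()
m.x = pyo.Var(ILS, bounds=(0.0, 1.0))              # mole fractions
m.y = pyo.Var(ILS, within=pyo.Binary)              # presence indicators

m.mix = pyo.Constraint(expr=sum(m.x[i] for i in ILS) == 1.0)
# x[i] may be nonzero only if ionic liquid i is selected (y[i] = 1)
m.link = pyo.Constraint(ILS, rule=lambda m, i: m.x[i] <= m.y[i])

# Placeholder objective: minimize solvent purchase cost; the thesis also
# maximizes aromatic recovery subject to recovery/purity constraints.
m.obj = pyo.Objective(expr=sum(cost[i] * m.x[i] for i in ILS))
# An MINLP solver would be invoked here, e.g. via pyo.SolverFactory(...)
```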
Abstract:
In this paper, we propose a duality theory for semi-infinite linear programming problems under uncertainty in the constraint functions, the objective function, or both, within the framework of robust optimization. We present robust duality by establishing strong duality between the robust counterpart of an uncertain semi-infinite linear program and the optimistic counterpart of its uncertain Lagrangian dual. We show that robust duality holds whenever a robust moment cone is closed and convex. We then establish that the closed-convex robust moment cone condition in the case of constraint-wise uncertainty is in fact necessary and sufficient for robust duality. In other words, the robust moment cone is closed and convex if and only if robust duality holds for every linear objective function of the program. In the case of uncertain problems with affinely parameterized data uncertainty, we establish that robust duality is easily satisfied under a Slater type constraint qualification. Consequently, we derive robust forms of the Farkas lemma for systems of uncertain semi-infinite linear inequalities.
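Schematically, and with notation assumed here rather than quoted from the paper, the primal-dual pair in question is

\begin{align*}
\text{(RC)}\quad & \inf_{x \in \mathbb{R}^n}\; c^{\top}x
  \quad \text{s.t.}\quad a_t^{\top}x \ge b_t \;\; \forall (a_t, b_t) \in \mathcal{U}_t,\; \forall t \in T,\\
\text{(OD)}\quad & \sup_{\lambda \in \mathbb{R}^{(T)}_{+},\; (a_t, b_t) \in \mathcal{U}_t}\; \sum_{t \in T} \lambda_t b_t
  \quad \text{s.t.}\quad \sum_{t \in T} \lambda_t a_t = c,
\end{align*}

where $\mathcal{U}_t$ is the uncertainty set of constraint $t$ and $\mathbb{R}^{(T)}_{+}$ denotes nonnegative multipliers with finitely many nonzero entries. Robust duality means $\inf(\mathrm{RC}) = \sup(\mathrm{OD})$, which the paper characterizes through closedness and convexity of the robust moment cone.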
Abstract:
Multiobjective Generalized Disjunctive Programming (MO-GDP) optimization has been used for the synthesis of an important industrial process, isobutane alkylation. The two objective functions optimized simultaneously are the environmental impact, determined by means of LCA (Life Cycle Assessment), and the economic potential of the process. The main reason for including the minimization of environmental impact in the optimization is the widespread environmental concern of the general public. To solve the problem we employed a hybrid simulation-optimization methodology, i.e., the superstructure of the process was developed directly in a chemical process simulator connected to a state-of-the-art optimizer. The model was formulated as a GDP and solved using a logic-based algorithm that avoids reformulation as an MINLP (Mixed Integer Non-Linear Programming) problem. Our research yielded Pareto curves composed of three different configurations, with the LCA assessed by two different parameters: global warming potential and Eco-indicator 99.
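For context, a generalized disjunctive program has the generic form below (the standard textbook form, not the paper's specific alkylation superstructure):

\[
\min_{x,\, c,\, Y}\; f(x) + \sum_{k} c_k
\quad \text{s.t.}\quad g(x) \le 0,\qquad
\bigvee_{i \in D_k} \begin{bmatrix} Y_{ik} \\ h_{ik}(x) \le 0 \\ c_k = \gamma_{ik} \end{bmatrix} \;\; \forall k,\qquad
\Omega(Y) = \text{True},
\]

where the Boolean variables $Y_{ik}$ select one term per disjunction (e.g., one unit configuration in the superstructure) and $\Omega$ encodes the logic relations among them; logic-based algorithms branch on the disjunctions directly, which is how the MINLP reformulation is avoided.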
Abstract:
Cooperative communication has gained much interest due to its ability to exploit the broadcast nature of the wireless medium to mitigate multipath fading. There has been a considerable amount of research on how cooperative transmission can improve network performance by focusing on physical layer issues. Over the past few years, researchers have begun to consider cooperative transmission in routing, and there has been growing interest in designing and evaluating cooperative routing protocols. Most existing cooperative routing algorithms are designed to reduce energy consumption; however, packet collision minimization using cooperative routing has not been addressed yet. This dissertation presents an optimization framework to minimize collision probability using cooperative routing in wireless sensor networks. More specifically, we develop a mathematical model and formulate the problem as a large-scale Mixed Integer Non-Linear Programming problem. We also propose a solution based on the branch and bound algorithm augmented with search space reduction. The proposed strategy builds the optimal routes from each source to the sink node by providing the best set of hops in each route, the best set of relays, and the optimal power allocation for the cooperative transmission links. To reduce the computational complexity, we propose two near-optimal cooperative routing algorithms. In the first, we decouple the optimal power allocation scheme from the optimal route selection, so the problem is formulated as an Integer Non-Linear Programming problem and solved with the branch and bound space-reduction method. In the second, the cooperative routing problem is solved by decoupling the transmission power and the relay node selection from the route selection; after solving the routing problem, power allocation is applied to the selected route. Simulation results show that the algorithms can significantly reduce the collision probability compared with existing cooperative routing schemes.
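As a point of reference, the sketch below is a generic best-first branch-and-bound skeleton of the kind the dissertation augments with search-space reduction; the callbacks (`bound`, `branch`, `is_complete`, `value`) are hypothetical problem-specific routines, not the authors' code.

```python
# Generic best-first branch and bound for a minimization problem.
import heapq

def branch_and_bound(root, bound, branch, is_complete, value):
    best, best_val = None, float("inf")
    heap, tie = [(bound(root), 0, root)], 1
    while heap:
        lb, _, node = heapq.heappop(heap)
        if lb >= best_val:
            continue                     # prune: bound cannot beat incumbent
        if is_complete(node):
            if value(node) < best_val:
                best, best_val = node, value(node)
            continue
        for child in branch(node):       # space reduction would discard
            clb = bound(child)           # provably dominated children here
            if clb < best_val:
                heapq.heappush(heap, (clb, tie, child))
                tie += 1                 # tie-breaker keeps heap comparable
    return best, best_val
```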
Abstract:
I explore and analyze the problem of finding the socially optimal capital requirements for financial institutions, considering two distinct channels of contagion: direct exposures among the institutions, as represented by a network, and fire sales externalities, which reflect the negative price impact of massive liquidation of assets. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults among the interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker considers this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns.
In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show the optimal capital requirements can be found by solving a stochastic mixed integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be efficiently executed. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing it with lower bounds based on a dual approach. I also show that the iterative algorithm converges to the optimal solution.
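To make the scenario-based structure concrete, here is a minimal stand-in for a stochastic LP of this general kind; the shock data and cost weights are invented for illustration, and this is not the paper's model.

```python
# Toy scenario-form stochastic LP: choose capital requirements c_i to
# trade off capital cost against expected scenario losses.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, S = 4, 50                                  # banks, scenarios (toy sizes)
shock = rng.exponential(1.0, size=(S, n))     # hypothetical scenario shocks
kappa, lam = 1.0, 4.0                         # capital cost vs. loss weight

# Variables z = [c_1..c_n, L_11..L_Sn]; minimize kappa*sum(c) + (lam/S)*sum(L)
cost = np.concatenate([np.full(n, kappa), np.full(S * n, lam / S)])

# Shortfall constraints L_si >= shock_si - c_i, i.e. -c_i - L_si <= -shock_si
A = np.zeros((S * n, n + S * n))
b = np.zeros(S * n)
for s in range(S):
    for i in range(n):
        r = s * n + i
        A[r, i] = -1.0
        A[r, n + r] = -1.0
        b[r] = -shock[s, i]

res = linprog(cost, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
c_opt = res.x[:n]                             # capital requirement per bank
```

In the extension with default costs, binary default indicators turn such an LP into a stochastic mixed integer program, which is what motivates the iterative algorithm and the dual lower bounds.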
Finally, I incorporate fire sales externalities into the model. In particular, I extend the analysis of systemic risk and optimal capital requirements from a single illiquid asset to a model with multiple illiquid assets, which incorporates the liquidation rules used by the banks. I provide an optimization formulation whose solution gives the equilibrium payments for a given liquidation rule.
I further show that the socially optimal capital problem can be formulated as a convex problem under the "socially optimal liquidation" rule and as a convex mixed integer problem under the prioritized liquidation rule. Finally, I illustrate the results of the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.
Abstract:
In this paper, a joint location-inventory model is proposed that simultaneously optimises strategic supply chain design decisions, such as facility location and the allocation of customers to facilities, and tactical-operational inventory management and production scheduling decisions. All this is analysed in a context of demand and supply uncertainty. While demand uncertainty stems from potential fluctuations in customer demands over time, supply-side uncertainty is associated with the risk of "disruption" to which facilities may be subject, caused by external factors such as natural disasters, strikes, changes of ownership and information technology security incidents. The proposed model is formulated as a non-linear mixed integer programming problem to minimise the expected total cost, which includes four basic cost items: the fixed cost of locating facilities at candidate sites, the cost of transport from facilities to customers, the cost of working inventory, and the cost of safety stock. Since the optimisation problem is very complex and only small instances can be solved exactly, a "matheuristic" solution approach is then presented. This approach has a twofold objective: on the one hand, it considers a larger number of facilities and customers within the network in order to reproduce a supply chain configuration that more closely reflects a real-world context; on the other hand, it generates a starting solution and performs a series of iterations to try to improve it. Thanks to this algorithm, it was possible to obtain a solution with a lower total system cost than the initial solution. The study concludes with some reflections and a description of possible future developments.
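A schematic fix-and-optimize loop in the spirit of such a matheuristic is sketched below; all helpers (`solve_restricted`, the subproblem decomposition, the cost function) are hypothetical placeholders, not the paper's algorithm.

```python
# Generic matheuristic: start from an initial solution and repeatedly
# re-optimize one restricted subproblem at a time, keeping improvements.
def matheuristic(initial, subproblems, solve_restricted, cost, max_iters=100):
    best, best_cost = initial, cost(initial)
    for _ in range(max_iters):
        improved = False
        for free_vars in subproblems:        # e.g., one facility's region
            # fix all decisions except free_vars, solve the small (M)IP
            candidate = solve_restricted(best, free_vars)
            if cost(candidate) < best_cost:
                best, best_cost = candidate, cost(candidate)
                improved = True
        if not improved:
            break                            # no subproblem improves: stop
    return best, best_cost
```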
Abstract:
The present work aimed to use a linear programming model to optimize the use of water in the Baixo Acarau-CE Irrigation District, proposing the best combination of crops on plots of 8.0 ha. The model maximizes the net benefit of the small farmer, incorporating constraints on water and land availability and on the market. Considering the crop types and the constraints, the study led to the following conclusions: 1. Water availability in the District was not a limiting resource, whereas all available land was assigned in six of the seven cultivation plans analyzed; water became restrictive relative to land only when its availability was reduced to 60% of its actual value. 2. The combination of soursop and melon yielded the largest net benefit, R$ 5,250.00/ha/yr, with each crop planted on 50% of the plot area. 3. The plan that substitutes the soursop crop led to a decrease in annual net revenue of 5.87%, while the plan that substitutes both soursop and melon simultaneously produced the lowest net revenue, a reduction of 33.8%.
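A minimal sketch of this type of crop-mix LP follows, using scipy; all coefficients (net benefits, water demands, the 50% cap per crop) are illustrative placeholders, not the study's data.

```python
# Toy crop-mix LP: choose areas (ha) on an 8.0 ha plot to maximize
# net benefit subject to land, water and market-share limits.
import numpy as np
from scipy.optimize import linprog

crops = ["soursop", "melon"]
benefit = np.array([3000.0, 2250.0])   # assumed net benefit, R$/ha/yr
water = np.array([9000.0, 6000.0])     # assumed water use, m3/ha/yr

res = linprog(
    c=-benefit,                              # linprog minimizes: negate
    A_ub=np.vstack([np.ones(2), water]),     # total area; total water
    b_ub=[8.0, 60000.0],                     # 8.0 ha plot; water allotment
    bounds=[(0.0, 4.0)] * 2,                 # market cap: <= 50% of plot
    method="highs",
)
areas = dict(zip(crops, res.x))              # optimal area per crop
```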
Abstract:
This paper presents a methodology that aims to increase the probability of delivering power to any load point of the electrical distribution system by identifying new investments in distribution components. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling of the component outage parameters, whose fuzzy membership functions are obtained from statistical records. A mixed integer non-linear optimization technique is developed to identify investments in distribution network components that increase the availability level for any customer in the distribution system at minimum cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study on a real distribution network.
Abstract:
In the proposed model, the independent system operator (ISO) provides the opportunity to reschedule maintenance outages of generating units before each short-term (ST) time interval. Long-term (LT) scheduling 1 or 2 years in advance is essential for the ISO and the generation companies (GENCOs) to decide their LT strategies; however, it cannot be followed exactly and requires slight adjustments. The Cournot-Nash equilibrium is used to characterize the decision-making procedure of an individual GENCO over ST intervals, considering effective coordination with the LT plans. Random inputs, such as the parameters of the loads' demand functions, the hourly demand during the following ST time interval, and the expected generation pattern of the rivals, are included as scenarios in the stochastic mixed integer program that models the payoff-maximizing objective of a GENCO. Scenario reduction algorithms are used to deal with the computational burden; a schematic reduction step is sketched below. Two reliability test systems were chosen to illustrate the effectiveness of the proposed model for the ST decision-making process for future planned outages from the point of view of a GENCO.
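The sketch below shows a simplified greedy selection in the spirit of fast-forward scenario reduction; probability weights are omitted for brevity, so it illustrates the idea rather than the algorithm the paper actually uses.

```python
# Greedy fast-forward-style reduction: keep the n_keep scenarios that
# best represent the full set under Euclidean distance.
import numpy as np

def reduce_scenarios(scenarios, n_keep):
    """scenarios: (S, d) array of scenario vectors; returns kept indices."""
    S = len(scenarios)
    dist = np.linalg.norm(scenarios[:, None, :] - scenarios[None, :, :],
                          axis=-1)            # pairwise distances, (S, S)
    kept, remaining = [], set(range(S))
    d = np.full(S, np.inf)                    # distance to nearest kept one
    for _ in range(n_keep):
        # pick the scenario whose inclusion minimizes total representation error
        best = min(remaining, key=lambda j: np.minimum(d, dist[:, j]).sum())
        kept.append(best)
        remaining.discard(best)
        d = np.minimum(d, dist[:, best])
    return kept
```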
Abstract:
We study whether privatization of a public firm improves (or deteriorates) the environment in a mixed Stackelberg duopoly with the public firm as the leader. We assume that each firm can prevent pollution by undertaking abatement measures. We find that, since industry output is higher in the mixed market than in the private market, abatement levels are also higher in the mixed market, and thus the environmental tax rate in the mixed duopoly is higher than in the privatized duopoly. Furthermore, the environment is more damaged in the mixed market than in the private one. The overall effect on social welfare is that it is higher in the private market than in the mixed market.
Abstract:
Doctoral Program in Industrial and Systems Engineering.
Abstract:
Operational Research has proven to be a valuable management tool in today's increasingly competitive market. Through Linear Programming, a problem of maximizing results or minimizing production costs can be reproduced mathematically in order to assist managers in decision making. Linear Programming is a mathematical method in which the objective function and the constraints are linear, with several applications in management control, usually involving problems of allocating available resources subject to limitations imposed by the production process or by the market. The general objective of this work is to propose a Linear Programming model for production scheduling and the allocation of the required resources: optimizing a physical quantity, called the objective function, while taking into account a set of constraints endogenous to the activities under management. The crucial objective is to provide a decision-support model that contributes to the efficient allocation of the scarce resources available to the economic unit. The work carried out made clear the importance of the quantitative approach as an indispensable resource for supporting the decision process.
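As a minimal illustration of the linear structure the text describes (the generic standard form, not a model taken from the work):

\[
\max_{x \ge 0}\; c^{\top}x \quad \text{subject to} \quad A x \le b,
\]

where $x$ collects the activity levels (e.g., production quantities), $c$ their unit net contributions, and each row of $A x \le b$ a scarce-resource limit imposed by the production process or the market.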