983 results for Mixed-integer linear programming


Relevance: 100.00%

Abstract:

This work studies the modelling and optimization of industrial separation processes that use mixtures of ionic liquids as solvents. The solvents commonly employed in absorption or extraction processes are usually highly volatile organic compounds that are harmful to human health. The innovative properties of ionic liquids make them suitable alternatives for overcoming these problems. Their vapour pressure is very low and barely changes with temperature, so they hardly evaporate even at high temperatures. This is a major advantage for their use as industrial solvents, since the solvent can be continuously recycled at the end of the process without having to add fresh solvent to make up for evaporation losses. Moreover, because they do not evaporate, these compounds pose no inhalation hazard, unlike other solvents such as benzene. Their only health risk is therefore through direct contact or ingestion, and in fact many ionic liquids are innocuous, so they pose no health risk even through these routes. The separation processes studied in this work are governed by phase thermodynamics, specifically vapour-liquid equilibrium. For the prediction of these equilibria, COSMO (COnductor-like Screening MOdel) models were chosen. These models originate from solvation thermodynamics and quantum mechanics. In process and product development, chemists and engineers frequently need to carry out phase-equilibrium predictions. Before the development of COSMO models, group-contribution methods such as UNIFAC or activity-coefficient models such as NRTL were used. The disadvantage of these methods is that they require binary interaction parameters that can only be obtained by regression fits to experimental data. As a result, they are of little use for compounds with novel functional groups, for which no experimental data are available to perform the corresponding regressions. An alternative is to use solvation models based on quantum chemistry to characterize the molecular interactions and account for the non-ideality of the liquid phase. COSMO models allow equilibria to be predicted without regression fits to experimental data. Given the scarcity of experimental vapour-liquid equilibrium data for mixtures involving ionic liquids, COSMO models are a good alternative for predicting the equilibria of mixtures containing these materials. COSMO models use the polarized surface-charge distributions (sigma profiles) of the compounds in the mixture to predict its activity coefficients, the sigma profile of a molecule being defined as the probability distribution of its surface charge density. Two such models are COSMO-RS (Realistic Solvation) and COSMO-SAC (Segment Activity Coefficient).
COSMO-RS was the first extension of dielectric-continuum solvation models to liquid-phase thermodynamics, while COSMO-SAC is a variation of that model, as explained later. In this work, the COSMO-SAC model was used to calculate the activity coefficients of the mixtures studied. The sigma profiles of the ionic liquids were obtained with the computational chemistry software Turbomole and the quantum-chemical package COSMOtherm: Turbomole optimizes the geometry of the molecule to find its most stable configuration, while COSMOtherm generates the compound's sigma profile from the data provided by Turbomole. The sigma profiles of the remaining components were taken from the Virginia Tech-2005 Sigma Profile Database. The equilibrium was predicted from the activity coefficients using the modified Raoult's law: the fraction of each component in the vapour is assumed proportional to its fraction in the liquid, the proportionality constant being the component's activity coefficient in the mixture multiplied by its vapour pressure and divided by the system pressure. The vapour pressures of the components were obtained from the Antoine equation, which describes the relationship between temperature and vapour pressure and is derived from the Clausius-Clapeyron equation. All of these ingredients were used to model a flash separation with the Rachford-Rice algorithm, whose value lies in the derivation of a single function relating the equilibrium constants, the overall composition and the vapour fraction. The mathematical model was implemented in a code written in the numerical-analysis software MATLAB. To verify the reliability of the code, the equilibrium predictions it produced were compared with results obtained with the ASPEN PLUS chemical process simulator. Because the ASPEN PLUS databank lacks data for ionic liquids, these components were introduced as pseudocomponents, supplying only the data required to run the simulations. The COSMO-SAC model is implemented in ASPEN PLUS, so by entering the sigma profiles, cavity volumes and vapour pressures of the ionic liquids it is possible to predict vapour-liquid equilibria involving these materials. In this way the ASPEN PLUS results can be compared with those of the MATLAB code to verify the latter's reliability.
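For orientation, a minimal sketch of the flash calculation just described is given below, assuming illustrative numbers throughout; the activity coefficients, which in the thesis come from COSMO-SAC, are supplied here as plain inputs.

```python
# Minimal sketch of the flash model: modified Raoult's law for the K-values
# (K_i = gamma_i * Psat_i / P), Antoine vapour pressures, and the Rachford-Rice
# equation solved for the vapour fraction beta. Numbers are illustrative only.
import numpy as np
from scipy.optimize import brentq

def antoine_psat_mmHg(T_C, A, B, C):
    """Antoine equation, log10(Psat [mmHg]) = A - B / (T [degC] + C)."""
    return 10.0 ** (A - B / (T_C + C))

def rachford_rice_flash(z, K):
    """Solve sum_i z_i (K_i - 1) / (1 + beta (K_i - 1)) = 0 for beta in (0, 1)."""
    z, K = np.asarray(z), np.asarray(K)
    f = lambda beta: np.sum(z * (K - 1.0) / (1.0 + beta * (K - 1.0)))
    beta = brentq(f, 1e-10, 1.0 - 1e-10)      # vapour fraction
    x = z / (1.0 + beta * (K - 1.0))          # liquid-phase composition
    y = K * x                                 # vapour-phase composition
    return beta, x, y

# Hypothetical benzene + (essentially non-volatile) ionic-liquid feed
T, P = 350.0, 101.325                          # K, kPa
gamma = np.array([1.3, 1.0])                   # activity coefficients (e.g. from COSMO-SAC)
psat = np.array([antoine_psat_mmHg(T - 273.15, 6.90565, 1211.033, 220.790) * 0.133322,
                 1e-9])                        # kPa; the ionic liquid barely evaporates
K = gamma * psat / P
beta, x, y = rachford_rice_flash(z=[0.9, 0.1], K=K)
print(f"vapour fraction = {beta:.3f}", x, y)
```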
The main objective of this Master's thesis is the optimization of multicomponent ionic-liquid mixtures to maximize the efficiency of separation processes and minimize their costs. This is a non-linear optimization problem with both discrete and continuous variables, i.e. an MINLP (Mixed Integer Non-Linear Programming) problem: as will be seen later, its mathematical model is non-linear and its variables are both continuous and binary. The continuous variables correspond to the mole fractions of the ionic liquids in the mixtures and to the flow rate of the ionic-liquid mixture. In addition, one binary variable was introduced for each ionic liquid in the mixture. Each binary variable multiplies the mole fraction of its corresponding ionic liquid, so that when the variable equals 1 the ionic liquid is present in the mixture and when it equals 0 it is not. The use of these variables therefore requires algorithms for MINLP problems; if all the variables were continuous, algorithms for NLP (Non-Linear Programming) problems would suffice. Several algorithms from the OPTI Toolbox package for MATLAB were therefore tested to determine which is the most suitable for this problem. Finally, once the code was validated, several ionic-liquid mixtures were optimized to achieve the maximum recovery of aromatic compounds in a process for the absorption of organic mixtures. The code was also used to minimize the purchase cost of the ionic liquids in the solvent mixture used in the absorption operation; in this case it was necessary to add constraints on the recovery of aromatics in the liquid phase or on the purity of the mixture obtained once the ionic-liquid mixture is separated. Both problems (maximization of benzene recovery and minimization of operating cost) were modelled using only continuous variables (the mole fractions or molar amounts of the ionic liquids) as well as using both continuous and binary variables (one per ionic liquid involved in the mixtures).
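As a rough sketch of the variable structure just described (illustrative only, not the thesis's exact formulation):

```latex
% x_i : mole fraction of ionic liquid i in the solvent blend (continuous)
% F   : solvent flow rate (continuous)
% y_i : 1 if ionic liquid i is included in the blend, 0 otherwise (binary)
\begin{align*}
  \max_{x,\,y,\,F}\quad & \text{aromatic recovery}(x, F)
      \qquad \text{or} \qquad \min \textstyle\sum_i c_i\, F\, x_i \\
  \text{s.t.}\quad & x_i \le y_i, \qquad i = 1,\dots,n
      && \text{(an ionic liquid can only appear if selected)} \\
  & \textstyle\sum_i x_i = 1, \\
  & \text{non-linear VLE / absorption model linking } (x, F) \text{ to recovery and purity}, \\
  & x_i \ge 0, \qquad y_i \in \{0,1\}.
\end{align*}
```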

Relevance: 100.00%

Abstract:

Mathematical programming can be used for the optimal design of shell-and-tube heat exchangers (STHEs). This paper proposes a mixed-integer non-linear programming (MINLP) model for the design of STHEs that rigorously follows the standards of the Tubular Exchanger Manufacturers Association (TEMA). The Bell–Delaware method is used for the shell-side calculations. This approach produces a large, non-convex model that cannot be solved to global optimality with current state-of-the-art solvers. Nevertheless, a sequential optimization approach over partial objective targets is proposed, dividing the problem into sets of related equations that are easier to solve. For each of these sub-problems a heuristic objective function is selected based on the physical behavior of the problem. The global optimum of the original problem cannot be guaranteed even if each sub-problem is solved to global optimality, but a very good solution is always obtained. Three cases extracted from the literature were studied. The results showed that in all cases the values obtained using the proposed MINLP model with multiple objective functions improved on the values reported in the literature.
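The sequential, partial-objective idea can be pictured with a toy two-stage example; this is a sketch under hypothetical stage objectives, not the paper's TEMA/Bell–Delaware model.

```python
# Toy illustration of sequential optimization with partial (heuristic) objectives:
# a non-convex design problem is split into two easier stages, each solved with
# its own surrogate objective, and the first stage's decision is frozen before
# the second stage runs. All functions and numbers below are hypothetical.
import numpy as np
from scipy.optimize import minimize

# Hypothetical design variables (x1, x2); the true goal is total cost.
total_cost = lambda x1, x2: (x1 - 1.5) ** 2 * (x2 + 0.5) + np.cos(3 * x1) + x2 ** 2

# Stage 1: choose x1 alone against a heuristic surrogate (say, "minimize area").
stage1 = minimize(lambda v: (v[0] - 1.5) ** 2 + np.cos(3 * v[0]), x0=[0.0])
x1_fixed = float(stage1.x[0])

# Stage 2: with x1 frozen, choose x2 against the full cost.
stage2 = minimize(lambda v: total_cost(x1_fixed, v[0]), x0=[0.0])
x2_opt = float(stage2.x[0])

print(f"x1={x1_fixed:.3f}, x2={x2_opt:.3f}, cost={total_cost(x1_fixed, x2_opt):.3f}")
# The result is feasible and usually good, but global optimality is not
# guaranteed, exactly as the abstract notes.
```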

Relevance: 100.00%

Abstract:

In this study, a novel kind of hybrid pigment based on nanoclays and dyes was synthesized and characterized. These nanoclay-based pigments (NCPs) were prepared in the laboratory from sodium montmorillonite nanoclay (NC) and methylene blue (MB). The proportion of the NC cation-exchange capacity exchanged with MB was varied to obtain a wide color gamut. The synthesized nanopigments were thoroughly characterized. The NCPs were then melt-mixed with linear low-density polyethylene (PE) in an internal mixer, and reference samples with conventional colorants were prepared in the same way. The mechanical, thermal, and colorimetric properties of the mixtures were assessed. The PE–NCP samples developed better color properties than the reference samples containing conventional colorants, and their other properties were maintained or improved, even though the NCPs contained less dye than the conventional colorants.

Relevance: 100.00%

Abstract:

Multiobjective Generalized Disjunctive Programming (MO-GDP) optimization has been used for the synthesis of an important industrial process, isobutane alkylation. The two objective functions optimized simultaneously are the environmental impact, determined by means of LCA (Life Cycle Assessment), and the economic potential of the process. The main reason for including the minimization of the environmental impact in the optimization is the widespread environmental concern of the general public. To solve the problem we employed a hybrid simulation-optimization methodology: the superstructure of the process was developed directly in a chemical process simulator connected to a state-of-the-art optimizer. The model was formulated as a GDP and solved using a logic-based algorithm that avoids reformulation as an MINLP (Mixed-Integer Non-Linear Programming problem). The optimization yielded Pareto curves composed of three different configurations, with the LCA assessed through two different metrics: global warming potential and Eco-indicator 99.
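For illustration, a bi-objective Pareto front of the economic-versus-environmental kind described above can be traced with a simple epsilon-constraint loop; the sketch below uses hypothetical toy functions and is not the logic-based GDP algorithm employed in the work.

```python
# Epsilon-constraint sweep over a toy two-objective problem: maximize economic
# potential subject to an environmental-impact budget eps, then vary eps.
# Both objective surrogates are hypothetical.
import numpy as np
from scipy.optimize import minimize

profit = lambda d: -(-(d[0] - 2.0) ** 2 + 4.0 + d[1])   # negative economic potential
impact = lambda d: 0.5 * d[0] ** 2 + d[1] ** 2            # LCA-style impact surrogate

pareto = []
for eps in np.linspace(0.5, 6.0, 8):                      # allowed impact budget
    res = minimize(profit, x0=[1.0, 0.5],
                   constraints=[{"type": "ineq", "fun": lambda d, e=eps: e - impact(d)}],
                   bounds=[(0, 4), (0, 2)])
    pareto.append((impact(res.x), -res.fun))               # (impact, economic potential)

for imp, econ in pareto:
    print(f"impact={imp:.2f}  economic potential={econ:.2f}")
```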

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 100.00%

Abstract:

This paper investigates a cross-layer design approach for minimizing energy consumption and maximizing the network lifetime (NL) of a multiple-source, single-sink (MSSS) WSN with energy constraints. The optimization problem for an MSSS WSN can be formulated as a mixed-integer convex optimization problem when time division multiple access (TDMA) is adopted in the medium access control (MAC) layer, and it becomes a convex problem when the integer constraint on time slots is relaxed. The impacts of data rate, link access and routing are jointly taken into account in the formulation. Both linear and planar network topologies are considered for NL maximization (NLM). For the linear MSSS and planar single-source, single-sink (SSSS) topologies, we use the Karush-Kuhn-Tucker (KKT) optimality conditions to derive analytical expressions for the optimal NL when all nodes exhaust their energy simultaneously. The planar MSSS topology is more complicated, and a decomposition and combination (D&C) approach is proposed to compute suboptimal solutions. An analytical expression of the suboptimal NL is derived for a small-scale planar network, and an iterative algorithm based on the D&C approach is proposed for larger-scale planar networks. Numerical results show that the upper bounds on the network lifetime obtained by the proposed optimization models are tight. Important insights into the NL and the benefits of cross-layer design for WSN NLM are obtained.
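For intuition, the sketch below writes down a small lifetime-maximization program of the relaxed, convex (in fact linear) kind described above, using cvxpy on a hypothetical four-node topology; the paper's full cross-layer model additionally couples rates, TDMA link access and routing in more detail.

```python
# Relaxed network-lifetime LP: ship f[i, j] bits over link (i, j) during the
# whole lifetime T, respect flow conservation at the sources and each node's
# energy budget, and maximize T. Topology, energies and rates are hypothetical.
import cvxpy as cp
import numpy as np

n = 4                                    # nodes 0..2 are sources, node 3 is the sink
E = np.array([1.0, 1.0, 1.0, np.inf])    # initial energy budgets [J]; sink unconstrained
r = np.array([1e3, 1e3, 1e3, 0.0])       # source bit rates [bit/s]
etx, erx = 50e-9, 50e-9                  # transmit / receive energy per bit [J/bit]

f = cp.Variable((n, n), nonneg=True)     # bits carried by link (i, j) over the lifetime
T = cp.Variable(nonneg=True)             # network lifetime [s]

cons = [cp.diag(f) == 0]
for i in range(n - 1):                   # flow conservation at every non-sink node
    cons.append(cp.sum(f[i, :]) - cp.sum(f[:, i]) == r[i] * T)
for i in range(n - 1):                   # energy budget at every non-sink node
    cons.append(etx * cp.sum(f[i, :]) + erx * cp.sum(f[:, i]) <= E[i])

prob = cp.Problem(cp.Maximize(T), cons)
prob.solve()
print("relaxed network-lifetime upper bound [s]:", T.value)
```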

Relevance: 100.00%

Abstract:

One of the most widely studied protein structure prediction models is the hydrophobic-hydrophilic (HP) model, which captures the hydrophobic interaction by maximizing the number of contacts among hydrophobic amino acids. A number of heuristics have been proposed for finding lower bounds on the number of contacts, but finding the optimal solution remains a challenge. In this research, we focus on a new integer programming model that provides tractable input for mixed-integer programming solvers, is sufficiently general, and admits relaxations with provably good upper bounds. Computational experiments using benchmark problems show that our formulation achieves these goals.
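To fix ideas, a generic lattice HP contact-maximization integer program has the following shape (a sketch, not necessarily the exact formulation proposed here); its linear relaxation is what yields upper bounds on the number of contacts.

```latex
% x_{v,p} = 1 if residue v occupies lattice point p; N(p) = lattice neighbours of p;
% c_{uv} = 1 if hydrophobic residues u, v (non-adjacent in the chain) are in contact.
\begin{align*}
  \max\quad & \sum_{\substack{u,v \in H \\ |u-v| > 1}} c_{uv} \\
  \text{s.t.}\quad
  & \sum_{p} x_{v,p} = 1 \;\; \forall v,
    \qquad \sum_{v} x_{v,p} \le 1 \;\; \forall p \\
  & x_{v,p} \le \sum_{q \in N(p)} x_{v+1,q} \;\; \forall v < n,\; \forall p
    && \text{(consecutive residues sit on neighbouring points)} \\
  & c_{uv} + x_{u,p} - \sum_{q \in N(p)} x_{v,q} \le 1 \;\; \forall u, v,\; \forall p
    && \text{(a contact requires neighbouring positions)} \\
  & x_{v,p},\, c_{uv} \in \{0,1\}.
\end{align*}
```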

Relevance: 100.00%

Abstract:

2010 Mathematics Subject Classification: 97D40, 97M10, 97M40, 97N60, 97N80, 97R80

Relevance: 100.00%

Abstract:

This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of all the PCBs in the batch does not violate the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and a bottleneck; consequently, the makespan has to be minimized. A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, the number of linear constraints, and run time. A procedure to compute a lower bound is also proposed; for sparse problems (i.e. when job ready times are widely dispersed), the lower bounds are close to the optimum. The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, in particular to solve problem instances that require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches in terms of solution quality and run time. The decomposition approach improved the lower bounds (linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature, and for sparse problems almost all the heuristics report a solution close to the optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as the run time is very short.
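For orientation, a batching MIP of this type has the following shape; this is a compact single-chamber sketch, whereas the formulations compared in the abstract cover identical parallel chambers.

```latex
% x_{jb} = 1 if PCB j goes into batch b; s_j, p_j, r_j are its size, processing
% time and ready time; S is the chamber capacity; batches are processed in index order.
\begin{align*}
  \min\quad & C_{\max} \\
  \text{s.t.}\quad
  & \sum_{b} x_{jb} = 1 \;\; \forall j,
    \qquad \sum_{j} s_j\, x_{jb} \le S \;\; \forall b \\
  & P_b \ge p_j\, x_{jb}, \qquad R_b \ge r_j\, x_{jb} \;\; \forall j, b
    && \text{(batch time = longest job; batch ready = last arrival)} \\
  & C_b \ge R_b + P_b, \qquad C_b \ge C_{b-1} + P_b \;\; \forall b, \qquad C_0 = 0 \\
  & C_{\max} \ge C_b \;\; \forall b, \qquad x_{jb} \in \{0,1\}, \quad P_b, R_b, C_b \ge 0 .
\end{align*}
```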

Relevance: 100.00%

Abstract:

Cooperative communication has gained much interest due to its ability to exploit the broadcast nature of the wireless medium to mitigate multipath fading. There has been a considerable amount of research on how cooperative transmission can improve network performance, focusing on physical-layer issues. In recent years, researchers have started to consider cooperative transmission in routing, and there has been growing interest in designing and evaluating cooperative routing protocols. Most existing cooperative routing algorithms are designed to reduce energy consumption; however, packet collision minimization using cooperative routing has not yet been addressed. This dissertation presents an optimization framework to minimize collision probability using cooperative routing in wireless sensor networks. More specifically, we develop a mathematical model and formulate the problem as a large-scale Mixed Integer Non-Linear Programming problem. We also propose a solution based on the branch and bound algorithm augmented with search-space reduction (branch and bound space reduction). The proposed strategy builds the optimal routes from each source to the sink node by providing the best set of hops in each route, the best set of relays, and the optimal power allocation for the cooperative transmission links. To reduce the computational complexity, we propose two near-optimal cooperative routing algorithms. In the first, the optimal power allocation scheme is decoupled from the route selection, so the problem is formulated as an Integer Non-Linear Programming problem, which is solved using the branch and bound space reduction method. In the second, both the transmission power and the relay node selection are decoupled from the route selection, and the power allocation is applied to the selected route after the routing problem is solved. Simulation results show that the algorithms can significantly reduce the collision probability compared with existing cooperative routing schemes.
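The branch-and-bound machinery referred to above can be sketched generically as follows; this toy example solves a small 0-1 problem through LP relaxations and is only an illustration of the search scheme, not the dissertation's MINLP algorithm.

```python
# Generic branch-and-bound skeleton on a tiny 0-1 knapsack-style ILP,
# using LP relaxations for bounding (illustrative only).
import math
from scipy.optimize import linprog

c = [-8, -11, -6, -4]                     # maximize 8x1 + 11x2 + 6x3 + 4x4
A_ub, b_ub = [[5, 7, 4, 3]], [14]         # single capacity constraint

best_val, best_x = -math.inf, None
stack = [dict()]                          # each node: {variable index: fixed 0/1 value}
while stack:
    fixed = stack.pop()
    bounds = [(fixed.get(i, 0), fixed.get(i, 1)) for i in range(4)]
    rel = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if not rel.success or -rel.fun <= best_val:
        continue                          # infeasible node, or LP bound cannot beat incumbent
    frac = [i for i, v in enumerate(rel.x) if abs(v - round(v)) > 1e-6]
    if not frac:                          # integral relaxation: new incumbent
        best_val, best_x = -rel.fun, [int(round(v)) for v in rel.x]
        continue
    i = frac[0]                           # branch on the first fractional variable
    stack.append({**fixed, i: 0})
    stack.append({**fixed, i: 1})

print("best value:", best_val, "solution:", best_x)
```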

Relevance: 100.00%

Abstract:

I explore and analyze the problem of finding the socially optimal capital requirements for financial institutions, considering two distinct channels of contagion: direct exposures among the institutions, represented by a network, and fire-sales externalities, which reflect the negative price impact of massive liquidation of assets. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults among the interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker considers this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns.

In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show the optimal capital requirements can be found by solving a stochastic mixed integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be efficiently executed. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing it with lower bounds based on a dual approach. I also show that the iterative algorithm converges to the optimal solution.
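As a point of reference, the clearing-payment building block of the direct-exposure channel can be computed with a standard Eisenberg-Noe-type linear program; the sketch below does this for a hypothetical three-bank network. The dissertation's capital-requirement programs wrap constraints of this kind across scenarios and, when default costs are present, add binary default indicators.

```python
# Clearing payments of a toy 3-bank network from the standard LP
#   max sum(p)  s.t.  p <= pbar,  p <= e + Pi^T p
# (a well-known device for the direct-exposure channel; all numbers hypothetical).
import cvxpy as cp
import numpy as np

pbar = np.array([10.0, 8.0, 6.0])            # total interbank liabilities of each bank
L = np.array([[0.0, 6.0, 4.0],               # L[i, j]: amount bank i owes bank j
              [5.0, 0.0, 3.0],
              [3.0, 3.0, 0.0]])
Pi = L / pbar[:, None]                        # relative liabilities matrix
e = np.array([0.5, 1.0, 0.5])                 # outside (non-interbank) assets

p = cp.Variable(3, nonneg=True)
cons = [p <= pbar, p <= e + Pi.T @ p]
cp.Problem(cp.Maximize(cp.sum(p)), cons).solve()

print("clearing payments:", np.round(p.value, 3))
print("defaulting banks :", np.where(p.value < pbar - 1e-6)[0])
```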

Finally, I incorporate fire sales externalities into the model. In particular, I am able to extend the analysis of systemic risk and the optimal capital requirements with a single illiquid asset to a model with multiple illiquid assets. The model with multiple illiquid assets incorporates liquidation rules used by the banks. I provide an optimization formulation whose solution provides the equilibrium payments for a given liquidation rule.

I further show that the socially optimal capital problem using the "socially optimal liquidation" and prioritized liquidation rules can be formulated as a convex problem and a convex mixed-integer problem, respectively. Finally, I illustrate the results of the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.

Relevance: 100.00%

Abstract:

In this paper, a joint location-inventory model is proposed that simultaneously optimises strategic supply chain design decisions, such as facility location and the allocation of customers to facilities, and tactical-operational inventory management and production scheduling decisions. All of this is analysed in a context of demand and supply uncertainty. While demand uncertainty stems from potential fluctuations in customer demands over time, supply-side uncertainty is associated with the risk of “disruption” to which facilities may be subject, caused by external factors such as natural disasters, strikes, changes of ownership and information technology security incidents. The proposed model is formulated as a non-linear mixed-integer programming problem that minimises the expected total cost, which includes four basic cost items: the fixed cost of locating facilities at candidate sites, the cost of transport from facilities to customers, the cost of working inventory, and the cost of safety stock. Since the optimisation problem is very complex and only small instances can be solved exactly, a “matheuristic” solution approach is then presented. This approach has a twofold objective: on the one hand, it considers a larger number of facilities and customers in the network, reproducing a supply chain configuration closer to a real-world context; on the other hand, it generates a starting solution and performs a series of iterations to improve it. The algorithm obtained a solution with a lower total system cost than the initial solution. The study concludes with some reflections and possible directions for future research.
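For reference, the classic joint location-inventory cost structure behind models of this kind looks as follows; this is a sketch to fix ideas (Daskin/Shen-type), while the paper's formulation additionally covers production scheduling and facility disruptions.

```latex
% y_j = 1 if a facility opens at candidate site j, x_{ij} = 1 if customer i is
% assigned to j, mu_i and sigma_i^2 are customer i's demand mean and variance,
% and L_j is the replenishment lead time of facility j.
\begin{align*}
  \min\quad
  & \underbrace{\sum_j f_j\, y_j}_{\text{location}}
  + \underbrace{\sum_j \sum_i c_{ij}\, \mu_i\, x_{ij}}_{\text{transport}}
  + \underbrace{\sum_j K_j \sqrt{\textstyle\sum_i \mu_i\, x_{ij}}}_{\text{working inventory}}
  + \underbrace{\sum_j z_\alpha \sqrt{L_j \textstyle\sum_i \sigma_i^2\, x_{ij}}}_{\text{safety stock}} \\
  \text{s.t.}\quad
  & \sum_j x_{ij} = 1 \;\; \forall i, \qquad x_{ij} \le y_j \;\; \forall i, j, \qquad x_{ij},\, y_j \in \{0,1\},
\end{align*}
```

i.e. the four cost items listed in the abstract: fixed location cost, transport, working inventory and safety stock.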

Relevance: 100.00%

Abstract:

In this work, all publicly accessible published findings on Alicyclobacillus acidoterrestris heat resistance in fruit beverages as affected by temperature and pH were compiled. Study characteristics (protocols, fruit and variety, °Brix, pH, temperature, heating medium, culture medium, inactivation method, strains, etc.) were then extracted from the primary studies, and some of them were incorporated into a meta-analysis mixed-effects linear model based on the basic Bigelow equation describing the heat resistance parameters of this bacterium. The model estimated mean D* values (the time needed for a one-log reduction at a temperature of 95 °C and a pH of 3.5) of Alicyclobacillus in beverages of different fruits, two concentration types, with and without bacteriocins, and with and without clarification. The zT values (the temperature change needed to cause a tenfold change in D-value) estimated by the meta-analysis model were compared to the 'observed' zT values reported in the primary studies, and in all cases the latter fell within the confidence intervals of the model. The model was capable of predicting the heat resistance parameters of Alicyclobacillus in fruit beverages beyond the types available in the meta-analytical data. It is expected that this compilation of the thermal resistance of Alicyclobacillus in fruit beverages will be of use to food quality managers in determining or validating the lethality of their current heat treatment processes.
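For reference, the Bigelow-type secondary model on which such a meta-analysis is built can be written as follows; this is a sketch using the reference conditions quoted in the abstract (T* = 95 °C, pH* = 3.5), whereas the published model additionally includes random study effects.

```latex
\log_{10} D(T, \mathrm{pH}) \;=\; \log_{10} D^{*}
  \;-\; \frac{T - T^{*}}{z_T}
  \;-\; \frac{\mathrm{pH} - \mathrm{pH}^{*}}{z_{\mathrm{pH}}}
```

Here D* is the D-value at the reference conditions, and zT and zpH control how strongly the D-value decreases as temperature rises or pH departs from the reference.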

Relevance: 100.00%

Abstract:

Introduction: The purpose of this study was to compare the electromyography index of muscle coactivation of the following muscle pairs: posterior deltoid and pectoralis major (PD/PM); triceps brachii and biceps brachii (TB/BB); and serratus anterior and upper trapezius (SA/UT) during three different closed kinetic chain exercises (wall-press, bench-press and push-up) on an unstable surface at the maximal load. Methods: A total of 20 healthy sedentary men participated in the study. Integral linear values were obtained from three sustained contractions of six seconds each for the three proposed exercises. Mean coactivation index values were compared using the mixed-effects linear model, with a five percent significance level. Results: Electromyography indexes of muscle coactivation showed significant differences for the PD/PM and TB/BB muscle pairs. No differences were found between exercises for the SA/UT muscle pair. Conclusion: Our results seem to differ from those of previous studies, which reported that the similarity in exercises performed is responsible for the comparable muscle activation levels.

Relevance: 100.00%

Abstract:

Purpose: To evaluate the patellar kinematics of volunteers without knee pain at rest and during isometric contraction in open- and closed-kinetic-chain exercises. Methods: Twenty individuals took part in this study. All were submitted to magnetic resonance imaging (MRI) during rest and voluntary isometric contraction (VIC) in the open and closed kinetic chain at 15 degrees, 30 degrees, and 45 degrees of knee flexion. Through MRI and using medical e-film software, the following measurements were evaluated: sulcus angle, patellar-tilt angle, and bisect offset. A mixed-effects linear model was used for comparisons between knee positions, between rest and isometric contractions, and between the exercises. Results: Data analysis revealed that the sulcus angle decreased as knee flexion increased and increased with isometric contractions in both the open and closed kinetic chain at all knee-flexion angles. The patellar-tilt angle decreased with isometric contractions in both the open and closed kinetic chain for every knee position; however, in the closed kinetic chain, patellar tilt increased significantly with the knee flexed at 15 degrees. The bisect offset increased with the knee flexed at 15 degrees during isometric contractions and decreased as knee flexion increased during both exercises. Conclusion: VIC in the last degrees of knee extension may compromise patellar dynamics. On the other hand, it is possible to favor patellar stability by performing muscle contractions with the knee flexed at 30 degrees or 45 degrees in either the open or closed kinetic chain.