947 results for linear programming applications
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach to two- and multi-stage optimization problems under uncertainty is scenario analysis. To this end, the uncertainty in some of the problem data is modeled by random vectors with finite, stage-specific supports. Each of these realizations represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to track the progress of the algorithm more closely. Numerical experiments on instances of multistage stochastic linear problems suggest that most existing techniques either exhibit premature convergence to a suboptimal solution or converge to the optimal solution, but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all of our experiments and was the fastest in most cases. As for the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although our method remains to be tested, our intuition is that it will reduce some of the numerical and theoretical difficulties of the progressive hedging method.
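For reference, the progressive hedging iteration around which this work revolves can be stated compactly (standard Rockafellar-Wets form; the thesis's adaptive penalty rule and linear-penalty variant are not reproduced here). With scenario objectives f_s, probabilities p_s, multipliers w_s and penalty parameter rho:

    \begin{align*}
    x_s^{k+1} &= \operatorname*{arg\,min}_{x}\; f_s(x) + (w_s^k)^\top x
                 + \tfrac{\rho}{2}\,\lVert x - \bar{x}^k \rVert^2
      && \text{(scenario subproblem)} \\
    \bar{x}^{k+1} &= \textstyle\sum_s p_s\, x_s^{k+1}
      && \text{(nonanticipative average)} \\
    w_s^{k+1} &= w_s^k + \rho\,\bigl(x_s^{k+1} - \bar{x}^{k+1}\bigr)
      && \text{(multiplier update)}
    \end{align*}

The quadratic term \tfrac{\rho}{2}\lVert x - \bar{x}^k\rVert^2 is precisely the term whose handling, and whose proposed replacement by a linear term, is discussed above.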
Abstract:
Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. These problems, with a scenario-based formulation, lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), in which Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. The method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD, called multicolumn-multicut CD, which adds multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD is used at the first level and DWD at the second level to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem, obtained from a reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain reduction procedure are integrated into the decomposition framework to improve solution efficiency. Case studies taken from renewable-resource and fossil-fuel based applications in process systems engineering show that these novel decomposition approaches offer significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
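For orientation, the scenario-based two-stage stochastic MILP targeted by these decompositions has the standard extensive form below (generic notation, not taken from the thesis), with first-stage decisions x shared across scenarios and recourse decisions y_s:

    \begin{align*}
    \min_{x,\,y_1,\dots,y_S}\quad & c^\top x + \sum_{s=1}^{S} p_s\, q_s^\top y_s \\
    \text{s.t.}\quad & A x \le b, \\
    & T_s x + W_s y_s \le h_s, \qquad s = 1,\dots,S, \\
    & x \in X \ (\text{mixed-integer}), \quad y_s \in Y_s.
    \end{align*}

BD and DWD each exploit the block structure induced by the scenarios; the cross decomposition described above alternates between the two so that both bound sequences tighten within a single framework.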
Abstract:
A High-Performance Computing (HPC) job dispatcher is a critical piece of software that assigns the finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. Because the problem is on-line, solutions must be computed in real time, and the time required must not exceed a threshold beyond which normal system functioning is affected. In addition, a job dispatcher must deal with substantial uncertainty: submission times, the number of requested resources, and job durations. Heuristic techniques have been widely used in HPC systems; they find (sub)optimal solutions in a short time, but their scheduling and resource allocation components are separated, producing decoupled decisions that may cause a performance loss. Optimization-based techniques are less commonly used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are used for modern applications, such as big data analytics and predictive model building, which in general consist of many short jobs. Job durations, however, are unknown at dispatching time, and job dispatchers need to process large numbers of jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach for tackling job dispatching problems. However, state-of-the-art CP-based job dispatchers cannot meet the challenges of on-line dispatching, such as generating dispatching decisions within a short period and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are better suited to HPC systems running modern applications: they generate on-line dispatching decisions within an acceptable time and make effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
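As a minimal illustration of CP-based dispatching (a sketch only, not the dispatchers proposed here), the following model schedules a handful of jobs on a shared resource pool with Google OR-Tools CP-SAT; the job data, capacity and makespan objective are invented for the example, and the solver time limit mimics the on-line constraint discussed above.

    # Minimal CP dispatching sketch with OR-Tools CP-SAT (pip install ortools).
    # Jobs hold some amount of a shared resource for a fixed duration;
    # AddCumulative enforces the capacity and we minimise the makespan.
    from ortools.sat.python import cp_model

    jobs = [(3, 2), (2, 4), (4, 1), (1, 3)]  # (duration, requested resources): toy data
    capacity = 4
    horizon = sum(d for d, _ in jobs)

    model = cp_model.CpModel()
    starts, ends, intervals, demands = [], [], [], []
    for i, (dur, req) in enumerate(jobs):
        s = model.NewIntVar(0, horizon, f"start_{i}")
        e = model.NewIntVar(0, horizon, f"end_{i}")
        intervals.append(model.NewIntervalVar(s, dur, e, f"job_{i}"))
        starts.append(s)
        ends.append(e)
        demands.append(req)

    model.AddCumulative(intervals, demands, capacity)
    makespan = model.NewIntVar(0, horizon, "makespan")
    model.AddMaxEquality(makespan, ends)
    model.Minimize(makespan)

    solver = cp_model.CpSolver()
    solver.parameters.max_time_in_seconds = 1.0  # on-line setting: bounded solve time
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        for i, s in enumerate(starts):
            print(f"job {i} starts at t={solver.Value(s)}")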
Abstract:
This work presents a non-linear boundary element formulation applied to the analysis of contact problems. The boundary element method (BEM) is known as a robust and accurate numerical technique for handling this type of problem, because the contact among the solids occurs along their boundaries. The proposed non-linear formulation is based on the use of singular or hyper-singular integral equations by the BEM for multi-region contact. When the contact occurs between crack surfaces, the formulation adopted is the dual version of the BEM, in which singular and hyper-singular integral equations are defined along the opposite sides of the contact boundaries. The non-linear structural behaviour at the contact is modelled using Coulomb's friction law. The non-linear formulation is based on the tangent operator, in which the derivative of the set of algebraic equations is used to construct the corrections for the non-linear process. This implicit formulation has proven as accurate as the classical approach while computing the solution faster. Examples of simple and multi-region contact problems are shown to illustrate the applicability of the proposed scheme. (C) 2011 Elsevier Ltd. All rights reserved.
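In generic terms (notation ours, not the paper's), the tangent-operator strategy is a Newton iteration on the discretised BEM system G(X) = 0, with the contact tractions constrained by Coulomb's law:

    % Newton correction built from the derivative of the algebraic system:
    \left[ \frac{\partial G}{\partial X} \right]_{X^k} \Delta X = -\,G(X^k),
    \qquad X^{k+1} = X^k + \Delta X,
    % with stick/slip on the contact zone governed by Coulomb friction:
    \lVert t_t \rVert \le \mu\, \lvert t_n \rvert \quad (\text{sliding when equality holds}),

where t_t and t_n are the tangential and normal contact tractions and mu is the friction coefficient.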
Abstract:
This work deals with the analysis of cracked structures using the BEM. Two formulations to analyse the crack growth process in quasi-brittle materials are discussed. They are based on the dual formulation of the BEM, in which two different integral equations are employed along the opposite sides of the crack surface. The first formulation uses the concept of the constant operator, in which the corrections of the non-linear process are made only by applying appropriate tractions along the crack surfaces. The second BEM formulation for analysing crack growth problems is an implicit technique based on the use of a consistent tangent operator. This formulation is accurate, stable and always requires far fewer iterations to reach equilibrium within a given load increment than the classical approach. Comparative examples on classical crack growth problems are shown to illustrate the performance of the two formulations. (C) 2009 Elsevier Ltd. All rights reserved.
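The iteration-count contrast between the two operators can be seen on a toy scalar equation (illustrative only, unrelated to the crack-growth systems above): the constant operator freezes the initial derivative, as in a modified Newton method, while the consistent tangent operator re-derives it at every step.

    # Toy comparison: constant operator (frozen derivative, modified Newton)
    # vs consistent tangent operator (full Newton) on f(x) = x**3 - 2x - 5 = 0.
    def f(x):
        return x**3 - 2.0 * x - 5.0

    def df(x):
        return 3.0 * x**2 - 2.0

    def solve(x0, tangent, tol=1e-10, max_iter=200):
        x, d0 = x0, df(x0)  # d0 is reused when the operator is kept constant
        for k in range(1, max_iter + 1):
            step = f(x) / (df(x) if tangent else d0)  # tangent: recompute derivative
            x -= step
            if abs(f(x)) < tol:
                return x, k
        return x, max_iter

    for tangent in (False, True):
        root, iters = solve(3.0, tangent)
        name = "tangent operator" if tangent else "constant operator"
        print(f"{name}: root={root:.8f} in {iters} iterations")

Both reach the same root; the tangent version typically does so in a handful of iterations, while the frozen-operator version needs many more, mirroring the comparison reported above.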
Abstract:
This study presents a solid-like finite element formulation to solve geometrically non-linear three-dimensional inhomogeneous frames. To achieve the desired representation, unconstrained vectors are used instead of the classic rigid director triad; as a consequence, the resulting formulation does not use finite rotation schemes. High-order curved elements with any cross-section are developed using a full three-dimensional constitutive elastic relation. Warping and variable thickness strain modes are introduced to avoid locking. The warping mode is solved numerically in an FEM pre-processing code, which is coupled to the main program. The extra computational cost is relatively small as the number of finite elements with the same cross-section increases. The warping mode is based on a 2D free-torsion (Saint-Venant) problem that considers inhomogeneous material. A scheme that automatically generates shape functions and their derivatives allows the use of any degree of approximation for the developed frame element. General examples are solved to check the objectivity, path independence, locking-free behavior, generality and accuracy of the proposed formulation. (C) 2009 Elsevier B.V. All rights reserved.
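The automatic generation of shape functions and their derivatives mentioned above can be sketched symbolically (our illustration, not the paper's code): 1D Lagrange polynomials of arbitrary degree on equally spaced nodes, the usual building blocks for high-order elements.

    # Generate 1D Lagrange shape functions of any degree on [-1, 1]
    # and their derivatives symbolically (pip install sympy).
    import sympy as sp

    xi = sp.symbols("xi")

    def lagrange_shape_functions(degree):
        nodes = [sp.Rational(2 * i, degree) - 1 for i in range(degree + 1)]
        phis = []
        for i, xi_i in enumerate(nodes):
            phi = sp.Integer(1)
            for j, xj in enumerate(nodes):
                if j != i:
                    phi *= (xi - xj) / (xi_i - xj)  # Lagrange basis product
            phis.append(sp.expand(phi))
        return phis, [sp.diff(phi, xi) for phi in phis]

    phis, dphis = lagrange_shape_functions(3)  # cubic element
    for i, (p, dp) in enumerate(zip(phis, dphis)):
        print(f"phi_{i} = {p}\nphi_{i}' = {sp.simplify(dp)}")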
Abstract:
This paper presents results of a verification test of a Direct Numerical Simulation code of mixed high order of accuracy using the method of manufactured solutions (MMS). The test is based on the formulation of an analytical solution for the Navier-Stokes equations modified by the addition of a source term. The numerical code was aimed at simulating the temporal evolution of instability waves in plane Poiseuille flow. The governing equations were solved in a vorticity-velocity formulation for two-dimensional incompressible flow. The code employed two different numerical schemes. One used mixed high-order compact and non-compact finite differences from fourth- to sixth-order accuracy. The other used spectral methods instead of finite differences in the streamwise direction, which was periodic. In the present test, particular attention was paid to the boundary conditions of the physical problem of interest. Indeed, verification by MMS can be more demanding than the commonly used comparison with Linear Stability Theory, particularly because the latter pays no attention to the nonlinear terms. For the present verification test, it was possible to manufacture an analytical solution that reproduced some aspects of an instability wave in a nonlinear stage. Although the results of the MMS verification for this mixed-order numerical scheme had to be interpreted with care, the test was very useful, as it gave confidence that the code was free of programming errors. Copyright (C) 2009 John Wiley & Sons, Ltd.
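The mechanics of MMS can be shown on a far simpler model than the vorticity-velocity Navier-Stokes system verified in the paper. One chooses an analytical solution, substitutes it into the PDE, and lets the residual define the source term; a correct code run with that source must then converge to the chosen solution at its formal order of accuracy. A sketch for the 1D heat equation, with a manufactured solution of our choosing:

    # Method of manufactured solutions for the 1D heat equation u_t = nu*u_xx + S.
    # Choose u, then define S so that u satisfies the modified equation exactly.
    import sympy as sp

    x, t, nu = sp.symbols("x t nu")
    u = sp.sin(sp.pi * x) * sp.exp(-t)          # manufactured solution (our choice)
    S = sp.diff(u, t) - nu * sp.diff(u, x, 2)   # source term from the PDE residual
    print("S(x, t) =", sp.simplify(S))
    # -> S = (pi**2*nu - 1)*exp(-t)*sin(pi*x)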
Abstract:
Blends of canola oil (CO) and fully hydrogenated cottonseed oil (FHCSO), with 20, 25, 30, 35 and 40% FHCSO (w/w), were interesterified under the following conditions: 0.4% sodium methoxide, 500 rpm stirring, 100 °C, 20 min. The original and interesterified blends were examined for triacylglycerol composition, melting point, solid fat content (SFC) and consistency. Interesterification caused considerable rearrangement of triacylglycerol species, a reduction of trisaturated triacylglycerol content and an increase in disaturated-monounsaturated and monosaturated-diunsaturated triacylglycerols in all blends, lowering the respective melting points. The interesterified blends showed reduced SFC at all temperatures and more linear melting profiles compared with the original blends. Consistency, expressed as yield value, decreased significantly after the reaction. Iso-solid curves indicated eutectic interactions for the original blends, which were eliminated after randomization. The 80:20, 75:25, 70:30 and 65:35 (w/w) CO:FHCSO interesterified blends showed characteristics appropriate for application as soft margarines, spreads, bakery/all-purpose shortenings, and icing shortenings, respectively. PRACTICAL APPLICATIONS: Recently, a number of studies have suggested a direct relationship between trans isomers and an increased risk of vascular disease. In response, many health organizations have recommended reducing consumption of foods containing trans fatty acids. In this context, chemical interesterification has proven to be the main alternative for obtaining plastic fats with low trans isomer content, or even trans-free. This work evaluates the chemical interesterification of binary blends of canola oil and fully hydrogenated cottonseed oil and the specific potential application of these interesterified blends in food products.
Abstract:
Blends of soybean oil (SO) and fully hydrogenated soybean oil (FHSBO), with 10%, 20%, 30%, 40% and 50% FHSBO (w/w) content, were interesterified under the following conditions: 0.4% sodium methoxide, 500 rpm stirring, 100 °C, 20 min. The original and interesterified blends were examined for triacylglycerol composition, melting point, solid fat content (SFC) and consistency. Interesterification caused considerable rearrangement of triacylglycerol species, a reduction of trisaturated triacylglycerol content and an increase in monounsaturated and diunsaturated triacylglycerols, lowering the respective melting points. The interesterified blends displayed reduced SFC at all temperatures and more linear melting profiles compared with the original blends. Yield values showed increased plasticity in the blends after the reaction. Iso-solid diagrams before and after the reaction showed no eutectic interactions. The 90:10, 80:20, 70:30 and 60:40 interesterified SO:FHSBO blends displayed characteristics suited to application as liquid shortening, table margarine, baking/confectionery fat, and all-purpose shortening/biscuit-filling base, respectively. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
This paper is devoted to the study of the class of continuous and bounded functions f : [0, ∞) → X for which there exists ω > 0 such that lim_{t→∞} (f(t + ω) − f(t)) = 0 (called S-asymptotically ω-periodic functions in the sequel). We discuss qualitative properties and establish some relationships between this type of function and the class of asymptotically ω-periodic functions. We also study the existence of S-asymptotically ω-periodic mild solutions of the first-order abstract Cauchy problem in Banach spaces. (C) 2008 Elsevier Inc. All rights reserved.
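Spelled out, with an illustrative example that is standard in this literature (added here, not quoted from the abstract):

    % Definition used above:
    f \ \text{is S-asymptotically } \omega\text{-periodic}
    \iff \lim_{t \to \infty} \bigl( f(t+\omega) - f(t) \bigr) = 0.
    % Example: f(t) = \sin(\ln(1+t)) is S-asymptotically \omega-periodic for every
    % \omega > 0, because \ln(1+t+\omega) - \ln(1+t) \to 0 and \sin is uniformly
    % continuous; yet it is not asymptotically \omega-periodic, since it admits no
    % decomposition f = g + h with g \omega-periodic and h(t) \to 0.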
Abstract:
When linear equality constraints are invariant through time, they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
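Concretely (in generic state-space notation; the paper's own notation may differ), a time-varying restriction R_t α_t = r_t on the state vector α_t is imposed by stacking it beneath the observation equation as an exact, noise-free observation before the Kalman filter is run:

    y_t = Z_t \alpha_t + \varepsilon_t
    \quad \longrightarrow \quad
    \begin{pmatrix} y_t \\ r_t \end{pmatrix}
    = \begin{pmatrix} Z_t \\ R_t \end{pmatrix} \alpha_t
    + \begin{pmatrix} \varepsilon_t \\ 0 \end{pmatrix},

so the constrained estimates fall out of the standard filtering recursions with no separate projection step.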
Abstract:
Objective. The purpose of this research was to provide further evidence of the precision and accuracy of maxillofacial linear and angular measurements obtained from cone-beam computed tomography (CBCT) images. Study design. The study population consisted of 15 dry human skulls that were submitted to CBCT, and 3-dimensional (3D) images were generated. Linear and angular measurements, based on conventional craniometric anatomical landmarks, were identified on the 3D-CBCT images twice each by 2 radiologists, independently. Subsequently, physical measurements were made by a third examiner using a digital caliper and a digital goniometer. Results. The results demonstrated no statistically significant difference between inter- and intra-examiner analyses. Regarding the accuracy test, no statistically significant differences were found between the physical and the CBCT-based linear and angular measurements for either examiner (P = .968 and .915, P = .844 and .700, respectively). Conclusions. 3D-CBCT images can be used to obtain dimensionally accurate linear and angular measurements of bony maxillofacial structures and landmarks. (Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2009; 108: 430-436)
Abstract:
Loss networks have long been used to model various types of telecommunication network, including circuit-switched networks. Such networks often use admission controls, such as trunk reservation, to optimize revenue or stabilize the behaviour of the network. Unfortunately, an exact analysis of such networks is not usually possible, and reduced-load approximations such as the Erlang Fixed Point (EFP) approximation have been widely used. The performance of these approximations is typically very good for networks without controls, under several regimes. There is evidence, however, that in networks with controls, these approximations will in general perform less well. We propose an extension to the EFP approximation that gives marked improvement for a simple ring-shaped network with trunk reservation. It is based on the idea of considering pairs of links together, thus making greater allowance for dependencies between neighbouring links than does the EFP approximation, which only considers links in isolation.
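For context, the classical EFP approximation that the pair-based extension improves on solves a fixed point over per-link blocking probabilities. A minimal sketch follows (toy data of our invention; the ring network and trunk reservation from the paper are not modelled):

    # Classical Erlang Fixed Point approximation: iterate link blocking
    # probabilities B_j = ErlangB(rho_j, C_j), where the reduced load rho_j
    # thins each route's offered traffic by blocking on its other links.
    def erlang_b(rho, C):
        # Standard stable recursion for the Erlang B formula.
        B = 1.0
        for n in range(1, C + 1):
            B = rho * B / (n + rho * B)
        return B

    # Toy data: routes given as (offered load, set of links used).
    routes = [(5.0, {0, 1}), (3.0, {1, 2}), (4.0, {0, 2})]
    capacity = [10, 10, 10]

    B = [0.0] * len(capacity)
    for _ in range(100):  # fixed-point iteration (damping omitted for brevity)
        rho = [0.0] * len(capacity)
        for load, links in routes:
            for j in links:
                thinned = load
                for k in links:
                    if k != j:
                        thinned *= 1.0 - B[k]  # independent-blocking assumption
                rho[j] += thinned
        B = [erlang_b(rho[j], capacity[j]) for j in range(len(capacity))]

    print("link blocking probabilities:", [round(b, 4) for b in B])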