956 results for Interior point methods


Relevance:

100.00%

Publisher:

Abstract:

A new approach called the Modified Barrier Lagrangian Function (MBLF) to solve the Optimal Reactive Power Flow problem is presented. In this approach, the inequality constraints are treated by the Modified Barrier Function (MBF) method, which has a finite convergence property: the optimal solution in the MBF method can actually lie on the boundary of the feasible set, so inequality constraints can be exactly active at the solution. Another property of the MBF method is that the barrier parameter does not need to be driven to zero to attain the solution; therefore, the conditioning of the associated Hessian matrix is greatly improved. To show this, a comparative analysis of the numerical conditioning of the Hessian matrix of the MBLF approach, via singular value decomposition, is carried out. The feasibility of the proposed approach is also demonstrated through comparative tests against the Interior Point Method (IPM) using several IEEE test systems and two networks derived from the Brazilian generation/transmission system. The results show that the MBLF method is computationally more attractive than the IPM in terms of speed, number of iterations and numerical conditioning. (C) 2011 Elsevier B.V. All rights reserved.
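
A minimal sketch of the idea, using Polyak's modified barrier for inequality constraints written as g_i(x) >= 0 (the exact Lagrangian used in the paper may differ in form): unlike the classical logarithmic barrier, the term below stays finite on the boundary g_i(x) = 0 and carries multiplier estimates λ_i that are updated between outer iterations, which is why the barrier parameter μ need not be driven to zero:

\[
M(x,\lambda,\mu) \;=\; f(x) \;-\; \mu \sum_{i} \lambda_i \,\ln\!\Big(1 + \frac{g_i(x)}{\mu}\Big),
\qquad
\lambda_i^{k+1} \;=\; \frac{\lambda_i^{k}}{1 + g_i(x^{k+1})/\mu}.
\]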

Relevance:

100.00%

Publisher:

Abstract:

The aim of solving the Optimal Power Flow problem is to determine the optimal state of an electric power transmission system, that is, the voltage magnitudes, the phase angles and the transformer tap ratios that optimize the performance of a given system while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. This paper proposes a method for handling the discrete variables of the Optimal Power Flow problem by means of a penalty function. Due to the inclusion of the penalty function in the objective function, a sequence of nonlinear programming problems with only continuous variables is obtained, and the solutions of these problems converge to a solution of the mixed problem. The resulting nonlinear programming problems are solved by a Primal-Dual Logarithmic-Barrier Method. Numerical tests using the IEEE 14, 30, 118 and 300-bus test systems indicate that the method is efficient. (C) 2012 Elsevier B.V. All rights reserved.
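
The abstract does not state the penalty explicitly; one illustrative choice (not necessarily the one used in the paper) for driving a relaxed variable x_j toward admissible discrete values spaced s_j apart, with l_j a reference admissible value such as the lower bound, is a sinusoidal term that vanishes exactly on the discrete grid:

\[
f_{\gamma}(x) \;=\; f(x) \;+\; \gamma \sum_{j \in \mathcal{D}} \sin^{2}\!\Big(\pi\,\frac{x_j - l_j}{s_j}\Big).
\]

Increasing the weight γ over the sequence of continuous nonlinear programs pushes the minimizers toward the discrete set while each subproblem remains smooth and solvable by the logarithmic-barrier method.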

Relevance:

100.00%

Publisher:

Abstract:

This is an account of some aspects of the geometry of Kähler affine metrics based on considering them as smooth metric measure spaces and applying the comparison geometry of Bakry-Émery Ricci tensors. Such techniques yield a version for Kähler affine metrics of Yau's Schwarz lemma for volume forms. By a theorem of Cheng and Yau, there is a canonical Kähler affine Einstein metric on a proper convex domain, and the Schwarz lemma gives a direct proof of its uniqueness up to homothety. The potential for this metric is a function canonically associated to the cone, characterized by the property that its level sets are hyperbolic affine spheres foliating the cone. It is shown that for an n-dimensional cone, a rescaling of the canonical potential is an n-normal barrier function in the sense of interior point methods for conic programming. It is explained also how to construct from the canonical potential Monge-Ampère metrics of both Riemannian and Lorentzian signatures, and a mean curvature zero conical Lagrangian submanifold of the flat para-Kähler space.
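
For readers coming from optimization, the standard notion invoked here (in the Nesterov-Nemirovski sense) is: F is a ν-normal barrier for a regular convex cone K if it is a self-concordant barrier on the interior of K and is logarithmically homogeneous of degree ν, i.e.

\[
\big|D^{3}F(x)[h,h,h]\big| \;\le\; 2\,\big(D^{2}F(x)[h,h]\big)^{3/2},
\qquad
F(tx) \;=\; F(x) - \nu \ln t \quad (t>0),
\]

so the claim in the abstract is that a suitable rescaling of the canonical potential of an n-dimensional cone satisfies these conditions with ν = n.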

Relevance:

100.00%

Publisher:

Abstract:

* The research is supported partly by INTAS: 04-77-7173 project, http://www.intas.be

Relevance:

100.00%

Publisher:

Abstract:

In this paper we study the problem of designing SVM classifiers when the kernel matrix, K, is affected by uncertainty. Specifically, K is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem which can be solved in principle by a polynomial-time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy we use here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditsky and Nemirovski (2011). It achieves an O(1/T^2) reduction of the initial error after T iterations. A comprehensive empirical study on both synthetic data and real-world protein structure data sets shows that the proposed formulations achieve the desired robustness, and that the saddle point based algorithm significantly outperforms the IP method.
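
To illustrate what "working directly on the convex-concave saddle function" means, here is a plain projected gradient descent-ascent step on the SVM dual with an affinely combined kernel. This is not the Juditsky-Nemirovski scheme used in the paper: the function name is mine, the weights theta are assumed to lie in the probability simplex rather than the paper's norm-bounded set, and the clip-and-renormalise "projection" is a simplification.

```python
import numpy as np

def gda_step(alpha, theta, kernels, y, step=1e-2, C=1.0):
    """One projected gradient descent-ascent step on the saddle function
    phi(theta, alpha) = 1'alpha - 0.5*(alpha*y)' K(theta) (alpha*y),
    where K(theta) = sum_i theta_i * K_i combines the given PSD kernels.
    Ascend in alpha (SVM dual), descend in theta (kernel uncertainty)."""
    K = sum(t * Ki for t, Ki in zip(theta, kernels))
    v = alpha * y
    grad_alpha = 1.0 - y * (K @ v)                                   # d phi / d alpha
    grad_theta = np.array([-0.5 * v @ (Ki @ v) for Ki in kernels])   # d phi / d theta
    alpha = np.clip(alpha + step * grad_alpha, 0.0, C)               # box 0 <= alpha <= C
    theta = np.clip(theta - step * grad_theta, 1e-12, None)
    return alpha, theta / theta.sum()                                # simplified simplex step
```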

Relevance:

100.00%

Publisher:

Abstract:

In this article, an abstract framework for the error analysis of discontinuous Galerkin methods for control-constrained optimal control problems is developed. The analysis establishes a best-approximation result from the a priori point of view and delivers a reliable and efficient a posteriori error estimator. The results are applicable to a variety of problems under only the minimal regularity guaranteed by the well-posedness of the problem. Subsequently, applications of C^0 interior penalty methods to a boundary control problem as well as a distributed control problem governed by the biharmonic equation subject to simply supported boundary conditions are discussed within the abstract framework. Numerical experiments illustrate the theoretical findings.
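
As a reminder of the terminology (a generic statement, not the paper's precise theorem): an a posteriori estimator η computed from the discrete solution u_h is reliable if it bounds the error from above and efficient if it bounds it from below, up to mesh-independent constants and data oscillation terms osc(f):

\[
c\,\eta \;-\; \mathrm{osc}(f) \;\le\; \|u - u_h\| \;\le\; C\,\eta \;+\; \mathrm{osc}(f).
\]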

Relevance:

90.00%

Publisher:

Abstract:

Two different definitions, one potential based and the other charge based, are used in the literature to define the threshold voltage of undoped-body symmetric double-gate transistors. This paper, by introducing the novel concept of a crossover point, shows that the charge-based definition is more accurate than the potential-based definition. It is shown that, for a given channel length, the potential-based definition predicts an anomalous change in threshold voltage with body thickness variation, while the charge-based definition results in a monotonic change. The threshold voltage is then extracted from drain current versus gate voltage characteristics using the linear extrapolation, transconductance and match-point methods. In all three cases it is found that the trend of threshold voltage variation supports the charge-based definition.
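
Of the three extraction methods mentioned, the linear-extrapolation one is simple to sketch: take the tangent to the measured Id-Vg curve at the bias of maximum transconductance and extrapolate it to zero drain current. The function below is a generic sketch over hypothetical measurement arrays, not the paper's code.

```python
import numpy as np

def vt_linear_extrapolation(vg, id_):
    """Threshold voltage by linear extrapolation: tangent to the Id-Vg
    curve at the point of maximum transconductance gm = dId/dVg,
    extrapolated to Id = 0."""
    gm = np.gradient(id_, vg)        # transconductance along the sweep
    k = int(np.argmax(gm))           # bias index of maximum gm
    return vg[k] - id_[k] / gm[k]    # x-intercept of the tangent line
```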

Relevance:

90.00%

Publisher:

Abstract:

We have presented a new low-dissipative kinetic scheme based on a modified Courant splitting of the molecular velocity through a parameter φ. Conditions on the split fluxes, derived from equilibrium considerations, determine φ for a one-point shock. It turns out that φ is a function of the left and right states of the shock, and that these states should satisfy the Rankine-Hugoniot jump condition. Hence φ is used in regions where the gradients are sufficiently high and is switched to unity in smooth regions. Numerical results confirm a discrete shock structure with a single interior point when the shock is aligned with the grid.

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a lower bound limit analysis approach for solving an axisymmetric stability problem by using the Drucker-Prager (D-P) yield cone in conjunction with finite elements and nonlinear optimization. In principal stress space, the tip of the yield cone has been smoothed by applying a hyperbolic approximation. The nonlinear optimization has been performed by employing an interior point method based on the logarithmic barrier function. A new proposal has also been given to simulate the D-P yield cone with the Mohr-Coulomb hexagonal yield pyramid. For the sake of illustration, bearing capacity factors N_c, N_q and N_γ have been computed, as functions of φ, for both smooth and rough circular foundations. The results obtained from the analysis compare quite well with solutions reported in the literature.
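
For reference, the kind of smoothing referred to is a standard construction (sign conventions and constants may differ from the paper's): writing the Drucker-Prager criterion in terms of the stress invariants as F = sqrt(J_2) + α I_1 - k <= 0, the vertex at sqrt(J_2) = 0 is removed by the hyperbolic approximation

\[
F_{h} \;=\; \sqrt{J_2 + a^{2}} \;+\; \alpha I_1 \;-\; k \;\le\; 0,
\]

which is smooth everywhere, lies inside the original cone, and approaches it as the small parameter a tends to zero, so the gradients needed by the logarithmic-barrier interior point method are defined even at the former apex.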

Relevance:

90.00%

Publisher:

Abstract:

Optimization methods that use first- and/or second-order optimality conditions are known to be efficient. These iterative methods are commonly developed and analyzed in the light of the mathematical analysis of n-dimensional Euclidean space, whose nature is local. Consequently, they lead to iterative algorithms that perform only local searches. Thus, the application of such algorithms to computing global minimizers of a nonlinear function, especially nonconvex and multimodal ones, depends strongly on the location of the starting points. The Topographical Global Optimization method is a clustering algorithm that uses an approach based on elementary concepts of graph theory to generate good starting points for local search methods from points uniformly distributed in the interior of the feasible region. This work has two objectives. The first is to give a new treatment of the Topographical Global Optimization method in which, for the first time, its foundations are formally described and its basic properties are mathematically proved. In this context, a semi-empirical formula is proposed for computing the key parameter of this clustering algorithm, and, using a robust and efficient interior-point feasible-directions method, the use of the Topographical Global Optimization method is extended to problems with inequality constraints. The second objective is the application of this method to phase stability analysis of thermodynamic mixtures, which consists in determining whether a given mixture is present in one or more phases. The solution of this global optimization problem is required for phase equilibrium calculations, a problem of great importance in engineering processes such as separation by distillation, extraction processes and the simulation of tertiary oil recovery, among others. Moreover, in order to obtain an initial assessment of the potential of this technique, we first solve 70 test problems and then compare the performance of the method proposed here with the MIDACO solver, a powerful software package recently introduced in the field of global optimization.
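
A minimal sketch of the clustering idea behind topographical global optimization (Törn's topograph), assuming uniformly sampled feasible points and their objective values are already available; the choice of k, the constraint handling and the local solver are outside this sketch, and the thesis' exact algorithm may differ.

```python
import numpy as np

def topographical_minima(points, fvals, k):
    """Return indices of the 'graph minima' of the k-nearest-neighbour
    topograph: sampled points whose objective value is lower than that of
    all k of their nearest neighbours. These serve as starting points for
    the subsequent local searches."""
    n = len(points)
    minima = []
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        d[i] = np.inf                      # exclude the point itself
        neigh = np.argsort(d)[:k]          # k nearest neighbours
        if all(fvals[i] < fvals[j] for j in neigh):
            minima.append(i)
    return minima
```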

Relevance:

90.00%

Publisher:

Abstract:

Alternative and more efficient computational methods can extend the applicability of model predictive control (MPC) to systems with tight real-time requirements. This paper presents a system-on-a-chip MPC system, implemented on a field-programmable gate array (FPGA), consisting of a sparse structure-exploiting primal dual interior point (PDIP) quadratic program (QP) solver for MPC reference tracking and a fast gradient QP solver for steady-state target calculation. A parallel reduced precision iterative solver is used to accelerate the solution of the set of linear equations forming the computational bottleneck of the PDIP algorithm. A numerical study of the effect of reducing the number of iterations highlights the effectiveness of the approach. The system is demonstrated with an FPGA-in-the-loop testbench controlling a nonlinear simulation of a large airliner. This paper considers many more manipulated inputs than any previous FPGA-based MPC implementation to date, yet the implementation comfortably fits into a midrange FPGA, and the controller compares well in terms of solution quality and latency to state-of-the-art QP solvers running on a standard PC. © 1993-2012 IEEE.
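
The "set of linear equations forming the computational bottleneck" is, in a generic primal-dual IPM for an inequality-constrained QP min ½ z'Hz + g'z subject to Gz <= h (the MPC formulation in the paper has additional structure from the dynamics), the Newton/augmented system solved at every iteration:

\[
\begin{bmatrix} H & G^{\top} \\ G & -\Lambda^{-1} S \end{bmatrix}
\begin{bmatrix} \Delta z \\ \Delta \lambda \end{bmatrix}
= -\begin{bmatrix} r_{d} \\ r_{p} - \Lambda^{-1} r_{c} \end{bmatrix},
\]

where S and Λ are diagonal matrices of slacks and multipliers and r_d, r_p, r_c are the dual, primal and complementarity residuals; it is this indefinite system that the paper attacks with a parallel, reduced-precision iterative solver.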

Relevance:

90.00%

Publisher:

Abstract:

We study four measures of problem instance behavior that might account for the observed differences in interior-point method (IPM) iterations when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasible regions (aspect ratios) and norms of the optimal solutions, (ii) the (Renegar-) condition measure C(d) of the data instance, (iii) a measure of the near-absence of strict complementarity of the optimal solution, and (iv) the level of degeneracy of the optimal solution. We compute these measures for the SDPLIB suite problem instances and measure the correlation between these measures and IPM iteration counts (solved using the software SDPT3) when the measures have finite values. Our conclusions are roughly as follows: the aggregate geometry measure is highly correlated with IPM iterations (CORR = 0.896), and is a very good predictor of IPM iterations, particularly for problem instances with solutions of small norm and aspect ratio. The condition measure C(d) is also correlated with IPM iterations, but less so than the aggregate geometry measure (CORR = 0.630). The near-absence of strict complementarity is weakly correlated with IPM iterations (CORR = 0.423). The level of degeneracy of the optimal solution is essentially uncorrelated with IPM iterations.
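
On the usual reading, the quoted CORR figures are sample Pearson correlation coefficients between a behavioral measure x and the IPM iteration count y, taken over the instances for which the measure is finite:

\[
\mathrm{CORR}(x,y) \;=\;
\frac{\sum_{k}(x_k-\bar{x})(y_k-\bar{y})}
{\sqrt{\sum_{k}(x_k-\bar{x})^{2}}\,\sqrt{\sum_{k}(y_k-\bar{y})^{2}}}.
\]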

Relevance:

90.00%

Publisher:

Abstract:

The majority of existing application profiling techniques aggregate and report performance costs by method or calling context. Modern large-scale object-oriented applications consist of thousands of methods with complex calling patterns. Consequently, when profiled, their performance costs tend to be thinly distributed across many thousands of locations with few easily identifiable optimisation opportunities. However, experienced performance engineers know that there are repeated patterns of method calls in the execution of an application that are induced by the libraries, design patterns and coding idioms used in the software. Automatically identifying and aggregating costs over these patterns of method calls allows us to identify opportunities to improve performance based on optimising these patterns. We have developed an analysis technique that is able to identify the entry point methods, which we call subsuming methods, of such patterns. Our offline analysis runs over previously collected runtime performance data structured in a calling context tree, such as produced by a large number of existing commercial and open source profilers. We have evaluated our approach on the DaCapo benchmark suite, showing that our analysis significantly reduces the size and complexity of the runtime performance data set, facilitating its comprehension and interpretation. We also demonstrate, with a collection of case studies, that our analysis identifies new optimisation opportunities that can lead to significant performance improvements (from 20% to over 50% improvement in our case studies).
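
To make the baseline concrete: a calling context tree (CCT) stores one node per distinct call path, and the conventional per-method report the paper starts from simply sums costs over all contexts of each method. The sketch below uses an illustrative node layout of my own, not any particular profiler's format, and does not implement the paper's subsuming-method analysis.

```python
from collections import defaultdict

class CCTNode:
    def __init__(self, method, cost=0.0, children=None):
        self.method = method        # method name at this calling context
        self.cost = cost            # exclusive cost recorded at this node
        self.children = children or []

def cost_by_method(root):
    """Aggregate exclusive costs of a calling context tree per method name,
    i.e. the thinly spread per-method view the paper argues is hard to act on."""
    totals = defaultdict(float)
    stack = [root]
    while stack:
        node = stack.pop()
        totals[node.method] += node.cost
        stack.extend(node.children)
    return dict(totals)
```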

Relevance:

90.00%

Publisher:

Abstract:

An efficient heuristic algorithm is presented in this work to solve the optimal capacitor placement problem in radial distribution systems. The proposal uses the solution of the mathematical model obtained after relaxing the integrality of the discrete variables as a strategy to identify the most attractive bus at which to add capacitors at each step of the heuristic algorithm. The relaxed mathematical model is a nonlinear programming problem and is solved using a specialized interior point method. The algorithm also incorporates an additional local search strategy that enables a group of quality solutions to be found after small alterations in the optimization strategy. The proposed methodology has been implemented and tested on electric systems known from the specialized literature, with satisfactory results compared with metaheuristic methods. (C) 2009 Elsevier Ltd. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

This paper describes a method for the decentralized solution of the optimal reactive power flow (ORPF) problem in interconnected power systems. The ORPF model is solved in a decentralized framework consisting of regions, where the transmission system operator in each area operates its system independently of the other areas, obtaining an optimal coordinated but decentralized solution. The proposed scheme is based on an augmented Lagrangian approach using the auxiliary problem principle (APP). An implementation of an interior point method is described to solve the decoupled problem in each area. The described method is successfully implemented and tested using the IEEE two-area RTS-96 test system. Numerical results comparing the solutions obtained by the traditional and the proposed decentralized methods are presented for validation. ©2008 IEEE.
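
In outline (a generic statement of the coupling, not the paper's exact APP iteration): duplicating the border variables so that area 1 holds x_1^b and area 2 holds x_2^b, the coupling constraint x_1^b = x_2^b is handled through the augmented Lagrangian

\[
\mathcal{L}_c \;=\; f_1(x_1) + f_2(x_2) + \lambda^{\top}\!\big(x_1^{b} - x_2^{b}\big) + \tfrac{c}{2}\,\big\|x_1^{b} - x_2^{b}\big\|^{2},
\qquad
\lambda^{k+1} \;=\; \lambda^{k} + c\,\big(x_1^{b,k+1} - x_2^{b,k+1}\big),
\]

with the auxiliary problem principle replacing the non-separable quadratic term by a linearization around the previous iterate, so that each area's subproblem can be solved independently by its own interior point method.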