990 results for Interior Points Methods
Abstract:
We study the preconditioning of the symmetric indefinite linear systems of equations that arise in the interior point solution of linear optimization problems. The preconditioning method studied exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the preconditioned matrix and hence the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior point method. Numerical experiments on LP test problems from the NETLIB suite demonstrate the potential of the preconditioning method discussed.
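For orientation (a textbook form, not reproduced from the paper): at each interior point iteration for an LP min c^T x subject to Ax = b, x >= 0, the Newton step reduces to an augmented (KKT) system

\[
\begin{bmatrix} -\Theta^{-1} & A^{T} \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
=
\begin{bmatrix} r_{1} \\ r_{2} \end{bmatrix},
\qquad \Theta = X S^{-1},\quad X = \operatorname{diag}(x),\quad S = \operatorname{diag}(s).
\]

A preconditioner that mirrors this 2-by-2 block structure clusters the eigenvalues of the preconditioned matrix, which is what accelerates the Krylov iteration used to compute the Newton direction.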
Abstract:
This paper presents case studies in power systems using Sensitivity Analysis (SA) guided by Optimal Power Flow (OPF) problems in different operation scenarios. The case studies start from a known optimal solution obtained by OPF. This optimal solution is called the base case, and from it new operation points may be evaluated by SA when perturbations occur in the system. The SA is based on Fiacco's theorem and has the advantage of not being an iterative process. To demonstrate the good performance of the proposed technique, tests were carried out on the IEEE 14-, 118- and 300-bus systems. (C) 2010 Elsevier Ltd. All rights reserved.
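For context, the sensitivity result behind such studies, stated here in a generic form that is not copied from the paper: if z* = (x*, lambda*, pi*) solves the KKT system F(z, epsilon) = 0 of the parameterized OPF at epsilon_0, then under standard regularity assumptions Fiacco's theorem gives

\[
\frac{dz}{d\varepsilon} = -\big[ \nabla_{z} F(z^{*}, \varepsilon_{0}) \big]^{-1} \nabla_{\varepsilon} F(z^{*}, \varepsilon_{0}),
\qquad
z(\varepsilon_{0} + \Delta\varepsilon) \approx z^{*} + \frac{dz}{d\varepsilon}\, \Delta\varepsilon,
\]

so a perturbed operating point follows from a single linear solve at the base case instead of a fresh iterative OPF run.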
Abstract:
Most post-processors for boundary element (BE) analysis use an auxiliary domain mesh to display domain results, undermining the main advantage of a pure boundary discretization. This paper introduces a novel visualization technique that preserves the basic properties of boundary element methods. The proposed algorithm requires no domain discretization and is based on the direct and automatic identification of isolines. Another critical aspect of visualizing domain results in BE analysis is the effort required to evaluate results at interior points. To address this issue, the present article also compares the performance of two different BE formulations (conventional and hybrid). In addition, this paper presents an overview of the most common post-processing and visualization techniques in BE analysis, such as the classical scan-line algorithms and interpolation over a domain discretization. The results presented herein show that the proposed algorithm offers very high performance compared with other visualization procedures.
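For reference, the identity that makes interior point evaluation costly in conventional BE analysis (standard potential theory for a 2-D Laplace problem, not taken from the paper): every interior value requires a fresh integration over the whole boundary,

\[
u(\xi) = \int_{\Gamma} \big[ u^{*}(\xi, x)\, q(x) - q^{*}(\xi, x)\, u(x) \big]\, d\Gamma(x),
\qquad
u^{*}(\xi, x) = -\frac{1}{2\pi} \ln r(\xi, x),
\]

where u and q = du/dn are the boundary potential and flux, and r is the distance from the source point xi to the field point x.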
Abstract:
In this paper, short-term hydroelectric scheduling is formulated as a network flow optimization model and solved by interior point methods. The primal-dual and predictor-corrector versions of such interior point methods are developed and the resulting matrix structure is exploited. This structure leads to very fast iterations, since it avoids the computation and factorization of impedance matrices. For each time interval, the linear algebra reduces to the solution of two linear systems whose dimensions equal either the number of buses or the number of independent loops. Either matrix is invariant and can be factored off-line. As a consequence of these matrix manipulations, a linear system that changes at each iteration has to be solved, but its size is reduced to the number of generating units and does not depend on the number of time intervals. These methods were applied to IEEE and Brazilian power systems, and numerical results were obtained using a MATLAB implementation. Both interior point methods proved to be robust and achieved fast convergence for all instances tested. (C) 2004 Elsevier Ltd. All rights reserved.
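For orientation, the per-iteration Newton system of a generic primal-dual method (the network-flow specialization exploited in the paper is not reproduced here):

\[
A\, \Delta x = r_{p}, \qquad
A^{T} \Delta y + \Delta s = r_{d}, \qquad
S\, \Delta x + X\, \Delta s = \sigma \mu e - X S e.
\]

The predictor-corrector variant first solves with sigma = 0 (affine-scaling predictor) and then re-solves with a centering-plus-corrector right-hand side; only the right-hand sides change between the two solves, so the factorization is reused.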
Abstract:
This paper presents an adaptation of the dual-affine interior point method for the surface flatness problem. To determine how flat a surface is, one must find two parallel planes such that the surface lies between them and they are as close together as possible. This problem is equivalent to solving inconsistent linear systems in the Chebyshev norm. An algorithm is proposed, and results are presented and compared with others published in the literature. (C) 2006 Elsevier B.V. All rights reserved.
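A minimal statement of the underlying minimax problem (a common formulation; the paper's exact setup may differ): given measured points (x_i, y_i, z_i), fit a plane z = a x + b y + c minimizing the largest deviation,

\[
\min_{a,\,b,\,c,\,h} \; h
\quad \text{subject to} \quad
-h \le z_{i} - (a x_{i} + b y_{i} + c) \le h, \qquad i = 1, \dots, m,
\]

a linear program whose optimum h gives, up to normalization of the plane normal, the half-distance between the two enclosing parallel planes.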
Abstract:
In this paper a method for solving the Short Term Transmission Network Expansion Planning (STTNEP) problem is presented. The STTNEP is a very complex mixed-integer nonlinear programming problem whose search space suffers a combinatorial explosion. In this work we present a constructive heuristic algorithm that finds solutions of the STTNEP of excellent quality. In each step of the algorithm a sensitivity index is used to add a circuit (transmission line or transformer) to the system. This sensitivity index is obtained by solving the STTNEP problem with the number of circuits to be added treated as a continuous variable (the relaxed problem). The relaxed problem is a large, complex nonlinear program and was solved with an interior point method that combines the multiple predictor-corrector and multiple centrality corrections methods, both belonging to the family of higher-order interior point methods (HOIPM). Tests were carried out using a modified Garver system, and the results show the good performance of both the constructive heuristic algorithm for the STTNEP problem and the HOIPM used in each step.
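A minimal sketch of the constructive loop described above, with hypothetical placeholder functions rather than the authors' implementation:

    def solve_relaxed_nlp(network, candidates):
        # Placeholder for the relaxed STTNEP solved by the HOIPM: returns
        # the continuous number of circuits to add on each candidate branch.
        return {c: 0.0 for c in candidates}

    def overloads_remain(network):
        # Placeholder feasibility test, e.g. any overloaded branch left.
        return False

    def constructive_expansion(network, candidates):
        added = []
        while overloads_remain(network):
            n_relaxed = solve_relaxed_nlp(network, candidates)
            # Sensitivity index: add the circuit with the largest
            # continuous addition in the relaxed optimum.
            k = max(candidates, key=lambda c: n_relaxed[c])
            network.setdefault("circuits", []).append(k)
            added.append(k)
        return added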
Using interior point algorithms for the solution of linear programs with special structural features
Abstract:
Linear Programming (LP) is a powerful decision-making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions are included for improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented, taking sparsity into account. The relative performance of these algorithms on randomly generated separable and non-separable problems is also reported.
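As an illustration of the linear-algebra kernel mentioned above, a minimal preconditioned conjugate gradient routine for the normal equations (A D^2 A^T) y = b that arise at each interior point iteration; the diagonal (Jacobi) preconditioner in the usage example is an assumption for illustration, not the work's particular choice:

    import numpy as np

    def pcg(matvec, b, M_inv, tol=1e-8, maxit=500):
        # Preconditioned conjugate gradient for SPD systems.
        x = np.zeros_like(b)
        r = b - matvec(x)
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(maxit):
            Ap = matvec(p)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Toy usage with a Jacobi preconditioner.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    y = pcg(lambda v: A @ v, np.array([1.0, 2.0]), lambda r: r / np.diag(A))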
Abstract:
In this paper we consider the a posteriori and a priori error analysis of discontinuous Galerkin interior penalty methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes. In particular, we discuss the question of error estimation for linear target functionals, such as the outflow flux and the local average of the solution. Based on our a posteriori error bound we design and implement the corresponding adaptive algorithm to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement. The theoretical results are illustrated by a series of numerical experiments.
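For reference, the symmetric interior penalty bilinear form in the pure diffusion case -Delta u = f (a textbook form; the paper treats the more general nonnegative characteristic form):

\[
B(u, v) = \sum_{\kappa \in \mathcal{T}_{h}} \int_{\kappa} \nabla u \cdot \nabla v \, dx
- \sum_{e} \int_{e} \Big( \{\!\{ \nabla u \}\!\} \cdot n_{e}\, [\![ v ]\!] + [\![ u ]\!]\, \{\!\{ \nabla v \}\!\} \cdot n_{e} \Big)\, ds
+ \sum_{e} \frac{\sigma}{h_{e}} \int_{e} [\![ u ]\!]\, [\![ v ]\!]\, ds,
\]

where the final penalty term weakly enforces continuity across element faces; on anisotropically refined meshes the face length scale h_e and penalty sigma must be chosen direction-by-direction, which is precisely what the error analysis has to track.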
Abstract:
A theta graph is a graph consisting of three pairwise internally disjoint paths with common end points. Methods for decomposing the complete graph K_ν into theta graphs with fewer than ten edges are given.
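A basic counting constraint, added here for reference rather than taken from the paper: if the chosen theta graph has m edges, a decomposition of K_ν can exist only when

\[
m \;\Big|\; \binom{\nu}{2} = \frac{\nu(\nu - 1)}{2}.
\]

For example, the smallest theta graph (paths of lengths 1, 2 and 2) has 5 edges, so in that case ν(ν-1)/2 must be a multiple of 5.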
Abstract:
The main goal of this work is to solve mathematical programs with complementarity constraints (MPCC) using nonlinear programming (NLP) techniques. A hyperbolic penalty function is used to solve MPCC problems by including the complementarity constraints in the penalty term. This penalty function [1] is twice continuously differentiable and combines features of both exterior and interior penalty methods. A set of AMPL problems from MacMPEC [2] is tested and a comparative study is performed.
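One form of the hyperbolic penalty reported in the literature, written here for a constraint g(x) >= 0; sign and parameter conventions vary, so this is an illustration and not necessarily the exact function of [1]:

\[
P\big(g(x); \lambda, \tau\big) = -\lambda\, g(x) + \sqrt{\lambda^{2} g(x)^{2} + \tau^{2}}, \qquad \lambda > 0, \; \tau > 0,
\]

which is smooth everywhere, tends to zero on the feasible side and grows like 2λ|g(x)| on the infeasible side as τ → 0, hence the mixed exterior/interior character.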
Abstract:
INTRODUCTION: The Montenegro skin test (MST) has good clinical applicability and low cost for the diagnosis of American tegumentary leishmaniasis (ATL). However, no studies have validated the reference value (5 mm) typically used to discriminate positive and negative results. We investigated MST results and evaluated its performance using different cut-off points. METHODS: The results of laboratory tests for 4,256 patients with suspected ATL were analyzed, and 1,182 individuals were found to fulfill the established criteria. Two groups were formed. The positive cutaneous leishmaniasis (PCL) group included patients with skin lesions and positive direct search for parasites (DS) results. The negative cutaneous leishmaniasis (NCL) group included patients with skin lesions with evolution up to 2 months, negative DS results, and negative indirect immunofluorescence assay results who were residents of urban areas that were reported to be probable sites of infection at domiciles and peridomiciles. RESULTS: The PCL and NCL groups included 769 and 413 individuals, respectively. The mean ± standard deviation MST result in the PCL group was 12.62 ± 5.91 mm [95% confidence interval (CI): 12.20-13.04], and that in the NCL group was 1.43 ± 2.17 mm (95% CI: 1.23-1.63). Receiver operating characteristic curve analysis indicated 97.4% sensitivity and 93.9% specificity for a cut-off of 5 mm, and 95.8% sensitivity and 97.1% specificity for a cut-off of 6 mm. CONCLUSIONS: Either 5 mm or 6 mm could be used as the cut-off value for diagnosing ATL, as both values had high sensitivity and specificity.
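A hedged sketch of how such a cut-off can be screened on ROC data; the arrays below are toy values, not the study data:

    import numpy as np
    from sklearn.metrics import roc_curve

    y_true = np.array([1, 1, 0, 0, 1, 0])          # 1 = PCL, 0 = NCL
    induration_mm = np.array([12.0, 8.0, 1.0, 3.0, 15.0, 0.0])

    fpr, tpr, thr = roc_curve(y_true, induration_mm)
    j = tpr - fpr                                   # Youden index per threshold
    best = np.argmax(j)
    print(f"cut-off ~ {thr[best]} mm: sens = {tpr[best]:.2f}, "
          f"spec = {1 - fpr[best]:.2f}")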
Abstract:
BACKGROUND: In vitro aggregating brain cell cultures containing all types of brain cells have been shown to be useful for neurotoxicological investigations. The cultures are used for the detection of nervous system-specific effects of compounds by measuring multiple endpoints, including changes in enzyme activities. Concentration-dependent neurotoxicity is determined at several time points. METHODS: A Markov model was set up to describe the dynamics of brain cell populations exposed to potentially neurotoxic compounds. Brain cells were assumed to be either in a healthy or a stressed state, with only stressed cells being susceptible to cell death. Cells could switch between these states or die, with concentration-dependent transition rates. Since cell numbers were not directly measurable, intracellular lactate dehydrogenase (LDH) activity was used as a surrogate. Assuming that changes in cell numbers are proportional to changes in intracellular LDH activity, stochastic enzyme activity models were derived. Maximum likelihood and least squares regression techniques were applied to estimate the transition rates. Likelihood ratio tests were performed to test hypotheses about the transition rates. Simulation studies were used to investigate the performance of the transition rate estimators and to analyze the error rates of the likelihood ratio tests. The stochastic time-concentration activity model was applied to intracellular LDH activity measurements after 7 and 14 days of continuous exposure to propofol. The model describes transitions from healthy to stressed cells and from stressed cells to death. RESULTS: The model predicted that propofol would affect stressed cells more than healthy cells. Increasing the propofol concentration from 10 to 100 μM reduced the mean waiting time for transition to the stressed state by 50%, from 14 to 7 days, whereas the mean duration to cellular death decreased more dramatically, from 2.7 days to 6.5 hours. CONCLUSION: The proposed stochastic modeling approach can be used to discriminate between different biological hypotheses regarding the effect of a compound on the transition rates. The effects of different compounds on the transition rate estimates can be quantitatively compared. Data can be extrapolated at late measurement time points to investigate whether costly and time-consuming long-term experiments could be eliminated.
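One plausible deterministic reading of the two-state model, sketched here with hypothetical symbols (the paper works with the stochastic version): with H healthy cells, S stressed cells and concentration-dependent rates k_1(c), k_2(c),

\[
\frac{dH}{dt} = -k_{1}(c)\, H, \qquad
\frac{dS}{dt} = k_{1}(c)\, H - k_{2}(c)\, S, \qquad
\mathrm{LDH}(t) \propto H(t) + S(t),
\]

so that under exponential sojourn times the reported mean waiting times correspond to 1/k_1 (healthy to stressed) and 1/k_2 (stressed to dead).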
Abstract:
Purpose: To investigate the effect of incremental increases in intraocular straylight on threshold measurements made by three modern forms of perimetry: Standard Automated Perimetry (SAP) using Octopus (Dynamic, G-Pattern), Pulsar Perimetry (PP) (TOP, 66 points), and the Moorfields Motion Displacement Test (MDT) (WEBS, 32 points). Methods: Four healthy young observers were recruited (mean age 26 yrs [25 yrs, 28 yrs], refractive correction [+2 D, -4.25 D]). Five white opacity filters (WOF), each scattering light by a different amount, were used to create incremental increases in intraocular straylight (IS). Resultant IS values were measured with each WOF and at baseline (no WOF) for each subject using a C-Quant Straylight Meter (Oculus, Wetzlar, Germany). A 25-yr-old has an IS value of ~0.85 log(s); an increase of 40% in IS, to 1.2 log(s), corresponds to the physiological value of a 70-yr-old. The WOFs created increases in IS between 10% and 150% from baseline, ranging from effects similar to normal aging to those found with considerable cataract. Each subject underwent 6 test sessions over a 2-week period; each session consisted of the 3 perimetric tests using one of the five WOFs or baseline (both instrument and filter order were randomised). Results: The reduction in sensitivity from baseline was calculated. A two-way ANOVA on the mean change in threshold (where subjects were treated as rows of the block and each increment in fog filter was treated as a column) was used to examine the effect of incremental increases in straylight. Both SAP (p<0.001) and Pulsar (p<0.001) were significantly affected by increases in straylight; the MDT (p=0.35) remained comparatively robust. Conclusions: The Moorfields MDT threshold measurement is robust to the effects of additional straylight compared with SAP and PP.
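A hedged sketch of the randomized-block two-way ANOVA described above, with subjects as blocks and filter level as treatment; the DataFrame is a hypothetical stand-in for the study data:

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.DataFrame({
        "subject": ["s1", "s1", "s2", "s2", "s3", "s3"],
        "wof":     ["f1", "f2", "f1", "f2", "f1", "f2"],
        "delta":   [1.2, 3.4, 0.8, 2.9, 1.0, 3.1],   # change in threshold (dB)
    })
    model = ols("delta ~ C(subject) + C(wof)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))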