11 results for Interior point methods

in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance: 100.00%

Abstract:

A new approach called the Modified Barrier Lagrangian Function (MBLF) to solve the Optimal Reactive Power Flow problem is presented. In this approach, the inequality constraints are treated by the Modified Barrier Function (MBF) method, which has a finite convergence property: the optimal solution in the MBF method can actually lie on the boundary of the feasible set, so the inequality constraints can be exactly active (equal to zero). Another property of the MBF method is that the barrier parameter does not need to be driven to zero to attain the solution; therefore, the conditioning of the Hessian matrix involved is greatly improved. To show this, a comparative analysis of the numerical conditioning of the Hessian matrix of the MBLF approach, via singular value decomposition, is carried out. The feasibility of the proposed approach is also demonstrated through comparative tests against the Interior Point Method (IPM) using various IEEE test systems and two networks derived from the Brazilian generation/transmission system. The results show that the MBLF method is computationally more attractive than the IPM in terms of speed, number of iterations and numerical conditioning. (C) 2011 Elsevier B.V. All rights reserved.
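For readers unfamiliar with the modified barrier idea, the block below sketches a Polyak-type modified barrier Lagrangian for constraints written as g_i(x) >= 0; this is an illustrative assumption, and the paper's exact MBLF (sign conventions, multiplier update, treatment of the power-flow equality constraints) may differ.

```latex
% Illustrative Polyak-type modified barrier Lagrangian (an assumption, not the
% paper's exact formulation), for a problem  min f(x)  s.t.  g_i(x) >= 0:
\[
  M_\mu(x,\lambda) \;=\; f(x) \;-\; \mu \sum_{i} \lambda_i
  \ln\!\Bigl(1 + \frac{g_i(x)}{\mu}\Bigr),
  \qquad
  \lambda_i^{k+1} \;=\; \frac{\lambda_i^{k}}{1 + g_i(x^{k+1})/\mu}.
\]
% Unlike the classical logarithmic barrier, M_mu remains defined at g_i(x) = 0,
% so constraints can be exactly active at the solution; convergence is driven
% by the multiplier update rather than by sending mu to zero, which is why the
% Hessian stays better conditioned.
```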

Relevance: 100.00%

Abstract:

The aim of solving the Optimal Power Flow problem is to determine the optimal state of an electric power transmission system, that is, the voltage magnitudes and phase angles and the tap ratios of the transformers that optimize the performance of a given system while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. This paper proposes a method for handling the discrete variables of the Optimal Power Flow problem based on a penalty function. With the penalty function included in the objective function, a sequence of nonlinear programming problems with only continuous variables is obtained, and the solutions of these problems converge to a solution of the mixed problem. The resulting nonlinear programming problems are solved by a primal-dual logarithmic-barrier method. Numerical tests using the IEEE 14-, 30-, 118- and 300-bus test systems indicate that the method is efficient. (C) 2012 Elsevier B.V. All rights reserved.
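The abstract does not spell out the penalty function. As a hedged illustration only, one penalty commonly used in the OPF literature for discrete tap settings t_i with step s_i is the sinusoidal term below; the paper's specific function may differ.

```latex
% Illustrative sinusoidal penalty for discrete variables (an assumed example,
% not necessarily the function proposed in the paper):
\[
  \phi(t) \;=\; \sum_{i \in \mathcal{D}}
  \sin^{2}\!\Bigl(\pi\,\frac{t_i - t_i^{\min}}{s_i}\Bigr),
\]
% phi vanishes exactly when every t_i sits on its discrete grid t_i^min + k*s_i
% and is positive otherwise; adding gamma*phi(t) to the objective and increasing
% gamma yields the sequence of continuous NLPs solved by the primal-dual
% logarithmic-barrier method.
```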

Relevance: 90.00%

Abstract:

We present two new constraint qualifications (CQs) that are weaker than the recently introduced relaxed constant positive linear dependence (RCPLD) CQ. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new CQ, which we call the constant rank of the subspace component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, such as local stability and the validity of an error bound. We also introduce an even weaker CQ, called the constant positive generator (CPG), which can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results of algorithms belonging to all the main classes of nonlinear optimization methods: sequential quadratic programming, augmented Lagrangians, interior point algorithms, and inexact restoration.
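As background on why weaker constraint qualifications matter (a standard fact of nonlinear programming, not a result specific to this paper): if a CQ such as CRSC holds at a local minimizer of a smooth problem, the KKT conditions below are guaranteed to hold there, which is what the convergence analyses of SQP, augmented Lagrangian, interior point and inexact restoration methods rely on.

```latex
% Standard KKT conditions for  min f(x)  s.t.  h(x) = 0,  g(x) <= 0,
% guaranteed at a local minimizer x* whenever a constraint qualification holds:
\[
  \nabla f(x^{*}) + \sum_{j} \mu_j \nabla h_j(x^{*})
  + \sum_{i} \lambda_i \nabla g_i(x^{*}) = 0,
  \qquad
  \lambda_i \ge 0, \quad \lambda_i\, g_i(x^{*}) = 0 \ \ \text{for all } i.
\]
```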

Relevance: 30.00%

Abstract:

Objectives: To compare, in vivo, the accuracy of conventional and digital radiographic methods in determining root canal working length. Material and Methods: Twenty-five maxillary incisor or canine teeth from 22 patients were used in this study. With the preoperative radiographs as the baseline, a size 25 K-file was inserted into the root canal to the point where the Root ZX electronic apex locator displayed the APEX reading on the screen; 1 mm was then subtracted from this measurement to position the file. The radiographic measurements were made using a digital sensor (Digora 1.51) or conventional E-speed films, size 2, following the paralleling technique, to determine the distance between the file tip and the radiographic apex. Results: Student's t-test indicated mean distances of 1.11 mm for the conventional and 1.20 mm for the digital method, a statistically significant difference (p<0.05). Conclusions: The conventional radiographic method was found to be superior to the digital one in determining the working length of the root canal.

Relevance: 30.00%

Abstract:

In many applications of lifetime data analysis, it is important to perform inference about the change-point of the hazard function. The change-point could be a maximum for unimodal hazard functions or a minimum for bathtub-shaped hazard functions, and it is usually of great interest in medical or industrial applications. For lifetime distributions where this change-point can be calculated analytically, its maximum likelihood estimator is easily obtained from the invariance property of maximum likelihood estimators, and confidence intervals follow from their asymptotic normality. Under the exponentiated Weibull distribution for the lifetime data, the hazard function can take different forms: constant, increasing, unimodal, decreasing or bathtub-shaped. This model gives great flexibility of fit, but there are no analytic expressions for the change-point of the hazard function. We therefore consider the use of Markov Chain Monte Carlo methods to obtain posterior summaries for the change-point of the hazard function under the exponentiated Weibull distribution.
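For concreteness, the block below writes the exponentiated Weibull distribution and its hazard in one common parameterization (shape parameters alpha and theta, scale sigma); the paper's notation may differ. The change-point is a root of h'(t) = 0, which in general has no closed form, hence the MCMC approach.

```latex
% Exponentiated Weibull distribution in a common parameterization (notation may
% differ from the paper's):
\[
  F(t) = \Bigl[1 - e^{-(t/\sigma)^{\theta}}\Bigr]^{\alpha}, \qquad
  f(t) = \frac{\alpha\theta}{\sigma}\Bigl(\frac{t}{\sigma}\Bigr)^{\theta-1}
         e^{-(t/\sigma)^{\theta}}
         \Bigl[1 - e^{-(t/\sigma)^{\theta}}\Bigr]^{\alpha-1},
\]
\[
  h(t) = \frac{f(t)}{1 - F(t)}, \qquad
  t^{*} \ \text{such that} \ h'(t^{*}) = 0
  \ \text{(maximum of a unimodal hazard, minimum of a bathtub hazard)}.
\]
```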

Relevance: 30.00%

Abstract:

A thorough search of the sky exposed at the Pierre Auger Cosmic Ray Observatory reveals no statistically significant excess of events in any small solid angle that would be indicative of a flux of neutral particles from a discrete source. The search covers from -90 degrees to +15 degrees in declination using four different energy ranges above 1 EeV (10^18 eV). The method used in this search is more sensitive to neutrons than to photons. The upper limit on a neutron flux is derived for a dense grid of directions for each of the four energy ranges. These results constrain scenarios for the production of ultrahigh energy cosmic rays in the Galaxy.

Relevance: 30.00%

Abstract:

Cloud point extraction (CPE) was employed for separation and preconcentration prior to the determination of nickel by graphite furnace atomic absorption spectrometry (GFAAS), flame atomic absorption spectrometry (FAAS) or UV-Vis spectrophotometry. Di-2-pyridyl ketone salicyloylhydrazone (DPKSH) was used for the first time as a complexing agent in CPE. The nickel complex was extracted from the aqueous phase using the Triton X-114 surfactant. Under optimized conditions, limits of detection obtained with GFAAS, FAAS and UV-Vis spectrophotometry were 0.14, 0.76 and 1.5 µg L⁻¹, respectively. The extraction was quantitative and the enrichment factor was estimated to be 27. The method was applied to natural waters, hemodialysis concentrates, urine and honey samples. Accuracy was evaluated by analysis of the NIST 1643e Water standard reference material.

Relevance: 30.00%

Abstract:

Context. Convergent point (CP) search methods are important tools for studying the kinematic properties of open clusters and young associations whose members share the same spatial motion. Aims. We present a new CP search strategy based on proper motion data. We test the new algorithm on synthetic data and compare it with previous versions of the CP search method. As an illustration and validation of the new method, we also present an application to the Hyades open cluster and a comparison with independent results. Methods. The new algorithm rests on the idea of representing the stellar proper motions by great circles over the celestial sphere and visualizing their intersections as the CP of the moving group. The new strategy combines a maximum-likelihood analysis, which simultaneously determines the CP and selects the most likely group members, with a minimization procedure that returns a refined CP position and its uncertainties. The method allows one to correct for internal motions within the group and takes into account that the stars in the group lie at different distances. Results. Based on Monte Carlo simulations, we find that the new CP search method in many cases returns a more precise solution than its previous versions. The new method is able to find and eliminate more field stars in the sample and is not biased towards distant stars. The CP solution for the Hyades open cluster is in excellent agreement with previous determinations.
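As a rough illustration of the geometric idea (the great circles traced by the proper motions intersect at the CP), the sketch below estimates a CP by least squares from hypothetical star data. It is not the authors' algorithm, which additionally performs maximum-likelihood member selection, corrects for internal motions and propagates uncertainties.

```python
# Minimal sketch of a convergent-point estimate: each star's proper motion
# defines a great circle on the sphere, and the CP is the direction most
# nearly orthogonal to all of the great-circle poles. Star data are hypothetical.
import numpy as np

def unit_vectors(ra, dec):
    """Line-of-sight unit vector plus local east (RA) and north (Dec) tangent vectors."""
    cr, sr, cd, sd = np.cos(ra), np.sin(ra), np.cos(dec), np.sin(dec)
    r = np.array([cd * cr, cd * sr, sd])        # direction to the star
    e_ra = np.array([-sr, cr, 0.0])             # unit vector toward increasing RA
    e_dec = np.array([-sd * cr, -sd * sr, cd])  # unit vector toward increasing Dec
    return r, e_ra, e_dec

def convergent_point(ra, dec, pm_ra, pm_dec):
    """Least-squares CP (up to the CP/anti-CP ambiguity), angles in radians."""
    poles = []
    for a, d, mra, mdec in zip(ra, dec, pm_ra, pm_dec):
        r, e_ra, e_dec = unit_vectors(a, d)
        t = mra * e_ra + mdec * e_dec           # tangent-plane proper-motion direction
        pole = np.cross(r, t)                   # pole of this star's great circle
        poles.append(pole / np.linalg.norm(pole))
    P = np.array(poles)
    # CP = eigenvector of sum(p p^T) with the smallest eigenvalue,
    # i.e. the direction minimizing sum_i (c . p_i)^2.
    _, v = np.linalg.eigh(P.T @ P)
    cp = v[:, 0]
    ra_cp = np.arctan2(cp[1], cp[0]) % (2 * np.pi)
    dec_cp = np.arcsin(np.clip(cp[2], -1.0, 1.0))
    return np.degrees(ra_cp), np.degrees(dec_cp)

# Hypothetical stars sharing a common space motion (RA/Dec in radians, pm in mas/yr)
ra = np.radians([65.0, 67.5, 70.0])
dec = np.radians([15.0, 16.0, 17.5])
pm_ra = np.array([100.0, 95.0, 90.0])
pm_dec = np.array([-25.0, -28.0, -30.0])
print(convergent_point(ra, dec, pm_ra, pm_dec))
```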

Relevance: 30.00%

Abstract:

Due to rapid and continuous deforestation, recent bird surveys in the Atlantic Forest have followed rapid assessment programs to accumulate significant amounts of data during short periods of time. In this study, two survey methods were used to evaluate which technique most rapidly accumulated species (> 90% of the estimated empirical value) in lowland Atlantic Forests in the state of São Paulo, southeastern Brazil. Birds were counted during the 2008-2010 breeding seasons using 10-minute point counts and 10-species lists. Overall, point counting detected as many species as lists (79 vs. 83, respectively), and 88 points (14.7 h) detected 90% of the estimated species richness. Forty-one lists were insufficient to detect 90% of all species. However, lists accumulated species faster, probably due to the nature of the point count method, in which species detected while moving between points are not counted. Rapid assessment programs in these forests will detect more species more rapidly using 10-species lists. Both methods shared 63% of all forest species, but this may be due to a spatial and temporal mismatch between the samplings of each method.

Relevance: 30.00%

Abstract:

Seaweeds are photosynthetic organisms important to their ecosystems and constitute a source of compounds with several different applications in the pharmaceutical, cosmetic and biotechnology industries, such as triacylglycerols, which can be converted to the fatty acid methyl esters that make up biodiesel, an alternative fuel source relevant to economically important areas. This study evaluates the fatty acid profiles and concentrations of three Brazilian seaweed species, Hypnea musciformis (Wulfen) J.V. Lamouroux (Rhodophyta), Sargassum cymosum C. Agardh (Heterokontophyta), and Ulva lactuca L. (Chlorophyta), comparing three extraction methods (Bligh & Dyer - B&D; AOAC Official Methods - AOM; and extraction with methanol and ultrasound - EMU) and two transesterification methods (7% BF3 in methanol - BF3; and 5% HCl in methanol - HCl). The fatty acid contents of the three seaweed species differed significantly when extracted and transesterified by the different methods. Moreover, the best method for one species was not the same for the others. The best extraction and transesterification methods for H. musciformis, S. cymosum and U. lactuca were, respectively, AOM-HCl, B&D-BF3 and B&D-BF3/B&D-HCl. These results point to a matrix effect, and the method used for the analysis of the fatty acid content of different organisms should therefore be selected carefully.

Relevance: 30.00%

Abstract:

Hermite interpolation is increasingly proving to be a powerful numerical tool, as applied to different kinds of second-order boundary value problems. In this work we present two Hermite finite element methods to solve viscous incompressible flow problems, in both two and three space dimensions. In the two-dimensional case we use the Zienkiewicz triangle to represent the velocity field, and in the three-dimensional case an extension of this element to tetrahedra, still called a Zienkiewicz element. Taking the Stokes system as a model, the pressure is approximated with continuous functions, either piecewise linear or piecewise quadratic, according to the version of the Zienkiewicz element in use, that is, with either incomplete or complete cubics. The methods employ either the standard Galerkin formulation or the Petrov–Galerkin formulation first proposed in Hughes et al. (1986) [18], based on the addition of a balance-of-force term. A priori error analyses point to optimal convergence rates for the PG approach, and also for the Galerkin formulation, at least in some particular cases. From the point of view of both accuracy and the global number of degrees of freedom, the new methods are shown to have a favorable cost-benefit ratio compared to velocity Lagrange finite elements of the same order, especially if the Galerkin approach is employed.
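For orientation, the block below states the Stokes model problem and a schematic residual-based stabilization term of the kind introduced in Hughes et al. (1986); the exact weights, signs and element-wise scaling used with the Zienkiewicz elements in this paper may differ.

```latex
% Stokes model problem and a schematic Petrov-Galerkin (pressure) stabilization
% (coefficients and signs are illustrative, not the paper's exact choice):
\[
  -\nu \Delta \mathbf{u} + \nabla p = \mathbf{f}, \qquad
  \nabla \cdot \mathbf{u} = 0 \ \ \text{in } \Omega, \qquad
  \mathbf{u} = \mathbf{0} \ \ \text{on } \partial\Omega .
\]
\[
  \nu(\nabla \mathbf{u}_h, \nabla \mathbf{v}_h)
  - (p_h, \nabla\cdot\mathbf{v}_h) + (q_h, \nabla\cdot\mathbf{u}_h)
  \;+\; \delta \sum_{K} h_K^{2}
  \bigl(-\nu\Delta\mathbf{u}_h + \nabla p_h - \mathbf{f},\; \nabla q_h\bigr)_K
  \;=\; (\mathbf{f}, \mathbf{v}_h).
\]
```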