890 results for Solving-problem algorithms


Relevance:

40.00%

Publisher:

Abstract:

Piecewise-Linear Programming (PLP) is an important area of Mathematical Programming that concerns the minimisation of a convex separable piecewise-linear objective function subject to linear constraints. This paper explores a subarea of PLP called Network Piecewise-Linear Programming (NPLP). It presents four specialised algorithms for NPLP, (Strongly Feasible) Primal Simplex, the Dual Method, Out-of-Kilter and (Strongly Polynomial) Cost-Scaling, and studies their relative efficiency. A statistically designed experiment is used to perform a computational comparison of the algorithms. The response variable observed in the experiment is the CPU time needed to solve randomly generated network piecewise-linear problems, classified according to problem class (Transportation, Transshipment and Circulation), problem size, extent of capacitation, and number of breakpoints per arc. Results and conclusions on the performance of the algorithms are reported.
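
As a rough illustration of the objective minimised in NPLP (not taken from the paper), the sketch below evaluates a convex separable piecewise-linear cost for the flow on a single arc from its breakpoints and segment slopes; the full objective is the sum of such terms over all arcs of the network.

```python
# Minimal sketch (not from the paper): a convex separable piecewise-linear
# arc cost, the kind of objective term an NPLP solver minimises per arc.
def pwl_cost(x, breakpoints, slopes):
    """Convex piecewise-linear cost of a flow x on one arc.

    breakpoints: increasing flow values [b0, b1, ..., bk] with b0 = 0
    slopes:      nondecreasing marginal costs, one per segment (convexity)
    """
    assert len(slopes) == len(breakpoints) - 1
    cost, prev = 0.0, breakpoints[0]
    for b, s in zip(breakpoints[1:], slopes):
        seg = min(x, b) - prev          # portion of the flow on this segment
        if seg <= 0:
            break
        cost += s * seg
        prev = b
    return cost

# the NPLP objective is the sum of such terms over all arcs
print(pwl_cost(7.0, [0, 3, 6, 10], [1.0, 2.0, 5.0]))  # 3*1 + 3*2 + 1*5 = 14
```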

Relevance:

40.00%

Publisher:

Abstract:

In this paper, the concept of the Matching Parallelepiped (MP) is presented. It is shown that the volume of the MP can be used as an additional measure of 'distance' between a pair of candidate points in a matching algorithm based on Relaxation Labeling (RL). The volume of the MP is related to the epipolar geometry, and this measure works as an epipolar constraint in the RL process, reducing the effort in the matching algorithm since it is not necessary to explicitly determine the equations of the epipolar lines or to compute the distance of a candidate point to each epipolar line. As the Relative Orientation (RO) parameters are unknown at the beginning of the process, an initial matching based on gradients, intensities and correlation is obtained. Based on this set of labeled points, the RO is determined and the epipolar constraint is included in the algorithm. The results show that the proposed approach is suitable for determining feature-point matches with simultaneous estimation of the camera orientation parameters, even when the optical axes are not parallel.
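
As a hedged illustration of the geometric idea (the paper's exact construction of the MP may differ), the sketch below uses the scalar triple product of the stereo baseline and the two viewing rays: the volume it measures vanishes when the coplanarity (epipolar) condition holds, so a small volume flags a geometrically consistent candidate pair.

```python
# Hedged sketch, not the paper's exact formulation: volume of the
# parallelepiped spanned by the baseline and the two viewing rays.
# Coplanar vectors (epipolar condition satisfied) give zero volume.
import numpy as np

def mp_volume(baseline, ray_left, ray_right):
    """|det [b  r1  r2]| = |b . (r1 x r2)|, the scalar triple product."""
    return abs(np.dot(baseline, np.cross(ray_left, ray_right)))

# toy example: unit baseline along x, two rays towards the same 3-D point
b  = np.array([1.0, 0.0, 0.0])
r1 = np.array([0.2, 0.1, 1.0])            # ray from the left camera centre
r2 = r1 - b                               # ray from the right camera centre
print(mp_volume(b, r1, r2))               # ~0: coplanar, consistent match
print(mp_volume(b, r1, r2 + [0, .3, 0]))  # larger: violates the constraint
```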

Relevance:

40.00%

Publisher:

Abstract:

Minimization of a differentiable function subject to box constraints is proposed as a strategy for solving the generalized nonlinear complementarity problem (GNCP) defined on a polyhedral cone. The approach avoids the calculation of projections, which complicate and sometimes even prevent the implementation of algorithms for solving these kinds of problems. Theoretical results relating stationary points of the minimized function to solutions of the GNCP are presented. Perturbations of the GNCP are also considered, and results on the resolution of GNCPs under very general assumptions on the data are obtained. These theoretical results show that local methods for box-constrained optimization applied to the associated problem are efficient tools for solving the GNCP. Numerical experiments that encourage the use of this approach are presented.
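
The following is a generic sketch of the strategy, not the paper's specific reformulation: a complementarity problem (here a small linear special case with hypothetical data M and q) is solved by minimising a differentiable merit function over the box x >= 0 with an off-the-shelf bound-constrained local method, SciPy's L-BFGS-B.

```python
# Generic sketch of the idea (not the paper's exact reformulation): solve a
# complementarity problem by minimising a differentiable merit function over
# a simple box with a bound-constrained local method.
import numpy as np
from scipy.optimize import minimize

# linear special case: find x >= 0 with F(x) = Mx + q >= 0 and x.F(x) = 0
M = np.array([[2.0, 1.0], [1.0, 2.0]])    # illustrative data, not from the paper
q = np.array([-3.0, -3.0])
F = lambda x: M @ x + q

def merit(x):
    """Differentiable merit function: zero exactly at solutions (with x >= 0)."""
    Fx = F(x)
    return np.dot(x, Fx) ** 2 + np.sum(np.minimum(Fx, 0.0) ** 2)

res = minimize(merit, x0=np.zeros(2), method="L-BFGS-B",
               bounds=[(0.0, None)] * 2)      # box constraint x >= 0
print(res.x, merit(res.x))                    # approx [1, 1], merit approx 0
```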

Relevance:

40.00%

Publisher:

Abstract:

An analysis of the performance of three important methods for allocating losses to generators and loads is presented. The methods discussed are based on the pro-rata technique, on the incremental technique, and on circuit matrices. The algorithms are tested under different generation conditions on a well-known electric power system, the IEEE 14-bus network. The results presented and discussed examine the influence of the location and magnitude of generators and loads, the possibility of agents being well or poorly located in each network configuration, and the discriminatory behavior of the methods under variations of the power flow in the transmission lines. © 2004 IEEE.
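
For concreteness, the sketch below implements only the simplest of the three schemes, pro-rata allocation, under the common convention (an assumption here, not a statement of the paper's exact formulas) that half of the losses are charged to generators and half to loads, each agent in proportion to its power; the incremental and circuit-matrix methods require a full network model.

```python
# Minimal sketch of pro-rata loss allocation.  Assumption: 50% of the total
# losses to generators, 50% to loads, split in proportion to each agent's power.
def pro_rata_allocation(total_losses_mw, generation_mw, load_mw):
    """Return per-generator and per-load loss shares in MW."""
    gen_total, load_total = sum(generation_mw.values()), sum(load_mw.values())
    gen_share  = {g: 0.5 * total_losses_mw * p / gen_total
                  for g, p in generation_mw.items()}
    load_share = {d: 0.5 * total_losses_mw * p / load_total
                  for d, p in load_mw.items()}
    return gen_share, load_share

# toy numbers, not taken from the IEEE 14-bus case in the paper
gens  = {"G1": 150.0, "G2": 50.0}
loads = {"L1": 120.0, "L2": 60.0}
print(pro_rata_allocation(10.0, gens, loads))   # shares sum to the 10 MW of losses
```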

Relevance:

40.00%

Publisher:

Abstract:

A branch and bound algorithm is proposed to solve the [image omitted]-norm model reduction problem for continuous- and discrete-time linear systems, with convergence to the global optimum in finite time. The lower and upper bounds used in the optimization procedure are described by linear matrix inequalities (LMIs). Two methods for reducing the convergence time of the branch and bound algorithm are also proposed: the first uses the Hankel singular values as a sufficient condition to stop the algorithm, giving the method fast convergence to the global optimum; the second assumes that the reduced model is in controllable or observable canonical form. The [image omitted]-norm of the error between the original model and the reduced model is considered. Examples illustrate the application of the proposed method.
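
The sketch below is only a generic branch-and-bound skeleton, not the paper's method: a simple Lipschitz bound on a toy scalar function stands in for the LMI-based lower and upper bounds, but the structure is the same, splitting the region, pruning nodes whose lower bound cannot beat the incumbent, and stopping when the bounds agree within a tolerance.

```python
# Generic branch-and-bound sketch; the Lipschitz lower bound below is a
# stand-in for the LMI-based bounds used in the paper.
import heapq

def branch_and_bound(f, a, b, lipschitz, tol=1e-3):
    upper, x_best = min((f(a), a), (f(b), b))       # incumbent (upper bound)
    heap = [(min(f(a), f(b)) - lipschitz * (b - a) / 2, a, b)]
    while heap:
        lower, lo, hi = heapq.heappop(heap)
        if lower > upper - tol:                     # cannot beat the incumbent
            continue
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if fm < upper:
            upper, x_best = fm, mid                 # better incumbent found
        for l, h in ((lo, mid), (mid, hi)):         # branch: split the interval
            lb = min(f(l), f(h)) - lipschitz * (h - l) / 2   # valid lower bound
            if lb < upper - tol:
                heapq.heappush(heap, (lb, l, h))
    return x_best, upper

f = lambda x: (x * x - 4) ** 2 + x                  # nonconvex toy objective
print(branch_and_bound(f, -3.0, 3.0, lipschitz=61.0))   # global minimum near x ~ -2
```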

Relevance:

40.00%

Publisher:

Abstract:

Until mid-2006, SCIAMACHY data processors for the operational retrieval of nitrogen dioxide (NO2) column data were based on the historical version 2 of the GOME Data Processor (GDP). On top of known problems inherent to GDP 2, ground-based validations of SCIAMACHY NO2 data revealed issues specific to SCIAMACHY, such as a large cloud-dependent offset occurring at Northern latitudes. In 2006, the GDOAS prototype algorithm of the improved GDP version 4 was transferred to the off-line SCIAMACHY Ground Processor (SGP) version 3.0. In parallel, the calibration of SCIAMACHY radiometric data was upgraded. Before the operational switch-on of SGP 3.0 and the public release of upgraded SCIAMACHY NO2 data, we investigated the accuracy of the algorithm transfer: (a) by checking the consistency of SGP 3.0 with the prototype algorithms; and (b) by comparing SGP 3.0 NO2 data with ground-based observations reported by the WMO/GAW NDACC network of UV-visible DOAS/SAOZ spectrometers. This delta-validation study concludes that SGP 3.0 is a significant improvement over the previous processor, IPF 5.04. For three particular SCIAMACHY states, the study reveals unexplained features in the slant columns and air mass factors, although their quantitative impact on SGP 3.0 vertical columns is not significant.

Relevance:

40.00%

Publisher:

Abstract:

The present paper evaluates meta-heuristic approaches for solving a soft drink industry problem. The problem is motivated by a real situation found in soft drink companies, where the lot sizing and scheduling of raw materials in tanks and of products on bottling lines must be determined simultaneously. Tabu search, threshold accepting and genetic algorithms are used as procedures to solve the problem at hand. The methods are evaluated on a set of instances already available for this problem, and a new set of more complex instances is also proposed. Computational results comparing these approaches are reported. © 2008 IEEE.
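
As a hedged sketch of one of the three meta-heuristics, the code below shows a bare threshold-accepting loop: unlike simulated annealing, a worse neighbour is accepted whenever the cost increase stays below a deterministic threshold that shrinks over the run. The cost function and neighbourhood are placeholders, not the paper's lot-sizing and scheduling model.

```python
# Bare threshold-accepting loop; the toy cost and neighbourhood below are
# placeholders for the paper's lot-sizing and scheduling model.
import math
import random

def threshold_accepting(cost, neighbour, x0, threshold0=2.0,
                        decay=0.95, iters_per_level=50, levels=100):
    x = best = x0
    threshold = threshold0
    for _ in range(levels):
        for _ in range(iters_per_level):
            y = neighbour(x)
            if cost(y) - cost(x) < threshold:   # accept mild deteriorations
                x = y
                if cost(x) < cost(best):
                    best = x
        threshold *= decay                      # tighten the acceptance rule
    return best

cost = lambda x: 0.1 * x * x - math.cos(2 * x)       # multimodal toy, minimum at x = 0
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
random.seed(0)
print(threshold_accepting(cost, neighbour, x0=4.0))  # best found, near x = 0
```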

Relevance:

40.00%

Publisher:

Abstract:

This paper studies the use of different population structures in a Genetic Algorithm (GA) applied to lot sizing and scheduling problems. The population approaches are divided into two types: single-population and multi-population. The first type uses a single, non-structured population. The multi-population type includes both non-structured populations and populations structured as binary and ternary trees. Each population approach is tested on lot sizing and scheduling problems found in soft drink companies. These problems have two interdependent levels, with decisions concerning raw material storage and soft drink bottling. The challenge is to simultaneously determine the lot sizing and scheduling of raw materials in tanks and of products on lines. Computational results are reported, allowing the best population structure for the set of problem instances evaluated to be determined. Copyright 2008 ACM.
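
One common way to organise a tree-structured GA population (offered here only as an illustration; the paper's exact scheme may differ) is to store the individuals in an array, give node i the children 3i+1, 3i+2, 3i+3 for a ternary tree, and after each generation let fitter children trade places with their parents so that good solutions migrate towards the root.

```python
# Illustration of a ternary-tree-structured population, not the paper's scheme.
def children(i, size):
    return [c for c in (3 * i + 1, 3 * i + 2, 3 * i + 3) if c < size]

def promote(population, fitness):
    """One bottom-up pass: fitter children trade places with their parents."""
    for i in reversed(range(len(population))):
        for c in children(i, len(population)):
            if fitness(population[c]) < fitness(population[i]):  # minimisation
                population[i], population[c] = population[c], population[i]
    return population

# toy usage: "individuals" are just numbers and fitness is their value
pop = [9, 4, 7, 1, 8, 3, 6, 5, 2, 0, 11, 12, 10]
print(promote(pop, fitness=lambda x: x))   # the best individual reaches the root (index 0)
```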

Relevance:

40.00%

Publisher:

Abstract:

In this paper a framework based on the decomposition of the first-order optimality conditions is described and applied to solve the Probabilistic Power Flow (PPF) problem in a coordinated but decentralized way in the context of multi-area power systems. The purpose of the decomposition framework is to solve the problem through an iterative process of solving smaller subproblems, each associated with one area of the power system. This strategy allows the probabilistic analysis of the variables of interest in a particular area without explicit knowledge of the network data of the other interconnected areas; it is only necessary to exchange border information related to the tie-lines between areas. An efficient method for probabilistic analysis, considering uncertainty in n system loads, is applied. The proposal is to use a particular case of the point estimate method, known as the Two-Point Estimate Method (TPM), rather than the traditional approach based on Monte Carlo simulation. The main feature of the TPM is that it only requires solving 2n power flows to obtain the behavior of any random variable. An iterative coordination algorithm between areas is also presented. This algorithm solves the multi-area PPF problem in a decentralized way, ensures the independent operation of each area, and appropriately integrates the decomposition framework and the TPM. The IEEE RTS-96 system is used to show the operation and effectiveness of the proposed approach, and Monte Carlo simulations are used to validate the results. © 2011 IEEE.
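
A hedged sketch of the symmetric form of the Two-Point Estimate Method follows: each of the n uncertain inputs is perturbed to mean ± sqrt(n)·std while the others stay at their means, the model is evaluated 2n times, and the runs are weight-averaged into the output mean and standard deviation. In the paper the model is a power-flow solution; here it is a placeholder linear function.

```python
# Symmetric two-point estimate sketch: 2n model runs, uniform weights 1/(2n).
# The "model" below is a placeholder for a deterministic power-flow solution.
import numpy as np

def two_point_estimate(model, means, stds):
    n = len(means)
    loc, weight = np.sqrt(n), 1.0 / (2 * n)
    m1 = m2 = 0.0                       # first and second raw moments
    for k in range(n):
        for sign in (+1.0, -1.0):
            x = np.array(means, dtype=float)
            x[k] += sign * loc * stds[k]        # perturb one input at a time
            y = model(x)                        # one deterministic model run
            m1 += weight * y
            m2 += weight * y * y
    return m1, np.sqrt(max(m2 - m1 * m1, 0.0))  # mean and std of the output

# toy placeholder: the output is a weighted sum of the uncertain "loads"
model = lambda x: 0.9 * x[0] + 0.5 * x[1] + 0.2 * x[2]
print(two_point_estimate(model, means=[100, 80, 60], stds=[10, 8, 6]))
```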

Relevance:

40.00%

Publisher:

Abstract:

This paper proposes a new strategy to reduce the combinatorial search space of a mixed integer linear programming (MILP) problem. The construction phase of the greedy randomized adaptive search procedure (GRASP-CP) is employed to reduce the domain of the integer variables of the transportation model of the transmission expansion planning (TM-TEP) problem. This problem is a MILP that is very difficult to solve, especially for large-scale systems. The branch and bound (BB) algorithm is used to solve the problem in both the full and the reduced search space. The proposed method might be useful for reducing the search space of MILP problems for which a fast heuristic algorithm is available to find locally optimal solutions. The results obtained on some real test systems show the efficiency of the proposed method. © 2012 Springer-Verlag.
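
The sketch below shows a generic GRASP construction phase on a knapsack-style toy problem (the paper applies it to the TM-TEP model): at each step the candidates whose greedy score is within a fraction alpha of the best form the restricted candidate list, and one of them is picked at random, so repeated constructions produce different good solutions whose values can then be used to tighten the integer-variable domains.

```python
# Generic GRASP construction phase on a knapsack-style toy, not the paper's model.
import random

def grasp_construction(values, weights, capacity, alpha=0.3, rng=random):
    items, solution, room = list(range(len(values))), [], capacity
    while True:
        cands = [i for i in items if i not in solution and weights[i] <= room]
        if not cands:
            return solution
        score = {i: values[i] / weights[i] for i in cands}      # greedy merit
        best, worst = max(score.values()), min(score.values())
        threshold = best - alpha * (best - worst)
        rcl = [i for i in cands if score[i] >= threshold]       # restricted candidate list
        pick = rng.choice(rcl)                                  # randomised greedy choice
        solution.append(pick)
        room -= weights[pick]

random.seed(1)
vals, wts = [60, 100, 120, 40], [10, 20, 30, 15]
for _ in range(3):                      # a few randomised greedy constructions
    print(sorted(grasp_construction(vals, wts, capacity=50)))
```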

Relevance:

40.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

40.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

40.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

40.00%

Publisher:

Abstract:

Neural networks are dynamic systems consisting of highly interconnected and parallel nonlinear processing elements that are shown to be extremely effective in computation. This paper presents an architecture of recurrent neural networks for solving the N-Queens problem. More specifically, a modified Hopfield network is developed and its internal parameters are explicitly computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points, which represent a solution of the considered problem. The network is shown to be completely stable and globally convergent to the solutions of the N-Queens problem. A fuzzy logic controller is also incorporated in the network to minimize convergence time. Simulation results are presented to validate the proposed approach.
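
As a hedged illustration of how the N-Queens problem is usually encoded for Hopfield-type networks (the penalty weights below are illustrative; the paper derives the network parameters with the valid-subspace technique), the sketch builds the binary "queen at row i, column j" representation and an energy function that is zero exactly for valid placements, which is what the network dynamics minimise.

```python
# Illustrative N-Queens energy for a Hopfield-style encoding; the weights
# a, b, c are hand-picked here, not the paper's valid-subspace parameters.
import numpy as np

def nqueens_energy(v, a=1.0, b=1.0, c=1.0):
    n = v.shape[0]
    rows = np.sum((v.sum(axis=1) - 1) ** 2)          # exactly one queen per row
    cols = np.sum((v.sum(axis=0) - 1) ** 2)          # exactly one queen per column
    diags = 0.0
    for d in range(-n + 1, n):                       # at most one queen per diagonal
        for t in (np.trace(v, offset=d), np.trace(np.fliplr(v), offset=d)):
            diags += t * (t - 1)
    return a * rows + b * cols + c * diags

valid = np.zeros((4, 4))
valid[[0, 1, 2, 3], [1, 3, 0, 2]] = 1                # a valid 4-queens placement
clash = np.eye(4)                                    # all queens on one diagonal
print(nqueens_energy(valid), nqueens_energy(clash))  # 0.0 and a positive value
```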