972 results for nonlinear optimization
Abstract:
A constructive heuristic algorithm (CHA) to solve the distribution system planning (DSP) problem is presented. The DSP is a very complex mixed binary nonlinear programming problem. The CHA aims at obtaining an excellent-quality solution for the DSP problem; in addition, a local improvement phase and a branching technique were implemented in the CHA to improve its solution. In each step of the CHA, a sensitivity index is used to add a circuit or a substation to the distribution system. This sensitivity index is obtained by solving the DSP problem with the numbers of circuits and substations to be added treated as continuous variables (the relaxed problem). The relaxed problem is a large and complex nonlinear program and is solved with an efficient nonlinear optimization solver. Results for two test systems and one real distribution system are presented to show the capability of the proposed algorithm.
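A minimal sketch of the constructive loop described above, in Python; `solve_relaxed`, `feasible`, and the candidate representation are hypothetical placeholders, not the authors' implementation.

```python
def constructive_heuristic(candidates, solve_relaxed, feasible):
    """Greedy constructive loop: add one circuit/substation per step,
    guided by a sensitivity index read off the relaxed problem.
    `candidates` is a set of candidate additions."""
    plan = set()
    while not feasible(plan):
        # Relax the remaining binary add/do-not-add decisions to
        # continuous variables and solve the resulting NLP; the
        # continuous values serve as sensitivity indices.
        sensitivity = solve_relaxed(plan, candidates - plan)
        # Commit the candidate the relaxed solution relies on the most.
        best = max(candidates - plan, key=lambda c: sensitivity[c])
        plan.add(best)
    return plan
```

The local improvement phase and branching technique mentioned in the abstract would wrap this loop, revisiting early choices once a feasible plan exists.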
Abstract:
This work presents the application of a multiobjective evolutionary algorithm (MOEA) to the optimal power flow (OPF) problem. The OPF is modeled as a constrained, non-convex, large-scale nonlinear optimization problem with continuous and discrete variables. Violated inequality constraints are treated as additional objective functions of the problem. This strategy allows the physical and operational restrictions to be satisfied without compromising the quality of the solutions found. The developed MOEA is based on Pareto dominance and employs a diversity-preserving mechanism to avoid premature convergence to locally optimal solutions. Fuzzy set theory is employed to extract the best compromise solutions from the Pareto set. Results for the IEEE-30, RTS-96 and IEEE-354 test systems are presented to validate the efficiency of the proposed model and solution technique.
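A sketch of the constraint-handling idea, assuming objective vectors laid out as (cost, violation_1, ..., violation_m); this layout is an assumption for illustration, not the paper's data structure.

```python
def dominates(a, b):
    """Pareto dominance over tuples (cost, violation_1, ..., violation_m):
    a dominates b if it is no worse in every component and strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

Nondominated sorting under this relation pushes the population toward both lower cost and lower constraint violation simultaneously, which is how violated constraints can act as objectives without a penalty weight.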
Abstract:
An optimization technique to solve the distribution network planning (DNP) problem is presented. The DNP is a very complex mixed binary nonlinear programming problem. A constructive heuristic algorithm (CHA) aimed at obtaining an excellent-quality solution for this problem is presented. In each step of the CHA, a sensitivity index is used to add a circuit or a substation to the distribution network. This sensitivity index is obtained by solving the DNP problem with the numbers of circuits and substations to be added treated as continuous variables (the relaxed problem). The relaxed problem is a large and complex nonlinear program and is solved with an efficient nonlinear optimization solver. A local improvement phase and a branching technique were also implemented in the CHA. Results of two tests on a distribution network are presented to show the capability of the proposed algorithm. ©2009 IEEE.
Abstract:
Electrical impedance tomography (EIT) is an imaging technique that attempts to reconstruct the impedance distribution inside an object from the impedances measured between electrodes placed on the object's surface. The EIT reconstruction problem can be approached as a nonlinear, nonconvex optimization problem in which one tries to maximize the match between a simulated impedance problem and the observed data. This optimization problem is often ill-posed and poorly suited to methods that evaluate derivatives of the objective function. It may be approached by simulated annealing (SA), but at a large computational cost, because each evaluation of the objective function requires a full simulation of the impedance problem. A variation of SA is proposed in which the objective function is evaluated only partially, while ensuring bounds on the behavior of the modified algorithm.
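A minimal sketch of SA with partial objective evaluation, assuming the objective decomposes into nonnegative terms (e.g., one misfit term per electrode pair) exposed by a hypothetical `partial_costs` generator; one proposal per temperature step, for brevity.

```python
import math
import random

def sa_partial(x0, neighbor, partial_costs, temps):
    """Metropolis SA where a candidate is rejected early as soon as its
    accumulated partial cost already guarantees rejection."""
    x, fx = x0, sum(partial_costs(x0))
    for T in temps:
        y = neighbor(x)
        # Accept iff f(y) <= fx - T*ln(u), u ~ Uniform(0, 1]; this is
        # the usual Metropolis rule rearranged into a threshold.
        threshold = fx - T * math.log(1.0 - random.random())
        fy, rejected = 0.0, False
        for term in partial_costs(y):   # terms assumed nonnegative
            fy += term
            if fy > threshold:          # bound already exceeded: reject
                rejected = True         # early, skipping remaining terms
                break
        if not rejected:
            x, fx = y, fy
    return x, fx
```

Early rejection is sound here only because the partial sums are nondecreasing; that monotonicity is what provides the bound on the modified algorithm's behavior.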
Abstract:
We present two new constraint qualifications (CQs) that are weaker than the recently introduced relaxed constant positive linear dependence (RCPLD) CQ. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new CQ, which we call the constant rank of the subspace component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, such as local stability and the validity of an error bound. We also introduce an even weaker CQ, called the constant positive generator (CPG), which can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results of algorithms belonging to all the main classes of nonlinear optimization methods: sequential quadratic programming, augmented Lagrangians, interior point algorithms, and inexact restoration.
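For reference, the standard nonlinear programming setting in which such constraint qualifications are stated; the notation below is ours, not the authors'.

```latex
% Standard NLP setting (notation ours):
\begin{align*}
  \min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad
  h_i(x) = 0,\ i = 1,\dots,p, \qquad
  g_j(x) \le 0,\ j = 1,\dots,q.
\end{align*}
% A family of gradients \{\nabla h_i(x^*)\}_{i \in I},
% \{\nabla g_j(x^*)\}_{j \in J}, with J indexing active inequalities,
% is positively linearly dependent if
%   \sum_{i \in I} \lambda_i \nabla h_i(x^*)
%     + \sum_{j \in J} \mu_j \nabla g_j(x^*) = 0
% holds with \mu_j \ge 0 and not all multipliers zero. CPLD-type CQs
% require that such dependence persist, as linear dependence, in a
% neighborhood of x^*; CRSC and CPG weaken which subsets must do so.
```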
Abstract:
This work presents a method for estimating knee torque from electromyographic (EMG) signals during robot-assisted rehabilitation therapy. The EMG signals, acquired from five muscles involved in knee flexion and extension, are processed to obtain the muscle activations. Muscle forces are then computed from a simple muscle contraction model and, using the joint geometry, the knee torque. The muscle activation and contraction functions have bounded parameters that must be calibrated for each user; the fit is performed by minimizing the error between the estimated torque and the joint torque measured via inverse dynamics. Two iterative methods for nonlinear functions are compared as constrained optimization techniques for the parameter calibration: gradient descent and quasi-Newton. The signal processing, parameter calibration and estimated-torque computation were implemented in MATLAB®; the measured torque was computed in OpenSim with its inverse dynamics tool.
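A minimal sketch of the calibration step in Python, assuming SciPy; `estimate_torque`, the parameter bounds, and the data arrays are hypothetical placeholders for the pipeline described above (the work itself uses MATLAB).

```python
import numpy as np
from scipy.optimize import minimize

def calibrate(theta0, bounds, activations, torque_measured, estimate_torque):
    """Fit the bounded activation/contraction parameters by minimizing
    the squared error between EMG-estimated and measured knee torque."""
    def error(theta):
        torque_est = estimate_torque(theta, activations)
        return np.sum((torque_est - torque_measured) ** 2)
    # L-BFGS-B is one bounded quasi-Newton option; the paper compares
    # gradient-descent and quasi-Newton variants for this step.
    result = minimize(error, theta0, method="L-BFGS-B", bounds=bounds)
    return result.x
```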
Abstract:
This study considers the application of the SISAGUA mathematical simulation and optimization model to the operation of integrated reservoir systems in complex water supply systems. SISAGUA uses mixed-integer nonlinear programming (MINLP) with the objectives of avoiding or minimizing rationing, balancing the distribution of storage across systems with multiple reservoirs, and minimizing operating costs. The optimization methodology was applied to the water production system of the São Paulo Metropolitan Region (RMSP), which faced a water crisis during the 2013-2015 drought, the worst in the 85-year historical record. The region has 20.4 million inhabitants, and its system comprises eight partially integrated production systems operated by Sabesp (Companhia de Saneamento do Estado de São Paulo). The RMSP is a densely populated region located in the Alto Tietê river basin and characterized by low per capita water availability. The study examined including evaporation in the simulations and applying a continuous rationing rule to the reservoirs, which turns the problem formulation into a nonlinear program (NLP). Evaporation proved to be of little significance relative to the demand flow, at around 1% of the flow: while a flow of this magnitude can help in a critical scenario, it is of the same order as the uncertainties in measurements or inflow forecasts. A sensitivity test of different rationing rates as a function of stored volume allows the response time of each system to be analyzed; the variation in recovery time, however, did not prove very significant.
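An illustrative form of the mass balance and continuous rationing rule described above; the notation is ours, and the actual SISAGUA formulation may differ in detail.

```latex
% Reservoir mass balance with a smooth rationing rule (illustrative):
\begin{align*}
  S_{t+1} &= S_t + I_t - r(S_t)\,D_t - E_t(S_t), \\
  r(S_t)  &\in [r_{\min},\, 1],
\end{align*}
% where S_t is storage, I_t inflow, D_t demand, E_t(S_t) evaporation,
% and r(.) a continuous fraction of demand actually supplied as a
% function of storage; the smooth r(.) is what turns the mixed-integer
% formulation into a nonlinear program (NLP).
```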
Abstract:
Water-sampler equilibrium partitioning coefficients and aqueous boundary layer mass transfer coefficients for atrazine, diuron, hexazinone and fluometuron onto C18 and SDB-RPS Empore disk-based aquatic passive samplers have been determined experimentally under a laminar flow regime (Re = 5400). The method involved accelerating the samplers' time to equilibrium by exposing them to three water concentrations, decreasing stepwise to 50% and then 25% of the original concentration. Assuming first-order Fickian kinetics across a rate-limiting aqueous boundary layer, both parameters are determined computationally by unconstrained nonlinear optimization. In addition, a method for estimating mass transfer coefficients, and therefore sampling rates, using the dimensionless Sherwood correlation developed for laminar flow over a flat plate is applied. For each of the herbicides, this correlation is validated to within 40% of the experimental data. The study demonstrates that at trace concentrations (below 0.1 μg/L) and under these flow conditions, a naked Empore disk performs well as an integrative sampler over short deployments (up to 7 days) for the range of polar herbicides investigated. The SDB-RPS disk allows a longer integrative period than the C18 disk due to its higher sorbent mass and/or its more polar sorbent chemistry. This work also suggests that, for certain passive sampler designs, empirical estimation of sampling rates may be possible using correlations that have been available in the chemical engineering literature for some time.
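The flat-plate laminar correlation referred to is presumably the standard average Sherwood number relation; the symbols below are the usual ones, not taken from the paper.

```latex
% Average Sherwood correlation for laminar flow over a flat plate,
% and the resulting mass transfer coefficient:
\[
  \mathrm{Sh} = 0.664\,\mathrm{Re}^{1/2}\,\mathrm{Sc}^{1/3},
  \qquad
  k = \frac{\mathrm{Sh}\,D}{L},
\]
% where Re is the Reynolds number, Sc the Schmidt number, D the
% diffusivity of the analyte in water, and L the characteristic
% length of the disk in the flow direction.
```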
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications.

We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices.

The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. The contributions fall under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models.

The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of their correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can easily be integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process.

The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
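A worked equation for the dynamic programming step underlying such models, in the standard recursive-logit form (notation ours, not the thesis's):

```latex
% Value function of a dynamic discrete choice (recursive logit) model:
% with i.i.d. extreme value errors, the expected maximum has a closed
% logsum form, so V solves a fixed-point system by dynamic programming.
\[
  V(k) \;=\; \mathbb{E}\Big[\max_{a \in A(k)}
    \big( v(a \mid k) + V(a) + \mu\,\varepsilon_a \big)\Big]
  \;=\; \mu \ln \sum_{a \in A(k)} e^{\left(v(a \mid k) + V(a)\right)/\mu},
\]
% where A(k) is the set of actions (e.g., outgoing links) in state k,
% v(a|k) the instantaneous utility, and mu the error scale; choice
% probabilities are then logit over v(a|k) + V(a).
```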
Abstract:
The use of the Design by Analysis (DBA) route is a modern trend in pressure vessel and piping international codes in mechanical engineering. However, to apply DBA to structures under variable mechanical and thermal loads, it is necessary to ensure that the plastic collapse modes, alternate plasticity and incremental collapse (with instantaneous plastic collapse as a particular case), are precluded. The tool available to achieve this target is shakedown theory. Unfortunately, practical numerical applications of shakedown theory result in very large nonlinear optimization problems with nonlinear constraints. Precise, robust and efficient algorithms and finite elements to solve this problem in finite dimension are a more recent achievement. However, to solve real problems at an industrial level, it is also necessary to consider more realistic material properties and to perform 3D analyses. Limited kinematic hardening is a typical property of the usual steels and should be considered in realistic applications. In this paper, a new finite element with internal thermodynamic variables to model kinematic hardening materials is developed and tested. This element is a mixed ten-node tetrahedron; through an appropriate change of variables it is possible to embed it in a shakedown analysis software developed by Zouain and co-workers for elastic, ideally plastic materials, and then use it to perform 3D shakedown analyses in cases with limited kinematic hardening materials.
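The shakedown problems referred to in this and the following abstracts typically arise from the static (Melan-type) theorem; the statement below is the standard one, in our notation.

```latex
% Static shakedown (Melan) formulation behind the large nonlinear
% optimization problems mentioned above (standard statement):
\[
  \alpha_{SD} \;=\; \max_{\alpha \ge 0,\ \bar{\rho} \in \mathcal{S}} \alpha
  \quad \text{s.t.} \quad
  f\big(\alpha\,\sigma^{E}(x,t) + \bar{\rho}(x)\big) \le 0
  \quad \forall x,\ \forall t,
\]
% where sigma^E is the elastic stress response to the variable loads,
% S the set of time-independent self-equilibrated residual stress
% fields, and f the yield function; limited kinematic hardening enters
% through additional internal-variable terms inside f.
```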
Abstract:
The use of the Design by Analysis (DBA) concept is a trend in modern pressure vessel and piping calculations. DBA's flexibility allows us to deal with unexpected configurations detected at in-service inspections. It is also important in life extension calculations, when deviations from the original standard hypotheses adopted in Design by Formula can occur. To apply DBA to structures under variable mechanical and thermal loads, it is necessary that alternate plasticity and incremental collapse (with instantaneous plastic collapse as a particular case) be precluded. These are the two basic failure modes considered by the ASME and European Standards in DBA. Shakedown theory is the tool available to achieve this goal; to apply it, only the range of the variable loads and the material properties are needed. Precise, robust and efficient algorithms to solve the very large nonlinear optimization problems generated in numerical applications of shakedown theory are a recent achievement. Zouain and co-workers developed one of these algorithms for elastic, ideally plastic materials. However, more realistic material properties must be considered in practical applications. This paper shows an enhancement of this algorithm to deal with limited kinematic hardening, a typical property of the usual steels. This is done using internal thermodynamic variables. A discrete algorithm is obtained using a plane stress, mixed finite element with an internal variable. An example of a beam clamped at one end, under constant axial force and variable moment, is presented to show the importance of considering limited kinematic hardening in a shakedown analysis.
Abstract:
In the design or safety assessment of mechanical structures, the use of the Design by Analysis (DBA) route is a modern trend. However, to make it possible to apply DBA to structures under variable loads, two basic failure modes considered by the ASME and European Standards must be precluded: alternate plasticity and incremental collapse (with instantaneous plastic collapse as a particular case). Shakedown theory is a tool that permits us to ensure that these kinds of failure are avoided. In practical applications, however, very large nonlinear optimization problems are generated. Due to this, only in recent years has it become possible to obtain algorithms sufficiently accurate, robust and efficient for dealing with this class of problems. In this paper, one of these shakedown algorithms, developed for elastic, ideally plastic structures, is enhanced to include limited kinematic hardening, a more realistic material behavior. This is done in the continuous model by using internal thermodynamic variables. A corresponding discrete model is obtained using an axisymmetric mixed finite element with an internal variable. A thick-walled sphere, under variable thermal and pressure loads, is used as an example to show the importance of considering limited kinematic hardening in shakedown calculations.