942 results for OPTIMIZATION TECHNIQUE


Relevance: 70.00%

Abstract:

Alopex is a correlation-based, gradient-free optimization technique useful in many learning problems. However, there are no analytical results on the asymptotic behavior of this algorithm. This article presents a new version of Alopex that can be analyzed using two-timescale stochastic approximation techniques. It is shown that the algorithm asymptotically behaves like a gradient-descent method, even though it neither needs nor estimates any gradient information. It is also shown, through simulations, that the algorithm is quite effective.
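
The abstract does not reproduce the update rule, so the following is a minimal sketch of the classical Alopex iteration under common assumptions (a fixed step size delta and a fixed temperature T in the Boltzmann acceptance probability; the two-timescale version analyzed in the article adjusts such quantities on separate timescales). The function name and defaults are illustrative.

```python
import numpy as np

def alopex_minimize(f, w0, delta=0.01, T=1.0, iters=5000, seed=0):
    """Correlation-based, gradient-free minimization of f (Alopex-style)."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    E_prev = f(w)
    dw = rng.choice([-delta, delta], size=w.shape)  # initial random moves
    w = w + dw
    E = f(w)
    for _ in range(iters):
        # Correlation between each parameter's last move and the loss change:
        # positive means the move increased the loss.
        C = dw * (E - E_prev)
        # Boltzmann rule: a loss-decreasing move is repeated with high probability.
        p = 1.0 / (1.0 + np.exp(np.clip(C / T, -50.0, 50.0)))
        dw = np.where(rng.random(size=w.shape) < p, delta, -delta)
        E_prev = E
        w = w + dw
        E = f(w)
    return w, E
```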

Relevance: 70.00%

Abstract:

Three algorithms for reactive power optimization, each with a different objective function, are proposed in this paper. The objectives are: minimization of the sum of the squares of the voltage deviations of the load buses, minimization of the sum of the squares of the voltage stability L-indices of the load buses (the ΣL² algorithm), and minimization of system real power loss (Ploss). The approach adopted is an iterative scheme with successive power flow analysis using a decoupled technique and solution of the linear programming problem using the upper bound optimization technique. Results obtained with all these objectives are compared, and an analysis of the objective functions is presented to illustrate their advantages. It is observed that, by comparing the different objective functions, it is possible to identify critical on-load tap changers (OLTCs) that should be switched to manual operation to avoid possible voltage instability caused by their voltage-improvement-based operation under heavy load conditions. These algorithms have been tested under simulated conditions on a few test systems. Results for practical systems, a 24-node EHV equivalent of the Indian power network and a 205-bus EHV system, are presented for illustration purposes.
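
As a small illustration of the LP building block involved, here is a single bounded LP solved with scipy, where variable limits are passed as simple bounds rather than as extra rows of the constraint matrix (the essence of upper-bound techniques); all coefficients are placeholders, not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Linearized objective and one operating constraint (placeholder values).
c = np.array([1.0, 2.0, 0.5])
A_ub = np.array([[1.0, 1.0, 0.0]])
b_ub = np.array([2.0])

# Control-variable limits (e.g. tap positions, VAR injections) are handled
# as simple bounds, which bounded-variable simplex methods exploit directly.
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0.95, 1.05), (0.9, 1.1), (-0.5, 0.5)],
              method="highs")
print(res.x, res.fun)
```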

Relevance: 70.00%

Abstract:

This paper proposes a new approach to solving the state estimation problem, aimed at producing a robust estimator that rejects bad data even when they are associated with leverage-point measurements. This is achieved by solving a sequence of linear programming (LP) problems. Optimization is carried out via a new algorithm that combines the "upper bound optimization technique" with "an improved algorithm for discrete linear approximation". In this formulation of the LP problem, constraints corresponding to bounds on the state variables are included in addition to the constraints corresponding to the measurement set, which makes the LP problem more effective in rejecting bad data, even when they are associated with leverage-point measurements. Results of the proposed estimator on the IEEE 39-bus system and a 24-bus EHV equivalent of the southern Indian grid are presented for illustrative purposes.
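
The LP core of such an estimator can be sketched as a least-absolute-value fit: each residual r = z - Hx is split into nonnegative parts u and v, the objective is the sum of u + v, and the state variables carry explicit bounds, as in the paper's formulation. The helper below is illustrative (scipy's general-purpose solver stands in for the specialized algorithms the paper combines).

```python
import numpy as np
from scipy.optimize import linprog

def lav_state_estimate(H, z, x_bounds):
    """Solve min sum|z - Hx| as an LP with bounds on the state vector x.

    Decision vector is [x, u, v] with z - Hx = u - v and u, v >= 0.
    """
    m, n = H.shape
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])   # minimize sum(u + v)
    A_eq = np.hstack([H, np.eye(m), -np.eye(m)])        # Hx + u - v = z
    bounds = list(x_bounds) + [(0, None)] * (2 * m)     # explicit state bounds
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds, method="highs")
    return res.x[:n]
```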

Relevance: 70.00%

Abstract:

The polyhedral model provides an expressive intermediate representation that is convenient for the analysis and subsequent transformation of affine loop nests. Several heuristics exist for achieving complex program transformations in this model. However, there is also considerable scope to utilize this model to tackle the problem of automatic memory footprint optimization. In this paper, we present a new automatic storage optimization technique that can achieve both intra-array and inter-array storage reuse under a pre-determined schedule for the computation. Our approach works by finding statement-wise storage partitioning hyperplanes that partition a unified global array space so that values with overlapping live ranges are not mapped to the same partition. Our heuristic is driven by a fourfold objective function that not only minimizes the dimensionality and storage requirements of the arrays required for each high-level statement, but also maximizes inter-statement storage reuse. The storage mappings obtained using our heuristic can be asymptotically better than those obtained by any existing technique. We implement our technique and demonstrate its practical impact by evaluating its effectiveness on several benchmarks chosen from the domains of image processing, stencil computations, and high-performance computing.
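
For intuition, here is a hand-written instance of the kind of intra-array reuse such techniques derive automatically: a 1-D Jacobi stencil whose logical space A[t][i] is contracted to two planes via the modulo mapping buf[t mod 2][i], which is valid because the live ranges of planes t and t+2 never overlap under this schedule.

```python
import numpy as np

def jacobi_1d(a, steps):
    """1-D Jacobi smoothing with the time dimension contracted to 2 planes."""
    n = len(a)
    buf = np.empty((2, n))
    buf[0] = a
    for t in range(steps):
        src = buf[t % 2]          # plane holding time step t
        dst = buf[(t + 1) % 2]    # reused storage for time step t + 1
        dst[1:-1] = (src[:-2] + src[1:-1] + src[2:]) / 3.0
        dst[0], dst[-1] = src[0], src[-1]   # boundary values carried over
    return buf[steps % 2]
```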

Relevance: 70.00%

Abstract:

Insulated-gate bipolar transistor (IGBT) power modules find widespread use in numerous power conversion applications where their reliability is of significant concern. Standard IGBT modules are fabricated for general-purpose applications, while few are designed for bespoke applications. The conventional design of IGBTs can, however, be improved by multiobjective optimization. This paper proposes a novel design method that considers die-attach solder failures induced by short power cycling and baseplate solder fatigue induced by thermal cycling, which are among the major failure mechanisms of IGBTs. Thermal resistance is calculated analytically, and the plastic work is obtained with a high-fidelity finite-element model that has been validated experimentally. The objective of minimizing the plastic work and the constraint functions are formulated with a surrogate model. The nondominated sorting genetic algorithm II (NSGA-II) is used to search for the Pareto-optimal solutions and the best design. This combination yields an effective approach to optimizing the physical structure of power electronic modules, taking account of historical environmental and operational conditions in the field.
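
NSGA-II ranks candidate designs by nondominated sorting; below is a minimal O(n²) sketch of extracting the first Pareto front (all objectives to be minimized). It illustrates the algorithm's core notion only and is not the paper's implementation.

```python
def first_pareto_front(points):
    """Indices of nondominated points; each point is a tuple of objectives,
    all to be minimized. q dominates p if it is no worse in every objective
    and strictly better in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Example with objectives (plastic work, thermal resistance):
# first_pareto_front([(1.0, 3.0), (2.0, 1.0), (2.5, 3.5)]) -> [0, 1]
```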

Relevance: 70.00%

Abstract:

Considerable research effort has been devoted to the optimal planning of distribution systems. The nonlinear nature of the system, the need to consider a large number of scenarios, and the increasing necessity to deal with uncertainties make optimal planning of distribution systems a difficult task. Heuristic approaches have been proposed to deal with these issues, overcoming some of the inherent difficulties of classic methodologies. This paper considers several methodologies used to address planning problems of electrical power distribution networks, namely mixed-integer linear programming (MILP), ant colony algorithms (AC), genetic algorithms (GA), tabu search (TS), branch exchange (BE), simulated annealing (SA), and the Benders decomposition deterministic nonlinear optimization technique (BD). The adequacy of these techniques for dealing with uncertainties is discussed, and the behaviour of each optimization technique is compared in terms of both the obtained solution and the methodology's performance. The paper presents the results of applying these optimization techniques to a real case: a 10-kV electrical distribution system with 201 nodes that feeds an urban area.
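
As one representative of the compared metaheuristics, the following is a generic simulated annealing skeleton; the cost and neighbour functions would be supplied by the planning model (a neighbour move could, for instance, be a branch exchange), and the geometric cooling schedule is illustrative.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, T0=1.0, alpha=0.995, iters=10_000):
    """Minimize cost(x) by accepting worse neighbours with a probability
    that decays as the temperature T cools."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        y = neighbour(x)
        fy = cost(y)
        # Always accept improvements; accept deteriorations with prob e^(-dF/T).
        if fy < fx or random.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha
    return best, fbest
```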

Relevance: 70.00%

Abstract:

The Two-Connected Network with Bounded Ring (2CNBR) problem is a network design problem addressing the connection of servers to create a survivable network with limited redirections in the event of failures. Particle Swarm Optimization (PSO) is a stochastic population-based optimization technique modeled on the social behaviour of flocking birds or schooling fish. This thesis applies PSO to the 2CNBR problem. As PSO was originally designed to handle a continuous solution space, the algorithm had to be modified to suit such a highly constrained discrete combinatorial optimization problem. Presented are an indirect transcription scheme for applying PSO to such discrete optimization problems and an oscillating mechanism for averting stagnation.
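
One widely used form of indirect transcription is random-key decoding: the particle keeps a continuous position, and sorting it yields a discrete object such as a priority order over candidate edges; constrained construction then follows that order. The sketch below illustrates the general idea and is not necessarily the thesis's exact scheme.

```python
import numpy as np

def decode_random_keys(position):
    """Map a continuous PSO position to a discrete priority order by sorting."""
    return np.argsort(position)

# A particle position [0.9, 0.1, 0.5] decodes to the edge order [1, 2, 0];
# a builder would then add edges greedily in that order, skipping any edge
# that violates the two-connectivity or bounded-ring constraints.
```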

Relevance: 70.00%

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis can significantly improve software quality and is still a challenging field.

This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, thus improving both the debugging process and the quality of the code.

Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually on all possible execution paths of the application programs. Incorrect machine-code sequences are identified using slicing techniques on the control flow graph generated from the machine code.

An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum allocation of data to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active-memory-bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy are identified based on the stipulated rules for the target processor.

This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly from machine-code patterns, which drastically reduces state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
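
A heavily simplified sketch of the redundant bank-switching detection idea: track the active bank along a straight-line code sequence and flag re-selections. The real analysis works over all execution paths of the control flow graph, and the mnemonics here are generic stand-ins for the PIC16F87X bank-selection idiom.

```python
def redundant_bank_selects(instructions):
    """Flag bank-select instructions that re-select the already active bank.

    `instructions` is a list of (mnemonic, operand) pairs for one execution
    path; 'BANKSEL' stands in for the PIC bank-selection idiom.
    """
    active = None          # currently selected memory bank (unknown at entry)
    redundant = []
    for idx, (op, arg) in enumerate(instructions):
        if op == "BANKSEL":
            if arg == active:
                redundant.append(idx)   # re-selects the active bank
            active = arg
        elif op in ("CALL", "GOTO"):    # control transfer: bank state unknown
            active = None
    return redundant
```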

Relevance: 70.00%

Abstract:

This paper uses a novel numerical optimization technique, robust optimization, that is well suited to solving the asset-liability management (ALM) problem for pension schemes. It requires the estimation of fewer stochastic parameters, reduces estimation risk, and adopts a prudent approach to asset allocation. This study is the first to apply it to a real-world pension scheme, and the first ALM model of a pension scheme to maximise the Sharpe ratio. We disaggregate pension liabilities into three components (active members, deferred members, and pensioners) and transform the optimal asset allocation into the scheme's projected contribution rate. The robust optimization model is extended to include liabilities and used to derive optimal investment policies for the Universities Superannuation Scheme (USS), benchmarked against the Sharpe and Tint, Bayes-Stein, and Black-Litterman models as well as the actual USS investment decisions. Over a 144-month out-of-sample period, robust optimization is superior to the four benchmarks across 20 performance criteria and has a remarkably stable asset allocation, essentially fix-mix. These conclusions are supported by six robustness checks.
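
A minimal robust-optimization sketch in the same spirit: maximize the worst-case return over an ellipsoidal uncertainty set of radius kappa around the estimated mean returns, with a variance penalty. This is an illustrative asset-only model, not the paper's full ALM formulation with liabilities; the function name and parameters are assumptions.

```python
import cvxpy as cp
import numpy as np

def robust_allocation(mu_hat, Sigma, kappa, lam):
    """Worst-case mean return over {mu : ||S^{-T}(mu - mu_hat)|| <= kappa},
    which reduces to mu_hat @ w - kappa * ||S_half.T @ w||, minus a variance
    penalty lam * w' Sigma w."""
    n = len(mu_hat)
    S_half = np.linalg.cholesky(Sigma)          # Sigma = S_half @ S_half.T
    w = cp.Variable(n)
    worst_case_ret = mu_hat @ w - kappa * cp.norm(S_half.T @ w, 2)
    prob = cp.Problem(
        cp.Maximize(worst_case_ret - lam * cp.quad_form(w, Sigma)),
        [cp.sum(w) == 1, w >= 0],               # fully invested, long only
    )
    prob.solve()
    return w.value
```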

Relevance: 70.00%

Abstract:

This thesis contributes to the heuristic optimization of the p-median problem and to the study of Swedish population redistribution.

The p-median model is the most representative model in location analysis. When facilities are located to serve a population geographically distributed over Q demand points, the p-median model systematically considers all the demand points, so that each demand point has an effect on the location decision. However, a series of questions arises. How do we measure distances? Does the number of facilities to be located have a strong impact on the result? What scale of network is suitable? How good is our solution? We scrutinize many such issues, motivated by the considerable uncertainty in the solutions: without such scrutiny, we cannot guarantee that a solution is good enough for decision making. The heuristic optimization technique is formulated in the thesis.

Swedish population redistribution is examined with a spatio-temporal covariance model. A descriptive analysis is not always enough to describe the effects of movement from the neighbouring population; a correlation or covariance analysis shows the tendencies more explicitly. Similarly, an optimization technique is required for the parameter estimation and is executed within the framework of statistical modeling.
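
A standard heuristic for the p-median problem is vertex substitution (in the style of Teitz-Bart); a compact sketch follows, where D is the demand-point-to-candidate distance matrix. It is illustrative only; the thesis studies many variations of such settings.

```python
import numpy as np

def pmedian_cost(D, medians):
    """Sum over demand points of the distance to the nearest open median."""
    return D[:, medians].min(axis=1).sum()

def vertex_substitution(D, p, seed=0):
    """Local search: swap one open median for one closed candidate whenever
    that lowers total cost, until no swap improves."""
    rng = np.random.default_rng(seed)
    n = D.shape[1]
    medians = list(rng.choice(n, size=p, replace=False))
    improved = True
    while improved:
        improved = False
        for i in range(p):
            for c in range(n):
                if c in medians:
                    continue
                trial = medians.copy()
                trial[i] = c
                if pmedian_cost(D, trial) < pmedian_cost(D, medians):
                    medians, improved = trial, True
    return medians, pmedian_cost(D, medians)
```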

Relevance: 70.00%

Abstract:

This paper presents two different approaches to detect, locate, and characterize structural damage. Both techniques utilize electrical impedance in a first stage to locate the damaged area. In the second stage, to quantify the severity of the damage, one can use either a neural network or an optimization technique. The electrical impedance-based method, which utilizes the electromechanical coupling property of piezoelectric materials, has shown engineering feasibility in a variety of practical field applications. Relying on high-frequency structural excitation, this technique is very sensitive to minor structural changes in the near field of the piezoelectric sensors and is therefore able to detect damage at an early stage. Optimization approaches must be used when a good condensed model is known, while a neural network can also be used to estimate the nature of the damage without prior knowledge of a model of the structure. The paper concludes with an experimental example on a welded cubic aluminum structure to verify the performance of the two proposed methodologies.
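
In impedance-based monitoring, damage is commonly flagged in the first stage with a scalar deviation metric between a baseline and a measured impedance signature; the root-mean-square deviation below is one common choice in this literature (the paper's exact metric is not stated in the abstract).

```python
import numpy as np

def rmsd_damage_metric(Z_baseline, Z_measured):
    """Root-mean-square deviation between the real parts of a baseline and a
    measured impedance signature over the same frequency band; larger values
    indicate a larger structural change near the sensor."""
    num = np.sum((Z_measured.real - Z_baseline.real) ** 2)
    den = np.sum(Z_baseline.real ** 2)
    return np.sqrt(num / den)
```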

Relevance: 70.00%

Abstract:

Cogeneration system design deals with several parameters in the synthesis phase, where not only a thermal cycle must be specified but also the general arrangement, type, capacity, and number of machines must be defined. This problem is not trivial because many parameters are treated as goals in the project. An optimization technique is presented that considers costs and revenues, reliability, pollutant emissions, and exergetic efficiency as goals to be reached in the synthesis phase of the cogeneration system design process. A discussion of appropriate values and the results for the integration of a pulp and paper plant with a cogeneration system are shown to illustrate the proposed methodology.
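
A minimal illustration of treating several design goals together is a weighted scalarization of normalized attributes; this is purely illustrative, since the abstract does not state the paper's aggregation scheme, and all names and weights are assumptions.

```python
def goal_score(candidate, weights):
    """Scalarize the four goals into one figure of merit; attributes are
    assumed pre-normalized to [0, 1]. Cost and emissions are minimized,
    reliability and exergetic efficiency maximized (hence the signs)."""
    return (weights["cost"] * candidate["cost"]
            + weights["emissions"] * candidate["emissions"]
            - weights["reliability"] * candidate["reliability"]
            - weights["exergy"] * candidate["exergy_eff"])
```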

Relevance: 70.00%

Abstract:

Optical flow methods are accurate algorithms for estimating the displacement and velocity fields of objects in a wide variety of applications, but their performance depends on the configuration of a set of parameters. Since there is a lack of research aiming to tune such parameters automatically, in this work we propose an evolutionary framework for this task, introducing three techniques for the purpose: Particle Swarm Optimization, Harmony Search, and Social-Spider Optimization. The proposed framework was compared against the well-known Large Displacement Optical Flow approach, obtaining the best results in three out of eight image sequences provided by a public dataset. Additionally, the proposed framework can be used with any other optimization technique.
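
The framework treats parameter tuning as black-box optimization of a flow-quality measure. A minimal sketch follows, with the flow method and error metric passed in as callables; random search stands in here for the PSO / Harmony Search / Social-Spider optimizers used in the paper, and all names are illustrative.

```python
import numpy as np

def tune_flow_params(flow_fn, error_fn, lb, ub, evals=200, seed=0):
    """Black-box tuning of an optical-flow configuration: flow_fn maps a
    parameter vector to a flow field, error_fn scores that field against
    ground truth (e.g. average endpoint error)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    best_p, best_err = None, np.inf
    for _ in range(evals):
        p = rng.uniform(lb, ub)            # candidate parameter vector
        err = error_fn(flow_fn(p))
        if err < best_err:
            best_p, best_err = p, err
    return best_p, best_err
```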

Relevance: 70.00%

Abstract:

Reactive power is critical to the operation of power networks in both safety and economic terms. An unreasonable distribution of reactive power severely affects the power quality of the network and increases transmission losses. Currently, the most economical and practical approach to minimizing real power loss remains the reactive power dispatch method. The reactive power dispatch problem is nonlinear and has both equality and inequality constraints. In this thesis, the PSO algorithm and the MATPOWER 5.1 toolbox are applied to solve the reactive power dispatch problem. PSO is a global optimization technique with excellent search capability, whose biggest advantage is that its efficiency is relatively insensitive to the complexity of the objective function. MATPOWER 5.1 is an open-source MATLAB toolbox focused on solving power flow problems; its benefit is that its code can be easily used and modified. The proposed method minimizes the real power loss in a practical power system and determines the optimal placement of a newly installed distributed generator (DG). The IEEE 14-bus system is used to evaluate performance, and the test results show the effectiveness of the proposed method.
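
A compact global-best PSO loop of the kind used for such dispatch problems is sketched below. The objective f would evaluate real power loss by running a power flow for the candidate control settings (MATPOWER in the thesis), and lb/ub hold the control-variable limits; this is an illustrative sketch, not the thesis code.

```python
import numpy as np

def pso_minimize(f, lb, ub, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO over box bounds: w is the inertia weight, c1/c2 the
    cognitive and social acceleration coefficients."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    x = rng.uniform(lb, ub, (n_particles, d))
    v = np.zeros((n_particles, d))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)          # respect control-variable limits
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```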