897 results for REACH cost function
Abstract:
Multilevel algorithms are a successful class of optimization techniques that address the mesh partitioning problem for mapping meshes onto parallel computers. They usually combine a graph contraction algorithm with a local optimization method that refines the partition at each graph level. To date, these algorithms have been used almost exclusively to minimize the cut-edge weight in the graph, with the aim of minimizing the parallel communication overhead. However, it has been shown that for certain classes of problems, the convergence of the underlying solution algorithm is strongly influenced by the shape, or aspect ratio, of the subdomains. In this paper, therefore, the authors modify the multilevel algorithms to optimize a cost function based on aspect ratio. Several variants of the algorithms are tested and shown to provide excellent results.
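As a concrete illustration, here is a minimal sketch of what an aspect-ratio-based partition cost could look like; the perimeter-normalised 2D definition and the function names are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch (not the authors' code) of an aspect-ratio-based
# partition cost. Each subdomain is summarised by its total area and
# boundary length; the circle-normalised ratio below is one common 2D
# definition, so a perfectly circular subdomain scores 1.0.
import math

def aspect_ratio(area: float, boundary_length: float) -> float:
    """Perimeter-based aspect ratio: boundary^2 / (4*pi*area)."""
    return boundary_length ** 2 / (4.0 * math.pi * area)

def partition_cost(subdomains):
    """Sum of aspect ratios over all subdomains; lower is better.

    `subdomains` is an iterable of (area, boundary_length) pairs.
    A refinement step would accept a vertex move only if it reduces
    this cost (possibly traded off against load balance).
    """
    return sum(aspect_ratio(a, b) for a, b in subdomains)

# Example: a unit square vs. a long thin 10 x 0.1 strip of equal area.
print(partition_cost([(1.0, 4.0)]))    # square: ~1.27
print(partition_cost([(1.0, 20.2)]))   # strip: ~32.5
```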
Abstract:
We present a dynamic distributed load balancing algorithm for parallel, adaptive finite element simulations in which we use preconditioned conjugate gradient solvers based on domain decomposition. The load balancing is designed to maintain good partition aspect ratio, and we show that cut size is not always the appropriate measure in load balancing. Furthermore, we attempt to answer the question of why the aspect ratio of partitions plays an important role for certain solvers. We define and rate different kinds of aspect ratio and present a new center-based partitioning method for calculating the initial distribution which implicitly optimizes this measure. During the adaptive simulation, the load balancer calculates a balancing flow using different versions of the diffusion algorithm and a variant of breadth-first search. Elements to be migrated are chosen according to a cost function aimed at the optimization of subdomain shapes. Experimental results for Bramble's preconditioner and comparisons with state-of-the-art load balancers show the benefits of this construction.
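The balancing-flow computation can be illustrated with a generic first-order diffusion scheme; the sketch below is an assumed textbook form rather than the paper's implementation, with the damping parameter chosen for a low-degree processor graph.

```python
# A minimal sketch (assumed, not the paper's code) of first-order
# diffusive load balancing on a processor graph: each iteration moves
# load across every edge in proportion to the load difference, and the
# accumulated per-edge transfer is the balancing flow.
def diffusion_flow(adjacency, load, alpha=0.25, iters=100):
    """adjacency: dict node -> list of neighbours;
    load: dict node -> float; returns (final load, flow per edge)."""
    flow = {}  # (i, j) with i < j -> net flow from i to j
    for _ in range(iters):
        delta = {v: 0.0 for v in load}
        for i in adjacency:
            for j in adjacency[i]:
                if i < j:  # handle each undirected edge once
                    f = alpha * (load[i] - load[j])
                    delta[i] -= f
                    delta[j] += f
                    flow[(i, j)] = flow.get((i, j), 0.0) + f
        for v in load:
            load[v] += delta[v]
    return load, flow

# Example: a path graph 0-1-2 with all load initially on node 0.
loads, flows = diffusion_flow({0: [1], 1: [0, 2], 2: [1]},
                              {0: 9.0, 1: 0.0, 2: 0.0})
print(loads)  # converges towards 3.0 on every node
```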
Abstract:
This paper considers a special class of flow-shop problems known as the proportionate flow shop. In such a shop, each job flows through the machines in the same order and has equal processing times on all machines; the processing times of different jobs may differ. It is assumed that all operations of a job may be compressed by the same amount, which incurs an additional cost. The objective is to minimize the makespan of the schedule together with a compression cost function that is non-decreasing in the amount of compression. For the bicriterion problem of minimizing the makespan and a linear cost function, an O(n log n) algorithm is developed to construct the Pareto optimal set. For the single-criterion problem, an O(n²) algorithm is developed to minimize the sum of the makespan and the compression cost. Copyright © 1999 John Wiley & Sons, Ltd.
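Since a proportionate flow shop admits the known closed-form makespan C_max = Σ_j p_j + (m − 1)·max_j p_j, the single-criterion objective is easy to evaluate. The sketch below shows only this objective under a linear compression cost; it is not the paper's O(n log n) or O(n²) algorithm.

```python
# A hedged sketch of the objective only (not the paper's algorithm):
# in a proportionate flow shop the makespan has the closed form
# Cmax = sum(p) + (m - 1) * max(p), so the single-criterion objective
# under per-job compression x_j can be evaluated directly.
def makespan(p, m):
    """Makespan of a proportionate flow shop with m machines."""
    return sum(p) + (m - 1) * max(p)

def total_cost(p, x, m, unit_costs):
    """Makespan plus a linear compression cost sum(c_j * x_j).

    p: uncompressed processing times; x: compression amounts.
    """
    compressed = [pj - xj for pj, xj in zip(p, x)]
    return makespan(compressed, m) + sum(c * xj for c, xj in zip(unit_costs, x))

# Example with 3 machines: compressing the longest job shrinks Cmax fastest.
print(total_cost([5.0, 3.0, 4.0], [0.0, 0.0, 0.0], 3, [1.0, 1.0, 1.0]))  # 22.0
print(total_cost([5.0, 3.0, 4.0], [1.0, 0.0, 0.0], 3, [1.0, 1.0, 1.0]))  # 20.0
```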
Abstract:
Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem for distributing unstructured meshes onto parallel computers. They usually combine a graph contraction algorithm with a local optimisation method which refines the partition at each graph level. To date these algorithms have been used almost exclusively to minimise the cut-edge weight in the graph, with the aim of minimising the parallel communication overhead, but recently there has been a perceived need to take into account the communications network of the parallel machine. For example, the increasing use of SMP clusters (systems of multiprocessor compute nodes with very fast intra-node communications but relatively slow inter-node networks) suggests the use of hierarchical network models. Indeed, this requirement is exacerbated in early experiments with meta-computers (multiple supercomputers combined, in extreme cases over intercontinental networks). In this paper, therefore, we modify a multilevel algorithm in order to minimise a cost function based on a model of the communications network. Several network models and variants of the algorithm are tested, and we establish that it is possible to successfully guide the optimisation to reflect the chosen architecture.
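One assumed way to express such a network-based cost is to weight each cut edge by the modelled distance between the processors its endpoints are assigned to; the sketch below is illustrative, and the distance table is a toy two-node SMP model.

```python
# A minimal sketch (an assumed formulation, not the paper's code) of a
# network-aware cut cost: each cut edge is weighted by the distance
# between the processors its endpoints map to, so intra-node cuts on an
# SMP cluster cost less than inter-node cuts.
def network_cost(edges, part, dist):
    """edges: iterable of (u, v, weight); part: dict vertex -> processor;
    dist: dict (p, q) -> relative communication cost between processors."""
    total = 0.0
    for u, v, w in edges:
        pu, pv = part[u], part[v]
        if pu != pv:  # only cut edges contribute
            total += w * dist[(pu, pv)]
    return total

# Example: processors 0,1 share one SMP node and 2,3 share another;
# inter-node links are modelled as 10x more expensive than intra-node.
dist = {(p, q): (1.0 if p // 2 == q // 2 else 10.0)
        for p in range(4) for q in range(4) if p != q}
edges = [("a", "b", 1.0), ("b", "c", 1.0)]
print(network_cost(edges, {"a": 0, "b": 1, "c": 2}, dist))  # 1 + 10 = 11.0
```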
Abstract:
In this paper we concentrate on direct semi-blind spatial equalizer design for MIMO systems with Rayleigh fading channels. Our aim is to develop an algorithm that can outperform the classical training-based method using the same training information, while avoiding the low convergence speed and local minima associated with purely blind methods. A general semi-blind cost function is first constructed which incorporates both the training information from the known data and higher-order statistics (HOS) of the unknown sequence. Then, based on this cost function, we propose two semi-blind iterative and adaptive algorithms to find the desired spatial equalizer. To further improve the performance and convergence speed of the proposed adaptive method, we present a technique for finding the optimal step size. Simulation results demonstrate the performance of the proposed algorithms against comparable schemes.
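A plausible form of such a semi-blind cost is sketched below under the assumption that the HOS term is of constant-modulus type (the paper's exact term may differ): a supervised least-squares term over the training block mixed with a blind fourth-order term over the unknown data.

```python
# A hedged sketch of one plausible semi-blind cost of the kind the
# abstract describes: a least-squares term over the training symbols
# plus a constant-modulus (fourth-order statistics) term over the
# unknown data, mixed by lam. Names and shapes are assumptions.
import numpy as np

def semi_blind_cost(W, Y_train, S_train, Y_blind, lam=0.5, R2=1.0):
    """W: (n_tx, n_rx) equalizer; Y_*: (n_rx, N) received blocks;
    S_train: (n_tx, N_t) known training symbols."""
    Z_t = W @ Y_train
    train_term = np.mean(np.abs(Z_t - S_train) ** 2)   # supervised LS term
    Z_b = W @ Y_blind
    cma_term = np.mean((np.abs(Z_b) ** 2 - R2) ** 2)   # blind CM (HOS) term
    return (1.0 - lam) * train_term + lam * cma_term

# Descending this cost in W (iteratively or adaptively, with a tuned
# step size) yields algorithms of the kind the abstract outlines.
```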
Abstract:
This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters: the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is both to speed up the convergence of second-order learning algorithms such as Levenberg-Marquardt (LM) and to improve network performance. This is achieved by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods, which optimize these two sets of parameters simultaneously, the linear output weights are first converted into parameters dependent on the nonlinear ones, removing the need for their explicit computation. Consequently, the neural network (NN) learning is performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with popular second-order learning methods in order to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is shown through an analysis of the computational complexity and through simulation results from four different examples.
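The core idea can be sketched as follows, assuming sigmoid hidden nodes and a squared-error cost: for any setting of the nonlinear parameters, the optimal output weights have a closed least-squares form, so the cost (and hence the Jacobian) depends on the nonlinear parameters alone.

```python
# A minimal sketch (assumptions: sigmoid hidden nodes, squared-error
# cost) of the "dependent linear weights" idea: for fixed hidden-layer
# parameters the optimal output weights are a least-squares solution,
# so the cost becomes a function of the nonlinear parameters only.
import numpy as np

def hidden_output(X, W, b):
    """Hidden-layer output matrix H for inputs X (N x d)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid activations

def reduced_cost(X, T, W, b):
    """Squared error with output weights eliminated via least squares."""
    H = hidden_output(X, W, b)
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # dependent linear weights
    residual = H @ beta - T
    return 0.5 * np.sum(residual ** 2)

# An LM-type method would now differentiate reduced_cost with respect
# to (W, b) alone, over a solution space of reduced dimension.
```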
Abstract:
This letter introduces the convex variable step-size (CVSS) algorithm, for which the convexity of the resulting cost function is guaranteed. Simulations show that the proposed algorithm matches the VSS algorithm in initial convergence, while offering potential performance gains when abrupt changes occur.
Abstract:
Image segmentation plays an important role in the analysis of retinal images, as the extraction of the optic disc provides important cues for the accurate diagnosis of various retinopathic diseases. In recent years, gradient vector flow (GVF) based algorithms have been used to successfully segment a variety of medical imagery. However, due to the compromise between internal and external energy forces within the resulting partial differential equations, these methods can produce less accurate segmentation results in certain cases. In this paper, we propose a new mean shift-based GVF segmentation algorithm that drives the internal/external energies towards the correct direction. The proposed method incorporates a mean shift operation within the standard GVF cost function to arrive at a more accurate segmentation. Experimental results on a large dataset of retinal images demonstrate that the presented method accurately detects the border of the optic disc.
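For reference, the generic mean shift operation (with a flat kernel, an assumption; the paper couples the operation with the GVF energy rather than using it stand-alone) looks as follows.

```python
# A hedged sketch of the mean shift step itself, not of the paper's
# coupled GVF scheme: each query point moves to the mean of the samples
# within radius h, which drives it towards a local density mode.
import numpy as np

def mean_shift(points, x, h=1.0, iters=50, tol=1e-6):
    """Shift query point x towards the nearest density mode of `points`."""
    points = np.asarray(points, dtype=float)
    x = np.asarray(x, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        window = points[d < h]             # flat kernel of bandwidth h
        if len(window) == 0:
            break
        new_x = window.mean(axis=0)        # mean of in-window samples
        if np.linalg.norm(new_x - x) < tol:
            break
        x = new_x
    return x

# In the proposed scheme a shift of this kind would steer the GVF
# external force towards the true optic-disc boundary.
```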
Abstract:
For some time there has been considerable interest in variable step-size methods for adaptive filtering. Recently, a few stochastic gradient algorithms have been proposed that are based on cost functions with an exponential dependence on the chosen error. However, we have found that the cost function based on the exponential of the squared error does not always converge satisfactorily. In this paper, we modify this cost function in order to improve its convergence, obtaining the novel ECVSS (exponentiated convex variable step-size) stochastic gradient algorithm. The proposed technique has attractive properties in both stationary and abrupt-change situations. (C) 2010 Elsevier B.V. All rights reserved.
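A generic sketch of the kind of update the abstract refers to, assuming the cost J(e) = exp(β e²); the ECVSS modification itself is not reproduced here.

```python
# A hedged sketch (generic, not the paper's exact update) of a
# stochastic gradient filter driven by an exponentiated squared-error
# cost J(e) = exp(beta * e^2): its gradient scales the usual LMS update
# by exp(beta * e^2), so the effective step size grows with the error.
import numpy as np

def exp_cost_lms(x, d, n_taps=4, mu=0.01, beta=0.5):
    """x: input samples; d: desired samples; returns final weights."""
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]          # regressor (most recent first)
        e = d[n] - w @ u                   # a priori error
        scale = np.exp(beta * e * e)       # exponentiated-cost factor
        # Note: this factor can grow without bound for large errors,
        # which is the convergence issue the ECVSS modification targets.
        w += mu * scale * e * u            # variable-step-size update
    return w
```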
Abstract:
The majority of reported learning methods for Takagi-Sugeno-Kang fuzzy neural models to date mainly focus on improving their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match the system's local behaviour well when all the rules are aggregated to produce the overall system output; this is one of the characteristics that distinguishes such models from black-box models such as neural networks. Therefore, finding a desirable set of fuzzy partitions and, hence, identifying the corresponding consequent models which can be directly explained in terms of system behaviour is a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both the nonlinear parameters in the rule premises and the linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, where the two sets of parameters are optimized separately, the consequent parameters are transformed into a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function when using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues of the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed algorithm, with comparisons against the results of some well-known methods.
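The dependence of the consequents on the premises can be sketched for a first-order TSK model with Gaussian memberships (an assumed setting): once the premise parameters fix the normalised firing strengths, the output is linear in the consequent coefficients, which therefore solve in closed form.

```python
# A minimal sketch (assumed first-order TSK model, Gaussian memberships)
# showing why the consequents can be treated as dependent parameters:
# for fixed premises the model output is linear in the consequent
# coefficients, so they follow from a single least-squares solve.
import numpy as np

def firing_strengths(X, centers, widths):
    """Normalised rule activations for inputs X (N x d);
    centers, widths: (n_rules x d) premise parameters."""
    g = np.exp(-((X[:, None, :] - centers) ** 2
                 / (2.0 * widths ** 2)).sum(-1))
    return g / g.sum(axis=1, keepdims=True)          # (N, n_rules)

def solve_consequents(X, y, centers, widths):
    """Least-squares consequents for y ~ sum_r w_r(x) (a_r . x + b_r)."""
    W = firing_strengths(X, centers, widths)         # (N, R)
    Phi = np.concatenate([W[:, :, None] * X[:, None, :],  # a_r terms
                          W[:, :, None]], axis=2)         # b_r terms
    Phi = Phi.reshape(len(X), -1)                    # flatten over rules
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta
```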
Abstract:
Reduced Order Models (ROMs) have proven to be a valid and efficient approach for modelling the thermal behaviour of building zones. The main issues associated with the use of zonal/lumped models are how to (1) divide the domain into lumps and (2) evaluate the parameters which characterise the lump-to-lump exchange of energy and momentum. The objective of this research is to develop a methodology for the generation of ROMs from CFD models. The lumps of the ROM and their average property values are automatically extracted from the CFD models through user-defined constraints. This methodology has been applied to validated CFD models of a zone of the Environmental Research Institute (ERI) Building at University College Cork (UCC). The ROM predicts the temperature distribution in the domain with an average error lower than 2%, and it is computationally efficient, with an execution time of 3.45 seconds. Future steps in this research will be the development of a procedure to automatically extract the parameters which define lump-to-lump energy and momentum exchange; at the moment these parameters are evaluated through the minimisation of a cost function. The ROMs will also be utilised to predict the transient thermal behaviour of the building zone.
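The cost-minimisation step might look like the following hedged sketch, in which the model form (steady-state energy balances with per-link conductances) and all names are assumptions rather than the paper's actual procedure.

```python
# A hedged sketch of the kind of cost minimisation the abstract
# mentions for the lump-to-lump parameters: fit inter-lump
# conductances k so the steady-state lumped energy balance best
# reproduces the CFD-averaged lump temperatures.
import numpy as np
from scipy.optimize import minimize

def lumped_residual(k, T_cfd, q, links):
    """Energy balance residual per lump: sum_j k_ij (T_j - T_i) + q_i."""
    r = np.array(q, dtype=float)
    for (i, j), k_ij in zip(links, k):
        r[i] += k_ij * (T_cfd[j] - T_cfd[i])
        r[j] += k_ij * (T_cfd[i] - T_cfd[j])
    return r

def fit_conductances(T_cfd, q, links):
    """Minimise the squared residual over the link conductances."""
    cost = lambda k: np.sum(lumped_residual(k, T_cfd, q, links) ** 2)
    res = minimize(cost, x0=np.ones(len(links)), method="Nelder-Mead")
    return res.x
```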
Abstract:
Demand Response (DR) algorithms manipulate the energy consumption schedules of controllable loads so as to satisfy grid objectives. Implementing DR algorithms with a centralised agent can be problematic for scalability reasons, and there are issues related to data privacy and robustness to communication failures. It is therefore desirable to implement DR with a scalable, decentralised algorithm. In this paper, a hierarchical DR scheme is proposed for Peak Minimisation (PM) based on Dantzig-Wolfe Decomposition (DWD). In addition, a Time Weighted Maximisation option is included in the cost function, which improves the Quality of Service for devices seeking to receive their desired energy sooner rather than later. The paper also demonstrates how the DWD algorithm can be implemented more efficiently through the calculation of upper and lower cost bounds after each DWD iteration.
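The bound calculation is standard for Dantzig-Wolfe decomposition applied to a minimisation problem and can be sketched as below, assuming one convexity constraint per device subproblem; this is the textbook form, not necessarily the paper's exact expressions.

```python
# A hedged sketch of per-iteration Dantzig-Wolfe bounds: the restricted
# master objective is an upper bound for a minimisation problem (its
# solution is feasible), and adding each subproblem's non-positive
# minimal reduced cost gives a Lagrangian lower bound.
def dwd_bounds(z_rmp, reduced_costs):
    """z_rmp: restricted master objective value;
    reduced_costs: minimal reduced cost from each device subproblem."""
    upper = z_rmp                                    # feasible -> upper bound
    lower = z_rmp + sum(min(0.0, rc) for rc in reduced_costs)
    return lower, upper

# Termination test: stop iterating when the gap is within tolerance.
lb, ub = dwd_bounds(105.0, [-2.5, -0.5, 0.0])
print(lb, ub, ub - lb <= 1e-6 * abs(ub))  # 102.0 105.0 False -> continue
```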
Abstract:
The advantages of high energy efficiency and economic benefit promote the wide application of microgrids based on combined heat and power (CHP) systems. First, a mathematical model of the CHP-based microgrid is developed. Then, a cost function for the coordination of heat and electric load is proposed. Finally, an optimal dispatch model is developed to achieve economical and coordinated operation of the CHP-based microgrid. Simulation results verify the effectiveness of the proposed dispatch model, which is a powerful tool for the energy management of CHP-based microgrids with renewable energy resources.
Abstract:
In this paper, we introduce a statistical data-correction framework that aims at improving DSP system performance in the presence of unreliable memories. The proposed signal processing framework implements best-effort error mitigation for signals corrupted by defects in unreliable storage arrays, using a statistical correction function extracted from the signal statistics, a data-corruption model, and an application-specific cost function. An application example from communication systems demonstrates the efficacy of the proposed approach.
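One assumed construction of such a correction function, sketched below for small bit-widths: combine a prior over stored values (the signal statistics) with an i.i.d. bit-flip model (the data-corruption model) and pick, for each read word, the estimate minimising the expected application cost.

```python
# A hedged sketch of one way such a correction function could be built
# (the framework's exact construction is not specified in the abstract):
# for every possible read word, choose the estimate that minimises the
# expected cost under the posterior over stored words.
def build_correction(prior, flip_prob, n_bits, cost):
    """Return a lookup table: read word -> corrected word."""
    words = list(range(2 ** n_bits))

    def p_read_given_stored(y, x):
        flips = bin(x ^ y).count("1")  # i.i.d. bit-flip corruption model
        return (flip_prob ** flips) * ((1 - flip_prob) ** (n_bits - flips))

    table = {}
    for y in words:
        # posterior over stored words, up to a common normalisation
        post = [prior[x] * p_read_given_stored(y, x) for x in words]
        table[y] = min(words, key=lambda z: sum(p * cost(x, z)
                                                for x, p in zip(words, post)))
    return table

# Example: 3-bit signal values, a prior peaked at 3, squared-error cost.
prior = {x: (4 - abs(x - 3)) for x in range(8)}
table = build_correction(prior, 0.1, 3, cost=lambda x, z: (x - z) ** 2)
print(table[7])  # an extreme read value is pulled towards likely values
```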