996 results for penalty functions


Relevance:

100.00%

Publisher:

Abstract:

In this work we present the concept of a penalty function over a Cartesian product of lattices. To build these mappings, we make use of restricted dissimilarity functions and distances between fuzzy sets. We also present an algorithm that extends the weighted voting method for a fuzzy preference relation.
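
For orientation, the real-valued notion that such lattice-valued constructions generalize is commonly stated as follows (a standard formulation, not a quotation from the work): a penalty function is a mapping $P\colon [0,1]^{n}\times[0,1]\to[0,\infty)$ that vanishes when all inputs coincide with the candidate output $y$ and is quasi-convex in $y$ for every input vector, and the aggregation it induces is
$$ f(\mathbf{x}) = \arg\min_{y\in[0,1]} P(\mathbf{x},y). $$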

Relevance:

100.00%

Publisher:

Abstract:

In this work we introduce the definition of restricted dissimilarity functions and link them to other notions, such as metrics. In particular, we show how restricted dissimilarity functions can be used to build penalty functions.
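
One frequently used axiomatization (stated on $[0,1]$, with $d(x,y)=|x-y|$ as the prototypical example) requires a restricted dissimilarity function $d\colon[0,1]^2\to[0,1]$ to satisfy
$$ d(x,y)=d(y,x), \qquad d(x,y)=1 \iff \{x,y\}=\{0,1\}, \qquad d(x,y)=0 \iff x=y, $$
together with a monotonicity condition: if $x\le y\le z$, then $d(x,y)\le d(x,z)$ and $d(y,z)\le d(x,z)$.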

Relevance:

100.00%

Publisher:

Abstract:

In image processing, and particularly in image reduction, averaging aggregation functions play an important role. In this work we study the aggregation of color values (RGB) and present an image reduction algorithm for RGB color images. For this purpose, we define and study aggregation functions and penalty functions on product lattices. We show how the arithmetic mean and the median can be obtained by minimizing specific penalty functions. Moreover, we study other penalty functions and show that, in general, aggregation functions on product lattices do not coincide with the Cartesian product of the corresponding aggregation functions. Finally, we present an experimental study in which we test our reduction algorithm and analyze the stability of the penalty functions on images affected by noise.
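
In the scalar case, the penalty functions behind the arithmetic mean and the median are typically the quadratic and the absolute penalty; the sketch below (an illustration, not the paper's RGB reduction algorithm) checks numerically that minimizing $\sum_i (y-x_i)^2$ yields the mean and minimizing $\sum_i |y-x_i|$ yields a median.

```python
# Scalar check: the quadratic penalty recovers the arithmetic mean,
# the absolute penalty recovers a median.
import numpy as np

x = np.array([0.2, 0.4, 0.4, 0.9])          # sample inputs in [0, 1]
grid = np.linspace(0.0, 1.0, 10001)          # candidate outputs y

quad = ((grid[:, None] - x) ** 2).sum(axis=1)   # quadratic penalty P(x, y)
absv = np.abs(grid[:, None] - x).sum(axis=1)    # absolute penalty P(x, y)

print(grid[quad.argmin()], x.mean())         # both approx. 0.475
print(grid[absv.argmin()], np.median(x))     # both approx. 0.4
```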

Relevance:

100.00%

Publisher:

Abstract:

In this work we study the relation between restricted dissimilarity functions (and, more generally, dissimilarity-like functions) and penalty functions, and the possibility of building the latter from the former. Several results on convexity and quasiconvexity are also considered.
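
A natural construction of this kind (a sketch of the general idea, not necessarily the exact form studied in the work) is
$$ P(\mathbf{x},y) = \sum_{i=1}^{n} d(x_i, y) $$
for a restricted dissimilarity function $d$. If each map $y\mapsto d(x_i,y)$ is convex, then $P(\mathbf{x},\cdot)$ is convex and hence quasi-convex; quasi-convexity of the individual terms alone is not enough, since quasi-convexity is not preserved under sums, which is where the convexity and quasiconvexity analysis becomes relevant.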

Relevance:

70.00%

Publisher:

Abstract:

Optimization methods are used in many areas of knowledge, such as engineering, statistics, and chemistry. In many cases derivative-based methods cannot be applied, due to the characteristics of the problem and/or its constraints, for example when the functions involved are non-smooth or their derivatives are not known. To solve this type of problem, a Java-based API has been implemented that includes only derivative-free optimization methods and can be used to solve both constrained and unconstrained problems. For constrained problems, the classic penalty and barrier functions were included in the API. In this paper a new approach to penalty and barrier functions, based on fuzzy logic, is proposed. Two penalty functions that impose a progressive penalization on solutions violating the constraints are discussed: they impose a light penalization when the constraint violation is small and a heavy one when the violation is large. Numerical results obtained on twenty-eight test problems, comparing the proposed fuzzy-logic-based functions with six of the classic penalty and barrier functions, are presented. The results indicate that the proposed penalty functions are very robust and perform very well.
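
The progressive-penalization idea can be sketched as follows (an illustration only: these are not the API's fuzzy-logic functions, and the function names and constants are made up).

```python
# Illustrative progressive penalty for constraints of the form g_i(x) <= 0:
# small violations are penalized lightly, large violations heavily.
import math

def violation(g_values):
    """Total constraint violation for constraints g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def progressive_penalty(g_values, mu=10.0, steepness=4.0):
    """Penalty that grows roughly quadratically at first, then exponentially."""
    v = violation(g_values)
    return mu * (v ** 2) * math.exp(steepness * v)

def penalized_objective(f, constraints, x):
    """Unconstrained surrogate: original objective plus progressive penalty."""
    return f(x) + progressive_penalty([g(x) for g in constraints])

# Example: minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
print(penalized_objective(f, [g], 0.5))   # infeasible point, heavily penalized
print(penalized_objective(f, [g], 1.2))   # feasible point, just f(x)
```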

Relevance:

70.00%

Publisher:

Abstract:

We introduce an algebraic operator framework to study discounted penalty functions in renewal risk models. For inter-arrival and claim size distributions with rational Laplace transform, the usual integral equation is transformed into a boundary value problem, which is solved by symbolic techniques. The factorization of the differential operator can be lifted to the level of boundary value problems, amounting to iteratively solving first-order problems. This leads to an explicit expression for the Gerber-Shiu function in terms of the penalty function.
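
For context, the Gerber-Shiu (expected discounted penalty) function referred to here is usually written as
$$ m(u) = \mathbb{E}\!\left[ e^{-\delta\tau}\, w\!\big(U(\tau^-),\, |U(\tau)|\big)\, \mathbf{1}_{\{\tau<\infty\}} \,\middle|\, U(0)=u \right], $$
where $U$ is the surplus process, $\tau$ the time of ruin, $\delta\ge 0$ the discount rate, $U(\tau^-)$ the surplus immediately before ruin, $|U(\tau)|$ the deficit at ruin, and $w$ the penalty function.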

Relevance:

70.00%

Publisher:

Abstract:

In the case of real-valued inputs, averaging aggregation functions have been studied extensively with results arising in fields including probability and statistics, fuzzy decision-making, and various sciences. Although much of the behavior of aggregation functions when combining standard fuzzy membership values is well established, extensions to interval-valued fuzzy sets, hesitant fuzzy sets, and other new domains pose a number of difficulties. The aggregation of non-convex or discontinuous intervals is usually approached in line with the extension principle, i.e. by aggregating all real-valued input vectors lying within the interval boundaries and taking the union as the final output. Although this is consistent with the aggregation of convex interval inputs, in the non-convex case such operators are not idempotent and may result in outputs which do not faithfully summarize or represent the set of inputs. After giving an overview of the treatment of non-convex intervals and their associated interpretations, we propose a novel extension of the arithmetic mean based on penalty functions that provides a representative output and satisfies idempotency.
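
The idempotency issue can be seen on a toy example. In the sketch below, extension_mean follows the extension principle over finite (possibly non-convex) sets of admissible values, while penalty_mean is a hypothetical penalty-based stand-in (not the operator proposed in the work) that restricts the output to the union of the inputs.

```python
# Toy sketch of the non-idempotency of extension-principle aggregation
# versus a (hypothetical) penalty-based pick restricted to the input values.
from itertools import product

def extension_mean(inputs):
    """Extension-principle mean: aggregate every real-valued selection."""
    return sorted({sum(sel) / len(sel) for sel in product(*inputs)})

def penalty_mean(inputs):
    """Pick a representative from the union of the inputs by minimizing a
    quadratic penalty to all admissible values (illustrative only)."""
    candidates = sorted(set().union(*inputs))
    cost = lambda y: sum((y - v) ** 2 for s in inputs for v in s)
    return min(candidates, key=cost)

same = [{0.0, 1.0}, {0.0, 1.0}]          # identical non-convex inputs
print(extension_mean(same))               # [0.0, 0.5, 1.0] -- not idempotent
print(penalty_mean(same))                 # 0.0 (tie) -- stays in the input set
```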

Relevance:

60.00%

Publisher:

Abstract:

Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the order of the corneal-surface model because their penalty functions are too weak, while bootstrap-based techniques tend to underestimate it or require extensive processing. In this paper, we propose the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative for estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that agree more closely with clinical expectations and provides a means of distinguishing normal corneal surfaces from astigmatic and keratoconic ones.
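
For context, criteria in this family select the order $k$ minimizing a penalized log-likelihood of the general form
$$ \mathrm{IC}(k) = -2\ln L_k + k\, C_n, $$
where AIC and BIC correspond to $C_n=2$ and $C_n=\ln n$. The EDC instead lets $C_n$ grow with the sample size $n$ under conditions that, roughly, keep $C_n/n\to 0$ while still forcing $C_n$ to diverge fast enough, which strengthens the penalty against order overestimation. (A sketch of the usual formulation; the exact regularity conditions are in the EDC literature.)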

Relevance:

60.00%

Publisher:

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
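
In this framework a model class $\mathcal{F}_k$ is chosen by minimizing a penalized empirical loss,
$$ \hat{k} = \arg\min_k \left( \hat{L}_n(\hat{f}_k) + \mathrm{pen}_n(k) \right), $$
and, for the maximal-discrepancy penalty, $\mathrm{pen}_n(k)$ is (up to constants) the largest gap a function in $\mathcal{F}_k$ can create between its average loss on the two halves of the training sample:
$$ \hat{M}_n(\mathcal{F}_k) = \sup_{f\in\mathcal{F}_k} \left( \frac{2}{n}\sum_{i=1}^{n/2} \ell\big(f(X_i),Y_i\big) - \frac{2}{n}\sum_{i=n/2+1}^{n} \ell\big(f(X_i),Y_i\big) \right). $$
(A sketch of the standard formulation; constants and signs vary across presentations.)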

Relevance:

60.00%

Publisher:

Abstract:

Swarm intelligence algorithms are applied to the optimal control of flexible smart structures bonded with piezoelectric actuators and sensors. The optimal locations of actuators/sensors and the feedback gain are obtained by maximizing the energy dissipated by the feedback control system. We provide a mathematical proof that the system is uncontrollable if the actuators and sensors are placed at the nodal points of the mode shapes. Finding the optimal locations of actuators/sensors and the feedback gain is a constrained non-linear optimization problem, which is converted into an unconstrained one by using penalty functions. Two swarm intelligence algorithms, namely the artificial bee colony (ABC) and glowworm swarm optimization (GSO) algorithms, are considered to obtain the optimal solution. In earlier published research, a cantilever beam with one or two collocated actuator/sensor pairs was considered and numerical results were obtained using genetic algorithms and gradient-based optimization methods. We consider the same problem and present the results obtained with the ABC and GSO algorithms, and we extend the cantilever beam problem to five collocated actuator/sensor pairs, again reporting the numerical results obtained with ABC and GSO. The effect of increasing the number of design variables (locations of actuators and sensors and the gain) on the optimization process is investigated. It is shown that the ABC and GSO algorithms are robust and are good choices for the optimization of smart structures.
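
The constrained-to-unconstrained conversion step can be sketched as follows (illustrative; the exact penalty paired with ABC/GSO in the paper may differ, and the toy objective stands in for the dissipated-energy criterion).

```python
# Quadratic exterior penalty: the swarm algorithm then minimizes `penalized`
# as an ordinary black-box objective.
def penalized(objective, ineq_constraints, x, rho=1e3):
    """Penalized objective for constraints written as g_i(x) <= 0."""
    violation = sum(max(0.0, g(x)) ** 2 for g in ineq_constraints)
    return objective(x) + rho * violation

# Toy stand-in: maximize e(x) = -(x - 0.7)**2 on 0 <= x <= 0.5,
# i.e. minimize (x - 0.7)**2 with constraints -x <= 0 and x - 0.5 <= 0.
objective = lambda x: (x - 0.7) ** 2
constraints = [lambda x: -x, lambda x: x - 0.5]
print(penalized(objective, constraints, 0.9))   # infeasible: heavily penalized
print(penalized(objective, constraints, 0.5))   # feasible boundary optimum
```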

Relevance:

60.00%

Publisher:

Abstract:

A new approach that can easily incorporate any generic penalty function into diffuse optical tomographic image reconstruction is introduced to show the utility of nonquadratic penalty functions. The penalty functions used include the quadratic (ℓ2), absolute (ℓ1), Cauchy, and Geman-McClure penalties. The regularization parameter in each case was obtained automatically using the generalized cross-validation method. The reconstruction results were systematically compared with each other using quantitative metrics such as relative error and Pearson correlation. The results indicate that, while the quadratic penalty may provide better separation between two closely spaced targets, its contrast recovery capability is limited, and the sparsity-promoting penalties, such as ℓ1, Cauchy, and Geman-McClure, are better suited to reconstructing high-contrast and complex-shaped targets, with the Geman-McClure penalty performing best.
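
For reference, one common parameterization of the four penalties compared here is sketched below (scaling conventions for the Cauchy and Geman-McClure penalties vary across the literature).

```python
# Common forms of the four penalties; t is a residual/coefficient value and
# sigma a tuning constant.
import numpy as np

def quadratic(t):                    # l2 penalty
    return t ** 2

def absolute(t):                     # l1 penalty, promotes sparsity
    return np.abs(t)

def cauchy(t, sigma=1.0):            # Cauchy (Lorentzian) penalty
    return (sigma ** 2 / 2.0) * np.log1p((t / sigma) ** 2)

def geman_mcclure(t, sigma=1.0):     # Geman-McClure penalty, bounded above
    return t ** 2 / (t ** 2 + sigma ** 2)

t = np.linspace(-3, 3, 7)
print(quadratic(t), absolute(t), cauchy(t), geman_mcclure(t), sep="\n")
```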

Relevance:

60.00%

Publisher:

Abstract:

This paper presents new methods for computing the step sizes of the subband-adaptive iterative shrinkage-thresholding algorithms proposed by Bayram & Selesnick and Vonesch & Unser. The methods yield tighter wavelet-domain bounds of the system matrix, leading to improved convergence speeds. They are directly applicable to non-redundant wavelet bases, and we also adapt them to redundant frames. It turns out that the simplest and most intuitive setting for the step sizes, which ignores subband aliasing, is often satisfactory in practice. We show that our methods can be used to advantage with reweighted least-squares penalty functions as well as L1 penalties. We emphasize that the algorithms presented here are suitable for performing inverse filtering on very large datasets, including 3D data, since inversions are applied only to diagonal matrices and fast transforms are used to achieve all matrix-vector products.
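
For reference, the role of the step size, and hence of the bound on the system matrix, can be seen in a plain single-step-size ISTA sketch; the methods above refine this by using subband-adaptive steps in the wavelet domain.

```python
# Plain ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1; the step 1/L comes from a
# bound L on ||A^T A||, so a tighter bound allows a larger step.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iters=500):
    L = np.linalg.norm(A, 2) ** 2           # spectral-norm-based bound
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 60]] = [1.0, -2.0, 0.5]
y = A @ x_true + 0.01 * rng.standard_normal(40)

x_hat = ista(A, y, lam=0.5)
print(sorted(np.argsort(-np.abs(x_hat))[:3]))   # ideally [3, 17, 60]
```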