991 results for penalty-based aggregation functions


Relevance: 100.00%

Abstract:

Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the temporal order or the simultaneity of the presentations. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or jointly for all three tasks (for the common cases in which two or even all three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters, and a further routine obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
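
As an illustration of the kind of model-based fit these routines perform, here is a minimal sketch (not the authors' published code) that fits an independent-channels model to binary TOJ data by maximum likelihood. Under exponential arrival latencies, the arrival-time difference follows an asymmetric Laplace distribution; the names lam1, lam2, and the toy data are assumptions, and the threshold for "simultaneous" responses that the full trichotomous model adds is omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def diff_cdf(d, soa, lam1, lam2):
    """CDF of D = soa + X2 - X1, the arrival-time difference when
    X1 ~ Exp(lam1) and X2 ~ Exp(lam2); D is asymmetric Laplace."""
    z = d - soa
    return np.where(z >= 0,
                    1.0 - lam1 / (lam1 + lam2) * np.exp(-lam2 * z),
                    lam2 / (lam1 + lam2) * np.exp(lam1 * z))

def neg_log_lik(params, soa, n_first, n_second):
    """Binomial negative log-likelihood for a binary TOJ task in which
    the observer reports 'stimulus 1 first' whenever D > 0."""
    lam1, lam2 = np.exp(params)                     # log-scale keeps rates positive
    p_first = 1.0 - diff_cdf(0.0, soa, lam1, lam2)  # P(D > 0) at each SOA
    p_first = np.clip(p_first, 1e-9, 1.0 - 1e-9)
    return -np.sum(n_first * np.log(p_first) + n_second * np.log(1.0 - p_first))

# toy data: SOAs in ms (stimulus 2 onset minus stimulus 1 onset) and counts
soa = np.array([-80.0, -40.0, -20.0, 0.0, 20.0, 40.0, 80.0])
n_first = np.array([2, 8, 15, 25, 35, 42, 48])   # 'stimulus 1 first' out of 50
n_second = 50 - n_first

fit = minimize(neg_log_lik, x0=np.log([0.05, 0.05]),
               args=(soa, n_first, n_second), method="Nelder-Mead")
print("estimated rates (1/ms):", np.exp(fit.x))
```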

Relevance: 100.00%

Abstract:

Implication and aggregation functions play important complementary roles in the field of fuzzy logic. Both have been intensively investigated since the early 1980s, revealing a tight relationship between them. However, the main results regarding this relationship, published by Fodor and by Demirli and De Baets in the 1990s, have been poorly disseminated and are nowadays somewhat obsolete due to subsequent advances in the field. The present paper deals with the translation of the classical logical equivalence p → q ≡ ¬p ∨ q, often called material implication, to the fuzzy framework, which establishes a one-to-one correspondence between implication functions and disjunctors (the class of aggregation functions that extend the Boolean disjunction to the unit interval). The construction of implication functions from disjunctors via negation functions, and vice versa, is reviewed, stressing the properties of disjunctors (respectively, implication functions) that ensure certain properties of the resulting implication functions (respectively, disjunctors).
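
As a concrete illustration of the material-implication construction, the sketch below (illustrative only, not taken from the paper) builds an implication function I(x, y) = S(N(x), y) from a disjunctor S and a negation N on [0, 1]. Taking the probabilistic sum as S and the standard negation N(x) = 1 − x yields the well-known Reichenbach implication.

```python
def prob_sum(a, b):
    """Probabilistic sum, a disjunctor: extends Boolean OR to [0, 1]."""
    return a + b - a * b

def std_negation(x):
    """Standard strong negation N(x) = 1 - x."""
    return 1.0 - x

def implication_from_disjunctor(S, N):
    """Material-implication construction: I(x, y) = S(N(x), y)."""
    return lambda x, y: S(N(x), y)

reichenbach = implication_from_disjunctor(prob_sum, std_negation)
# boundary behaviour matches the Boolean truth table:
assert reichenbach(0.0, 0.0) == 1.0 and reichenbach(1.0, 1.0) == 1.0
assert reichenbach(1.0, 0.0) == 0.0
print(reichenbach(0.8, 0.3))  # 1 - x + x*y, here ≈ 0.44
```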

Relevance: 100.00%

Abstract:

This chapter gives an overview of aggregation functions and their use in recommender systems. The classical weighted average lies at the heart of various recommendation mechanisms, often being employed to combine item feature scores or predict ratings from similar users. Some improvements to accuracy and robustness can be achieved by aggregating different measures of similarity or using an average of recommendations obtained through different techniques. Advances made in the theory of aggregation functions therefore have the potential to deliver increased performance to many recommender systems. We provide definitions of some important families and properties, sophisticated methods of construction, and various examples of aggregation functions in the domain of recommender systems.
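
To make the role of the weighted average concrete, here is a minimal sketch (illustrative only; the variable names and the use of similarities as weights are assumptions in the spirit of the chapter) of the classical user-based prediction, where a target user's rating for an item is the similarity-weighted mean of the ratings given by similar users.

```python
import numpy as np

def predict_rating(similarities, neighbor_ratings):
    """Weighted arithmetic mean: similarities act as aggregation weights."""
    w = np.asarray(similarities, dtype=float)
    r = np.asarray(neighbor_ratings, dtype=float)
    return np.sum(w * r) / np.sum(w)

# three similar users rated the item 4, 5 and 2; similarity weights their votes
print(predict_rating([0.9, 0.7, 0.2], [4, 5, 2]))  # ≈ 4.17
```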

Relevance: 100.00%

Abstract:

The use of supervised learning techniques for fitting weights and/or generator functions of weighted quasi-arithmetic means – a special class of idempotent and nondecreasing aggregation functions – to empirical data has already been considered in a number of papers. Nevertheless, some important issues have not yet been discussed in the literature. In the first part of this two-part contribution we deal with the concept of regularization, a standard machine-learning technique applied to improve fit quality on test and validation data samples. Due to the constraints on the weighting vector, quite different methods can be used in this framework than in ordinary regression models. Moreover, fitting weighted quasi-arithmetic means to empirical data has so far only been performed approximately, via the so-called linearization technique. In this paper we consider exact solutions to such special optimization tasks and indicate cases where linearization leads to much worse solutions.
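
For intuition, the following is a minimal sketch (with assumed names, not the paper's exact formulation) of fitting the weight vector of a weighted quasi-arithmetic mean M(x) = g⁻¹(Σᵢ wᵢ g(xᵢ)) to data. An L2 regularizer pulls the weights toward the uniform vector, and the simplex constraint on the weighting vector is handled directly by SLSQP.

```python
import numpy as np
from scipy.optimize import minimize

g, g_inv = np.log, np.exp            # generator pair (here: geometric mean)

def wqam(w, X):
    """Weighted quasi-arithmetic mean g^{-1}(sum_i w_i g(x_i)), row-wise."""
    return g_inv(g(X) @ w)

def objective(w, X, y, lam):
    """Squared fitting error plus L2 regularization toward uniform weights."""
    n = X.shape[1]
    return np.sum((wqam(w, X) - y) ** 2) + lam * np.sum((w - 1.0 / n) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(200, 3))          # inputs in (0, 1]
true_w = np.array([0.5, 0.3, 0.2])
y = wqam(true_w, X) + rng.normal(0, 0.01, 200)    # noisy targets

n = X.shape[1]
fit = minimize(objective, x0=np.full(n, 1.0 / n), args=(X, y, 0.01),
               method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints={"type": "eq", "fun": lambda w: np.sum(w) - 1.0})
print("fitted weights:", np.round(fit.x, 3))
```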

Relevance: 100.00%

Abstract:

Image reduction is a crucial task in image processing, underpinning many practical applications. This work proposes novel image reduction operators based on non-monotonic averaging aggregation functions. The technique of penalty function minimisation is used to derive a novel mode-like estimator capable of identifying the most appropriate pixel value to represent a subset of the original image. The performance of this aggregation function and of several traditional robust estimators of location is objectively assessed by applying image reduction within a facial recognition task. The FERET evaluation protocol is applied to confirm that these non-monotonic functions sustain task performance relative to recognition using non-reduced images, and significantly improve performance on query images corrupted by noise. These results extend the state of the art in image reduction based on aggregation functions and provide a basis for efficiency and accuracy improvements in practical computer vision applications.
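
To illustrate the penalty-minimisation idea (a generic sketch, not the paper's exact estimator), the code below picks the representative value for a block of pixels by minimising a truncated-quadratic penalty. This behaves like a mean on tightly clustered values and like a mode in the presence of outliers, since far-away values contribute only a capped cost.

```python
import numpy as np

def penalty(y, values, c=30.0):
    """Truncated quadratic: values beyond c contribute a constant cost."""
    return np.sum(np.minimum((values - y) ** 2, c ** 2))

def mode_like(values, c=30.0):
    """Penalty-based aggregation: representative value minimising the penalty.
    A coarse grid search over the observed range keeps the sketch simple."""
    grid = np.linspace(values.min(), values.max(), 256)
    costs = [penalty(y, values, c) for y in grid]
    return grid[int(np.argmin(costs))]

block = np.array([100.0, 102.0, 99.0, 101.0, 250.0])  # one impulsive-noise pixel
print(np.mean(block))      # 130.4 -- dragged toward the outlier
print(mode_like(block))    # ~100  -- the outlier contributes only the cap
```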

Relevance: 100.00%

Abstract:

Optimization methods are used to solve problems in many fields, such as engineering, statistics, and chemistry. In many cases it is not possible to use derivative-based methods, due to the characteristics of the problem and/or its constraints, for example when the functions involved are non-smooth or their derivatives are not known. To solve this type of problem, a Java-based API has been implemented that includes only derivative-free optimization methods and can be used to solve both constrained and unconstrained problems. For solving constrained problems, the classic penalty and barrier functions were included in the API. In this paper a new approach to penalty and barrier functions, based on fuzzy logic, is proposed. Two penalty functions that impose a progressive penalization on solutions violating the constraints are discussed: they impose a low penalty when the constraint violation is small and a heavy penalty when it is large. Numerical results obtained on twenty-eight test problems, comparing the proposed fuzzy-logic-based functions with six of the classic penalty and barrier functions, are presented. The results show that the proposed penalty functions are not only very robust but also perform very well.
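
The progressive-penalization idea can be sketched as follows (an illustrative Python sketch with assumed names and thresholds, not the paper's Java API): constraint violations pass through an increasing function that charges little for small violations and grows steeply for large ones, and the penalised objective is handed to a derivative-free optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def violation(g_vals):
    """Total constraint violation for constraints of the form g_i(x) <= 0."""
    return np.sum(np.maximum(g_vals, 0.0))

def progressive_penalty(v, mild=1.0, steep=50.0, knee=0.1):
    """Low cost below the knee, rapidly growing cost above it."""
    if v <= knee:
        return mild * v ** 2
    return mild * knee ** 2 + steep * (v - knee) ** 2

def penalized(x):
    f = (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2   # objective
    g = np.array([x[0] + x[1] - 2.0])           # constraint x0 + x1 <= 2
    return f + progressive_penalty(violation(g))

res = minimize(penalized, x0=np.zeros(2), method="Nelder-Mead")
print(res.x)   # approaches the constrained optimum near (1.5, 0.5)
```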

Relevance: 100.00%

Abstract:

The web is a rich resource for information discovery; as a result, web mining is a hot topic. However, a reliable mining result depends on the reliability of the data set. Every second, the web generates huge amounts of data, such as web page requests and file transfers. These data reflect human behavior in cyberspace and are therefore valuable for analysis in various disciplines, e.g. social science and network security. How to store the data is a challenge. A usual strategy is to save an abstract of the data, for instance using aggregation functions to preserve the features of the original data in much smaller space. A key problem, however, is that such information can be distorted by the presence of illegitimate traffic, e.g. botnet recruitment scanning and DDoS attack traffic. An important consideration in web-related knowledge discovery is therefore the robustness of the aggregation method, which in turn may be affected by the reliability of the network traffic data. In this chapter, we first present methods of aggregation, and then we employ information distances to filter out anomalous data as a preparation for web data mining.
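
A minimal sketch of the filter-then-aggregate idea (illustrative; the feature, distance, and threshold choices are assumptions, not the chapter's exact method): traffic is summarised per time window as a histogram over request types, windows whose information distance from a reference profile is too large are discarded, and only the surviving windows are aggregated.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two discrete distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def filter_and_aggregate(windows, reference, threshold=0.5):
    """Drop windows far (in KL divergence) from the reference profile,
    then aggregate the survivors with an arithmetic mean."""
    kept = [w for w in windows if kl(w, reference) <= threshold]
    return np.mean(kept, axis=0), len(windows) - len(kept)

# toy histograms over 4 request types; the last window looks like a scan burst
reference = np.array([40.0, 30.0, 20.0, 10.0])
windows = [np.array([38.0, 31.0, 21.0, 10.0]),
           np.array([42.0, 29.0, 19.0, 10.0]),
           np.array([2.0, 3.0, 5.0, 90.0])]     # anomalous
agg, dropped = filter_and_aggregate(windows, reference)
print(agg, "windows dropped:", dropped)
```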

Relevance: 100.00%

Abstract:

In image processing, and particularly in image reduction, averaging aggregation functions play an important role. In this work we study the aggregation of color values (RGB) and present an image reduction algorithm for RGB color images. For this purpose, we define and study aggregation functions and penalty functions on product lattices. We show how the arithmetic mean and the median can be obtained by minimizing specific penalty functions. Moreover, we study other penalty functions and show that, in general, aggregation functions on product lattices do not coincide with the Cartesian product of the corresponding aggregation functions. Finally, we carry out an experimental study in which we test our reduction algorithm and analyze the stability of the penalty functions in images affected by noise.
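
The two penalty characterisations mentioned here are, in the scalar case, the standard ones (a well-known identity, stated for orientation; the paper works with lattice-valued analogues):

```latex
% arithmetic mean and median as minimizers of penalty functions
\[
  \bar{x} = \arg\min_{y} \sum_{i=1}^{n} (x_i - y)^2,
  \qquad
  \mathrm{med}(x_1,\dots,x_n) = \arg\min_{y} \sum_{i=1}^{n} |x_i - y| .
\]
```

On the product lattice of RGB values these penalties can also be applied componentwise, but, as the abstract notes, a penalty defined directly on the lattice need not decompose into the Cartesian product of the channel-wise aggregations.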

Relevance: 100.00%

Abstract:

In this work we study the relation between restricted dissimilarity functions (and, more generally, dissimilarity-like functions) and penalty functions, and the possibility of building the latter using the former. Several results on convexity and quasiconvexity are also considered.
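
As a pointer to the kind of construction studied, one typical route (a generic formulation, not necessarily the paper's exact one) builds a penalty function from a dissimilarity d by aggregating the pointwise dissimilarities between the inputs and the candidate output:

```latex
% a penalty function built from a (restricted) dissimilarity d on [0,1]
\[
  P(\mathbf{x}, y) = \sum_{i=1}^{n} d(x_i, y),
  \qquad
  A(\mathbf{x}) = \arg\min_{y \in [0,1]} P(\mathbf{x}, y) .
\]
% quasiconvexity of P(x, .) in y makes the set of minimizers an interval,
% so the induced aggregation A is well defined up to a tie-breaking rule
```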

Relevance: 100.00%

Abstract:

Averaging behaviour of aggregation functions depends on the fundamental property of monotonicity with respect to all arguments. Unfortunately, this property is limiting: it excludes many important averaging functions from the theoretical framework. We propose a definition of weakly monotone averaging functions that brings many commonly used non-monotonic means into the same framework as averaging aggregation functions. Weakly monotone averages are robust to outliers and noise, making them extremely important in practical applications. We show that several robust estimators of location are in fact weakly monotone, and we provide sufficient conditions for weak monotonicity of the Lehmer and Gini means and of some mixture functions. In particular, we show that mixture functions with Gaussian kernels, which arise frequently in image and signal processing applications, are weakly monotonic averages. Our concept of weak monotonicity provides a sound theoretical and practical basis for understanding both monotone and non-monotone averaging functions within the same framework, allowing us to relate these previously disparate areas of research and gain a deeper understanding of averaging aggregation methods.
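
For reference, the notion at issue can be stated compactly (the standard formulation in this literature; the Gaussian kernel shown is one of the cases the chapter analyses):

```latex
% weak monotonicity: shifting all inputs up by the same amount
% never decreases the output
\[
  F(x_1 + c, \ldots, x_n + c) \ge F(x_1, \ldots, x_n)
  \quad \text{for all } c > 0 .
\]
% a mixture function with weighting kernel w > 0; Gaussian kernels such as
% w(t) = exp(-(t - a)^2 / (2 s^2)) are common in image and signal processing
\[
  M_w(x_1, \ldots, x_n) =
  \frac{\sum_{i=1}^{n} w(x_i)\, x_i}{\sum_{i=1}^{n} w(x_i)} .
\]
```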

Relevance: 100.00%

Abstract:

Weak monotonicity was recently proposed as a relaxation of the monotonicity condition for averaging aggregation, and weakly monotone functions were shown to have desirable properties when averaging data corrupted with outliers or noise. We extended the study of weakly monotone averages by analyzing their ϕ-transforms, and we established weak monotonicity of several classes of averaging functions, in particular Gini means and mixture operators. Mixture operators with Gaussian weighting functions were shown to be weakly monotone for a broad range of their parameters. This study assists in identifying averaging functions suitable for data analysis and image processing tasks in the presence of outliers.
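
The ϕ-transform referred to here is the standard one (stated for orientation):

```latex
% the phi-transform of an averaging function F under a continuous,
% strictly monotone bijection phi of the unit interval
\[
  F_{\varphi}(x_1, \ldots, x_n)
  =
  \varphi^{-1}\!\bigl( F(\varphi(x_1), \ldots, \varphi(x_n)) \bigr).
\]
```

The question analyzed is then when weak monotonicity of F survives this transformation.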

Relevance: 100.00%

Abstract:

In a group decision making setting, we consider the potential impact an expert can have on the overall ranking by providing a biased assessment of the alternatives that differs substantially from the majority opinion. In the framework of similarity based averaging functions, we show that some alternative approaches to weighting the experts' inputs during the aggregation process can minimize the influence the biased expert is able to exert.
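
One simple instance of similarity-based weighting (an illustrative sketch with assumed details, not the paper's exact scheme) scores each expert by how close their assessment vector is to everyone else's and weights the aggregate accordingly, so a sharply deviating expert receives little weight.

```python
import numpy as np

def similarity_weights(scores):
    """Weight each expert by inverse mean distance to the other experts."""
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    dist = np.array([np.mean([np.linalg.norm(scores[i] - scores[j])
                              for j in range(n) if j != i]) for i in range(n)])
    w = 1.0 / (1.0 + dist)          # closer to the majority -> larger weight
    return w / w.sum()

def aggregate(scores):
    w = similarity_weights(scores)
    return w @ np.asarray(scores, dtype=float)

# four experts score three alternatives; expert 4 is heavily biased
scores = [[0.7, 0.5, 0.2],
          [0.8, 0.4, 0.3],
          [0.7, 0.6, 0.2],
          [0.1, 0.1, 0.9]]          # the outlier
print(np.round(aggregate(scores), 3))
print(np.round(np.mean(scores, axis=0), 3))  # plain mean, pulled by the bias
```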

Relevance: 100.00%

Abstract:

We propose a framework for eliciting and aggregating pairwise preference relations based on the assumption of an underlying fuzzy partial order. We also propose some linear programming optimization methods for ensuring consistency either as part of the aggregation phase or as a pre- or post-processing task. We contend that this framework of pairwise-preference relations, based on the Kemeny distance, can be less sensitive to extreme or biased opinions and is also less complex to elicit from experts. We provide some examples and outline their relevant properties and associated concepts.
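
To fix ideas, the distance underpinning this framework can be sketched as follows (a common formulation for fuzzy preference relations; the exact variant used in the paper may differ): the Kemeny-style distance between two reciprocal pairwise preference matrices sums the absolute differences of their upper-triangular entries.

```python
import numpy as np

def kemeny_distance(P, Q):
    """Sum of absolute differences over the pairs i < j of two reciprocal
    preference relations (p_ij + p_ji = 1, so one triangle suffices)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    iu = np.triu_indices_from(P, k=1)
    return float(np.sum(np.abs(P[iu] - Q[iu])))

# two experts' reciprocal preference relations over three alternatives
P = np.array([[0.5, 0.8, 0.6],
              [0.2, 0.5, 0.7],
              [0.4, 0.3, 0.5]])
Q = np.array([[0.5, 0.6, 0.5],
              [0.4, 0.5, 0.9],
              [0.5, 0.1, 0.5]])
print(kemeny_distance(P, Q))   # |0.8-0.6| + |0.6-0.5| + |0.7-0.9| = 0.5
```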