997 results for aggregation operators


Relevance: 60.00%

Abstract:

Monotonicity with respect to all arguments is fundamental to the definition of aggregation functions, which are one of the basic tools in knowledge-based systems. The functions known as means (or averages) are idempotent and typically monotone; however, there are many important classes of means that are non-monotone. Weak monotonicity was recently proposed as a relaxation of the monotonicity condition for averaging functions. In this paper we discuss the concepts of directional and cone monotonicity, and monotonicity with respect to a majority of inputs and to coalitions of inputs. We establish the relations between these various kinds of monotonicity and illustrate them with examples. We also provide a construction method for cone monotone functions.
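As a loose illustration of directional monotonicity (not taken from the paper), the sketch below numerically probes whether a function increases along a direction r, i.e. whether f(x + c·r) ≥ f(x) for a few sampled step sizes c; the toy function and the directions chosen are hypothetical examples.

```python
def is_directionally_increasing(f, x, r, steps=(0.01, 0.1, 0.5), tol=1e-9):
    """Numerically probe f(x + c*r) >= f(x) for a few positive steps c.
    A passing probe is evidence, not a proof, of directional monotonicity."""
    fx = f(x)
    return all(f([xi + c * ri for xi, ri in zip(x, r)]) >= fx - tol
               for c in steps)

# Toy function: increasing along (2, 1) but decreasing along (1, 2).
f = lambda v: v[0] - v[1]

print(is_directionally_increasing(f, [0.3, 0.4], [2, 1]))  # True
print(is_directionally_increasing(f, [0.3, 0.4], [1, 2]))  # False
```

Weak monotonicity is the special case r = (1, …, 1): the function may not be monotone in each argument separately, but cannot decrease when all inputs are shifted up together.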

Relevance: 60.00%

Abstract:

Monotonicity with respect to all arguments is fundamental to the definition of aggregation functions. It is also a limiting property that results in many important nonmonotonic averaging functions being excluded from the theoretical framework. This work proposes a definition for weakly monotonic averaging functions, studies some properties of this class of functions, and proves that several families of important nonmonotonic means are actually weakly monotonic averaging functions. Specifically, we provide sufficient conditions for weak monotonicity of the Lehmer mean and generalized mixture operators. We establish weak monotonicity of several robust estimators of location and conditions for weak monotonicity of a large class of penalty-based aggregation functions. These results permit a proof of the weak monotonicity of the class of spatial-tonal filters, which includes important members such as the bilateral filter and anisotropic diffusion. Our concept of weak monotonicity provides a sound theoretical and practical basis by which (monotonic) aggregation functions and nonmonotonic averaging functions can be related within the same framework, allowing us to bridge the gap between these previously disparate areas of research.
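As a small numeric sketch (the parameter choices below are mine, not the paper's), the Lehmer mean is not monotone in each argument, yet a common positive shift of all arguments does not lower its value at the points probed, which is exactly the behaviour weak monotonicity captures:

```python
def lehmer_mean(x, p=2):
    """Lehmer mean L_p(x) = sum(x_i^p) / sum(x_i^(p-1)), for positive inputs."""
    return sum(xi ** p for xi in x) / sum(xi ** (p - 1) for xi in x)

x = [0.1, 1.0]
# Not monotone: raising one argument can lower the output.
print(lehmer_mean([0.2, 1.0]) < lehmer_mean(x))               # True
# Weakly monotone behaviour: shifting both arguments up together does not.
print(lehmer_mean([0.1 + 0.3, 1.0 + 0.3]) >= lehmer_mean(x))  # True
```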

Relevance: 60.00%

Abstract:

Weak monotonicity was recently proposed as a relaxation of the monotonicity condition for averaging aggregation, and weakly monotone functions were shown to have desirable properties when averaging data corrupted with outliers or noise. We extended the study of weakly monotone averages by analyzing their ϕ-transforms, and we established weak monotonicity of several classes of averaging functions, in particular Gini means and mixture operators. Mixture operators with Gaussian weighting functions were shown to be weakly monotone for a broad range of their parameters. This study assists in identifying averaging functions suitable for data analysis and image processing tasks in the presence of outliers.
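A minimal sketch of a mixture operator with a Gaussian weighting function (the centre and width below are illustrative choices, not the parameter range established in the study):

```python
import math

def gaussian_mixture(x, center=1.0, sigma=1.0):
    """Mixture operator: a weighted mean whose weights depend on the inputs
    themselves, here w(t) = exp(-(t - center)^2 / (2 * sigma^2))."""
    w = [math.exp(-((t - center) ** 2) / (2 * sigma ** 2)) for t in x]
    return sum(wi * ti for wi, ti in zip(w, x)) / sum(w)

# Averaging behaviour: equal inputs are returned (near) unchanged, and the
# output always lies between the minimum and maximum input.
print(gaussian_mixture([0.5, 0.5, 0.5]))
```

Because the weights vary with the inputs, such operators are averaging but generally not monotone, which is why their weak monotonicity has to be established separately.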

Relevance: 60.00%

Abstract:

The methodology for selecting the individual numerical scale and prioritization method has recently been presented and justified in the analytic hierarchy process (AHP). In this study, we further propose a novel AHP-group decision making (GDM) model in a local context (a unique criterion), based on the individual selection of the numerical scale and prioritization method. The resolution framework of the AHP-GDM with the individual numerical scale and prioritization method is first proposed. Then, based on linguistic Euclidean distance (LED) and linguistic minimum violations (LMV), the novel consensus measure is defined so that the consensus degree among decision makers who use different numerical scales and prioritization methods can be analyzed. Next, a consensus reaching model is proposed to help decision makers improve the consensus degree. In this consensus reaching model, the LED-based and LMV-based consensus rules are proposed and used. Finally, a new individual consistency index and its properties are proposed for the use of the individual numerical scale and prioritization method in the AHP-GDM. Simulation experiments and numerical examples are presented to demonstrate the validity of the proposed model.
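As background for the prioritization step (a generic sketch, not the paper's specific model), one common prioritization method derives a priority vector from a pairwise comparison matrix via row geometric means; a plain Euclidean distance between two experts' priority vectors then gives a simple consensus-style measure in the spirit of the LED:

```python
import math

def ahp_priorities(M):
    """Geometric-mean prioritization: w_i proportional to (prod_j m_ij)^(1/n)."""
    n = len(M)
    g = [math.prod(row) ** (1.0 / n) for row in M]
    s = sum(g)
    return [gi / s for gi in g]

def euclidean_distance(u, v):
    """Euclidean distance between two priority vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# A perfectly consistent 3x3 comparison matrix (m_ij = w_i / w_j).
M = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
print(ahp_priorities(M))  # approximately [4/7, 2/7, 1/7]
```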

Relevance: 60.00%

Abstract:

The web is continuously evolving into a vast collection of data, which motivates the interest in collecting and merging these data in a meaningful way. Based on such web data, this paper describes the building of an ontology resting on fuzzy clustering techniques. Through the continual harvesting of folksonomies by web agents, a fully automatic fuzzy grassroots ontology is built. This self-updating ontology can then be used for several practical applications in fields such as web structuring, web searching, and web knowledge visualization. A potential application for online reputation analysis, the added value, and possible future studies are discussed in the conclusion.

Relevance: 40.00%

Abstract:

After studying several reduction algorithms found in the literature, we notice that no axiomatic definition of this concept exists. In this work we propose a definition of weak reduction operators, together with the properties of the original image that reduced images must preserve. From this definition, we study whether two methods of image reduction, undersampling and the fuzzy transform, satisfy the conditions of weak reduction operators.
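A minimal sketch of one reduction method in this spirit, undersampling via block averaging (the exact operators studied in the paper may differ): each disjoint k×k block of the original is replaced by its mean.

```python
def reduce_by_blocks(img, k):
    """Reduce a 2-D grid by averaging disjoint k x k blocks (a simple
    undersampling-style reduction; boundary rows/columns that do not fill
    a whole block are dropped)."""
    rows, cols = len(img) // k, len(img[0]) // k
    return [[sum(img[i * k + di][j * k + dj]
                 for di in range(k) for dj in range(k)) / (k * k)
             for j in range(cols)]
            for i in range(rows)]

img = [[1, 1, 3, 3],
       [1, 1, 3, 3],
       [5, 5, 7, 7],
       [5, 5, 7, 7]]
print(reduce_by_blocks(img, 2))  # [[1.0, 3.0], [5.0, 7.0]]
```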

Relevance: 30.00%

Abstract:

Belief merging is an important but difficult problem in Artificial Intelligence, especially when sources of information are pervaded with uncertainty. Many merging operators have been proposed to deal with this problem in possibilistic logic, a weighted logic which is powerful for handling inconsistency and dealing with uncertainty. They often result in a possibilistic knowledge base, which is a set of weighted formulas. Although possibilistic logic is inconsistency tolerant, it suffers from the well-known "drowning effect". Therefore, we may still want to obtain a consistent possibilistic knowledge base as the result of merging. In such a case, we argue that it is not always necessary to keep weighted information after merging. In this paper, we define a merging operator that maps a set of possibilistic knowledge bases and a formula representing the integrity constraints to a classical knowledge base by using lexicographic ordering. We show that it satisfies nine postulates that generalize the basic postulates for propositional merging given in [11]. These postulates capture the principle of minimal change in some sense. We then provide an algorithm for generating the resulting knowledge base of our merging operator. Finally, we discuss the compatibility of our merging operator with propositional merging and establish the advantage of our merging operator over existing semantic merging operators in the propositional case.
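As a rough sketch of the lexicographic idea only (a deliberate simplification; the paper's operator works on sets of possibilistic bases under integrity constraints), each candidate interpretation can be scored by the list of weights of the formulas it falsifies, sorted in decreasing order, and these profiles compared lexicographically, smaller being better:

```python
def violation_profile(world, base):
    """Weights of the weighted formulas in `base` (pairs of a predicate and a
    weight) that `world` falsifies, sorted decreasingly so profiles can be
    compared lexicographically."""
    return sorted((w for formula, w in base if not formula(world)), reverse=True)

def lex_preferred(worlds, base):
    """Interpretations minimal under the lexicographic order on profiles."""
    best = min(violation_profile(w, base) for w in worlds)
    return [w for w in worlds if violation_profile(w, base) == best]

# Hypothetical base: formula p with weight 0.9, formula q with weight 0.5.
base = [(lambda w: w["p"], 0.9), (lambda w: w["q"], 0.5)]
worlds = [{"p": True, "q": False}, {"p": False, "q": True}]
# Violating q (profile [0.5]) is lexicographically better than violating p
# (profile [0.9]), so the first world is preferred.
print(lex_preferred(worlds, base))
```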

Relevance: 30.00%

Abstract:

Grid operators and electricity retailers in Ireland manage peak demand, power system balancing and grid congestion by offering relevant incentives to consumers to reduce or shift their load. The need for active consumers in the home using smart appliances has never been greater, due to increased variable renewable generation and grid constraints. In this paper an aggregated model of a single compressor fridge-freezer population is developed. A price control strategy is examined to quantify and value demand response savings during a representative winter and summer week for Ireland in 2020. The results show an average reduction in fridge-freezer operating cost of 8.2% during winter and significantly lower during summer in Ireland. A peak reduction of at least 68% of the average winter refrigeration load is achieved consistently during the week analysed using a staggering control mode. An analysis of the current ancillary service payments confirms that these are insufficient to ensure widespread uptake by the small consumer, and new mechanisms need to be developed to make becoming an active consumer attractive. Demand response is proposed as a new ancillary service called ramping capability, as the need for this service will increase with more renewable energy penetration on the power system.

Relevance: 30.00%

Abstract:

The implementation of competitive electricity markets has changed the position of consumers and distributed generation in power systems operation. The use of distributed generation and the participation in demand response programs, namely in smart grids, bring several advantages for consumers, aggregators, and system operators. The present paper proposes a remuneration structure for aggregated distributed generation and demand response resources. A virtual power player aggregates all the resources. The resources are aggregated into a number of clusters, each corresponding to a distinct tariff group, according to the economic impact of the resulting remuneration tariff. The determined tariffs are intended to be used for several months, and the aggregator can define the periodicity of the tariff definition. The case study in this paper includes 218 consumers and 66 distributed generation units.

Relevance: 30.00%

Abstract:

This paper investigates the problem of obtaining the weights of ordered weighted averaging (OWA) operators from observations. The problem is formulated as restricted least squares and uniform approximation problems, and we take full advantage of its linearity. In the former case, the well-known technique of non-negative least squares is used. In the case of uniform approximation, we employ a recently developed cutting angle method of global optimisation. Both methods give results superior to earlier approaches and do not require complicated nonlinear constructions. Additional restrictions, such as the degree of orness of the operator, can easily be introduced.
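A tiny sketch of the least-squares idea (not the cutting angle method, and only the 2-argument case, where the constraint w1 + w2 = 1 reduces the fit to a single bounded scalar): an OWA operator applies its weights to the inputs sorted in decreasing order, and the weights are recovered from observed input/output pairs.

```python
def owa(x, w):
    """OWA operator: dot product of the weights with the inputs sorted
    in decreasing order."""
    return sum(wi * xi for wi, xi in zip(w, sorted(x, reverse=True)))

def fit_owa_weights_2d(samples):
    """Least-squares fit of 2-argument OWA weights (w1, w2) with w1 + w2 = 1.
    Substituting w2 = 1 - w1 gives a scalar problem w1*(max-min) = y - min,
    solved in closed form and clipped to [0, 1]."""
    num = den = 0.0
    for x, y in samples:
        hi, lo = max(x), min(x)
        num += (hi - lo) * (y - lo)
        den += (hi - lo) ** 2
    w1 = min(1.0, max(0.0, num / den)) if den else 0.5
    return [w1, 1.0 - w1]

# Observations generated by a hypothetical true weight vector [0.7, 0.3].
true_w = [0.7, 0.3]
samples = [((0, 1), owa((0, 1), true_w)), ((2, 4), owa((2, 4), true_w))]
print(fit_owa_weights_2d(samples))  # approximately [0.7, 0.3]
```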

Relevance: 30.00%

Abstract:

Image reduction is a crucial task in image processing, underpinning many practical applications. This work proposes novel image reduction operators based on non-monotonic averaging aggregation functions. The technique of penalty function minimisation is used to derive a novel mode-like estimator capable of identifying the most appropriate pixel value for representing a subset of the original image. Performance of this aggregation function and several traditional robust estimators of location are objectively assessed by applying image reduction within a facial recognition task. The FERET evaluation protocol is applied to confirm that these non-monotonic functions are able to sustain task performance compared to recognition using nonreduced images, as well as significantly improve performance on query images corrupted by noise. These results extend the state of the art in image reduction based on aggregation functions and provide a basis for efficiency and accuracy improvements in practical computer vision applications.
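A hedged sketch of a mode-like, penalty-based estimator (the truncation threshold and the restriction of candidates to the input values are my simplifications, not the paper's construction): because the penalty saturates, a single outlying pixel has bounded influence on the chosen representative.

```python
def penalty_mode_estimate(x, cap=0.25):
    """Return the input value minimising a sum of truncated squared penalties
    min((y - xi)^2, cap); the saturation at `cap` makes the estimator
    insensitive to outliers, giving mode-like behaviour."""
    def penalty(y):
        return sum(min((y - xi) ** 2, cap) for xi in x)
    return min(x, key=penalty)

# The outlier 5.0 pays the capped penalty no matter what, so the estimate
# stays with the cluster of values near 0.5.
print(penalty_mode_estimate([0.5, 0.52, 0.48, 5.0]))  # 0.5
```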

Relevance: 30.00%

Abstract:

In the case of real-valued inputs, averaging aggregation functions have been studied extensively with results arising in fields including probability and statistics, fuzzy decision-making, and various sciences. Although much of the behavior of aggregation functions when combining standard fuzzy membership values is well established, extensions to interval-valued fuzzy sets, hesitant fuzzy sets, and other new domains pose a number of difficulties. The aggregation of non-convex or discontinuous intervals is usually approached in line with the extension principle, i.e. by aggregating all real-valued input vectors lying within the interval boundaries and taking the union as the final output. Although this is consistent with the aggregation of convex interval inputs, in the non-convex case such operators are not idempotent and may result in outputs which do not faithfully summarize or represent the set of inputs. After giving an overview of the treatment of non-convex intervals and their associated interpretations, we propose a novel extension of the arithmetic mean based on penalty functions that provides a representative output and satisfies idempotency.
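A loose illustration of the penalty-based idea for set-valued inputs (a simplification of the proposal: here each non-convex input is a finite set of admissible values and the output is a single scalar): the representative minimises the summed squared distance to the nearest admissible value of each input, and when every input is the same set, that penalty is zero inside the set, so the output stays within it rather than averaging across the gap.

```python
def penalty_mean_nonconvex(sets):
    """Representative scalar for non-convex (finite-set-valued) inputs: the
    candidate minimising the summed squared distance to the nearest admissible
    value of each input set. Candidates are restricted to the union of the
    inputs' elements for simplicity."""
    candidates = sorted({s for S in sets for s in S})
    def penalty(y):
        return sum(min((y - s) ** 2 for s in S) for S in sets)
    return min(candidates, key=penalty)

print(penalty_mean_nonconvex([{0.0, 1.0}, {0.1, 0.9}, {0.05}]))  # 0.05
# Idempotent-like: identical set inputs yield a member of that set.
print(penalty_mean_nonconvex([{0.2, 0.8}, {0.2, 0.8}, {0.2, 0.8}]))
```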

Relevance: 30.00%

Abstract:

In a group decision making setting, we consider the potential impact an expert can have on the overall ranking by providing a biased assessment of the alternatives that differs substantially from the majority opinion. In the framework of similarity based averaging functions, we show that some alternative approaches to weighting the experts' inputs during the aggregation process can minimize the influence the biased expert is able to exert.
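One simple instance of a similarity-based weighting scheme (the Gaussian kernel and its scale are illustrative assumptions): each expert's input is weighted by its average similarity to the other inputs, so an assessment far from the majority opinion receives little weight and the biased expert's influence is damped.

```python
import math

def similarity_weighted_mean(x, beta=4.0):
    """Weight each input by its mean similarity exp(-beta*(xi - xj)^2) to the
    other inputs; inputs far from the majority get near-zero weight."""
    n = len(x)
    w = [sum(math.exp(-beta * (xi - xj) ** 2)
             for j, xj in enumerate(x) if j != i) / (n - 1)
         for i, xi in enumerate(x)]
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

# Three experts agree on 0.5; one biased expert reports 5.0.
scores = [0.5, 0.5, 0.5, 5.0]
print(sum(scores) / len(scores))           # plain mean: 1.625, pulled by the bias
print(similarity_weighted_mean(scores))    # close to the majority value 0.5
```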