132 results for Lipschitz aggregation operators


Relevance: 20.00%

Publisher:

Abstract:

This chapter gives an overview of aggregation functions with a view to their use in recommender systems. Simple aggregation functions such as the arithmetic mean are often employed to aggregate user features, item ratings, measures of similarity, and so on; however, many other aggregation functions exist that could deliver increased accuracy and flexibility to many systems. We provide definitions of some important families and properties, sophisticated methods of construction, and various examples of aggregation functions in the domain of recommender systems.
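As a rough illustration of the simple aggregation functions mentioned in the abstract (not taken from the chapter; the ratings and weights below are hypothetical), the following Python sketch applies the arithmetic mean, a weighted mean, and an ordered weighted average (OWA) to a vector of item ratings:

```python
# Minimal sketch: three common aggregation functions applied to a
# hypothetical vector of item ratings on a [0, 1] scale.

def arithmetic_mean(x):
    """Unweighted average of the inputs."""
    return sum(x) / len(x)

def weighted_mean(x, w):
    """Weighted average; w is assumed non-negative and summing to 1."""
    return sum(wi * xi for wi, xi in zip(w, x))

def owa(x, w):
    """Ordered weighted averaging: weights attach to sorted positions, not to sources."""
    return sum(wi * xi for wi, xi in zip(w, sorted(x, reverse=True)))

ratings = [0.8, 0.6, 0.9, 0.4]          # hypothetical ratings of one item by four users
weights = [0.4, 0.3, 0.2, 0.1]          # hypothetical importance / position weights

print(arithmetic_mean(ratings))          # 0.675
print(weighted_mean(ratings, weights))   # 0.72
print(owa(ratings, weights))             # 0.76: decreasing OWA weights emphasize the higher ratings
```

The OWA operator is one of the families such overviews typically cover: its weights act on the ranked positions of the inputs rather than on particular users, which is part of the extra flexibility the abstract alludes to.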

Relevance: 20.00%

Publisher:

Abstract:

Predicting protein functions computationally from the massive protein–protein interaction (PPI) data generated by high-throughput technology is one of the fundamental challenges of the post-genomic era. Although many approaches have been developed for computationally predicting protein functions, the mutual correlations among proteins in terms of their functions have not been thoroughly investigated or incorporated into existing prediction methods, especially voting-based ones. In this paper, we propose an innovative method for predicting protein functions from PPI data by aggregating the functional correlations among relevant proteins using the Choquet integral from fuzzy theory. This functional aggregation measures the real impact of each relevant protein function on the final prediction results and reduces the impact of repeated functional information on the prediction. Accordingly, a new protein similarity measure and a new iterative prediction algorithm are proposed in this paper. Experimental evaluations on real PPI datasets demonstrate the effectiveness of our method.
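The discrete Choquet integral at the core of such an aggregation can be sketched as follows. This is not the paper's algorithm: the scores, the fuzzy measure, and the proteins p1–p3 are hypothetical, and the sub-additive measure on {p1, p2} merely illustrates how redundant (repeated) functional information gets discounted:

```python
# Minimal sketch: a discrete Choquet integral aggregating hypothetical
# functional-relevance scores with respect to a fuzzy measure mu defined
# on subsets of the criteria (here, neighbouring proteins).

def choquet(x, mu):
    """x: dict criterion -> score in [0, 1]; mu: dict frozenset -> measure in [0, 1]."""
    order = sorted(x, key=x.get)          # criteria by increasing score
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        upper = frozenset(order[i:])      # criteria whose score is >= x[c]
        total += (x[c] - prev) * mu[upper]
        prev = x[c]
    return total

# Hypothetical scores of three neighbouring proteins for one candidate function.
scores = {"p1": 0.9, "p2": 0.6, "p3": 0.3}
# Hypothetical fuzzy measure; sub-additive on {p1, p2} to model redundant information.
mu = {
    frozenset(["p1", "p2", "p3"]): 1.0,
    frozenset(["p1", "p2"]): 0.7,
    frozenset(["p1", "p3"]): 0.8,
    frozenset(["p2", "p3"]): 0.6,
    frozenset(["p1"]): 0.5,
    frozenset(["p2"]): 0.4,
    frozenset(["p3"]): 0.3,
}
print(choquet(scores, mu))   # 0.66 with these hypothetical values
```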

Relevance: 20.00%

Publisher:

Abstract:

We present the concept of a strong equality index, starting from the definition of strong inclusion given by Dubois and Prade in 1980. We also present a construction method based on the use of implication operators and two specific properties of those implications.
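One common implication-based construction along these lines (not necessarily the index defined in the paper) takes the degree of inclusion of A in B as the infimum of I(A(x), B(x)) over the universe and symmetrizes it. A minimal Python sketch with the Lukasiewicz implication, on a hypothetical finite universe:

```python
# Minimal sketch of an implication-based equality index, assuming the
# classical inclusion-degree construction; sets A and B are hypothetical.

def lukasiewicz_implication(a, b):
    """I(a, b) = min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def inclusion(A, B, I=lukasiewicz_implication):
    """Degree to which fuzzy set A (list of memberships) is included in B."""
    return min(I(a, b) for a, b in zip(A, B))

def equality_index(A, B, I=lukasiewicz_implication):
    """Symmetrized index built from the two inclusion degrees."""
    return min(inclusion(A, B, I), inclusion(B, A, I))

# Hypothetical fuzzy sets on a four-element universe.
A = [0.2, 0.7, 1.0, 0.5]
B = [0.3, 0.6, 0.9, 0.5]
print(equality_index(A, B))   # 0.9 with the Lukasiewicz implication
```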

Relevance: 20.00%

Publisher:

Abstract:

This paper examines and analyzes different aggregation algorithms for improving the accuracy of forecasts obtained with neural network (NN) ensembles. These algorithms include the equal-weights combination of the best NN models, the combination of trimmed forecasts, and Bayesian model averaging (BMA). The predictive performance of these algorithms is evaluated using Australian electricity demand data, and the outputs of the NN-ensemble aggregation algorithms are compared with a naive approach. The mean absolute percentage error is applied as the performance index for assessing the quality of the aggregated forecasts. Comprehensive simulations show that the aggregation algorithms can significantly improve forecasting accuracy, with the BMA algorithm demonstrating the best performance among the aggregation algorithms investigated in this study.
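A minimal sketch of the three combination schemes and the MAPE criterion, with a small hypothetical ensemble standing in for the Australian demand data (the BMA weights are assumed given rather than estimated):

```python
import numpy as np

# Minimal sketch: combining the forecasts of an NN ensemble in three ways and
# scoring each combination with the mean absolute percentage error (MAPE).
# All numbers are hypothetical.

def mape(actual, forecast):
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def equal_weights(forecasts):
    """Simple average across ensemble members (rows = members, columns = time)."""
    return forecasts.mean(axis=0)

def trimmed_mean(forecasts, trim=1):
    """Drop the `trim` highest and lowest forecasts at each time step, then average."""
    s = np.sort(forecasts, axis=0)
    return s[trim:-trim].mean(axis=0)

def bma(forecasts, weights):
    """BMA-style combination; the weights would normally be posterior model probabilities."""
    return np.average(forecasts, axis=0, weights=weights)

actual = np.array([100.0, 110.0, 105.0])
ensemble = np.array([
    [ 98.0, 112.0, 103.0],
    [102.0, 108.0, 107.0],
    [ 95.0, 115.0, 100.0],
    [105.0, 109.0, 106.0],
])
posterior = [0.4, 0.3, 0.1, 0.2]   # hypothetical BMA weights

for name, combined in [("equal", equal_weights(ensemble)),
                       ("trimmed", trimmed_mean(ensemble)),
                       ("BMA", bma(ensemble, posterior))]:
    print(name, round(mape(actual, combined), 2))
```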

Relevance: 20.00%

Publisher:

Abstract:

An important task in multiple-criteria decision making is how to learn the weights and parameters of an aggregation function from empirical data. We consider this in the context of quantifying ecological diversity, where such data is to be obtained as a set of pairwise comparisons specifying that one community should be considered more diverse than another. A problem that arises is how to collect a sufficient amount of data for reliable model determination without overloading individuals with the number of comparisons they need to make. After providing an algorithm for determining criteria weights and an overall ranking from such information, we then investigate the improvement in accuracy if ranked 3-tuples are supplied instead of pairs. We found that aggregation models could be determined accurately from significantly fewer 3-tuple comparisons than pairs. © 2013 IEEE.
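As a rough illustration of learning weights from such comparisons (not the paper's algorithm; the communities, criterion scores, and preference pairs below are hypothetical), one can fit the weights of a weighted arithmetic mean with a linear program that minimizes the total violation of the preference inequalities:

```python
import numpy as np
from scipy.optimize import linprog

# Minimal sketch: fit weighted-mean weights from pairwise comparisons
# "community a is more diverse than community b" by minimizing slack.

def fit_weights(X, pairs, margin=0.01):
    """X: (n_items, n_criteria) scores; pairs: list of (a, b) with a preferred to b."""
    n_items, n_crit = X.shape
    m = len(pairs)
    # Decision variables: n_crit weights followed by m slack variables.
    c = np.concatenate([np.zeros(n_crit), np.ones(m)])
    A_ub = np.zeros((m, n_crit + m))
    b_ub = np.full(m, -margin)
    for k, (a, b) in enumerate(pairs):
        A_ub[k, :n_crit] = -(X[a] - X[b])   # enforce w.(x_a - x_b) + s_k >= margin
        A_ub[k, n_crit + k] = -1.0
    A_eq = np.concatenate([np.ones(n_crit), np.zeros(m)]).reshape(1, -1)  # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_crit + m), method="highs")
    return res.x[:n_crit]

# Four hypothetical communities scored on three diversity criteria.
X = np.array([[0.9, 0.2, 0.4],
              [0.5, 0.6, 0.5],
              [0.3, 0.8, 0.7],
              [0.4, 0.3, 0.2]])
pairs = [(0, 3), (2, 1), (1, 3)]   # e.g. community 0 judged more diverse than community 3
print(fit_weights(X, pairs))
```

A ranked 3-tuple (a ≻ b ≻ c) can be fed to the same fitting routine as the pairwise constraints (a, b) and (b, c), which is one way to see why 3-tuples carry more information per elicitation than single pairs.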

Relevance: 20.00%

Publisher:

Abstract:

This paper presents the results of a numerical exercise in which a numerical operator is replaced by an artificial neural network (ANN). The numerical operator used is the explicit form of the finite-difference (FD) scheme, which was used to discretize the one-dimensional transport equation, including both the advection and dispersion terms. The inputs to the ANN are the FD representation of the transport equation, and the concentration is designated as the output. The concentration values used for training the ANN were obtained from analytical solutions, and the numerical operator was reconstructed by back-calculating the weights of the ANN using linear transfer functions. The ANN was able to accurately recover the velocity used in the training data, but not the dispersion coefficient. This capability improved when numerical dispersion was taken into account; however, it is limited to the condition C/P < 0.5, where C is the Courant number and P the Peclet number (i.e., the restriction imposed by the von Neumann stability condition).
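For reference, a minimal sketch (not the paper's ANN set-up) of the explicit FD operator for the one-dimensional advection-dispersion equation, together with the Courant and Peclet numbers and the C/P < 0.5 check; the grid and transport parameters are hypothetical:

```python
import numpy as np

# Explicit finite-difference step for dC/dt = D d2C/dx2 - v dC/dx,
# with the stability check C/P < 0.5 (equivalently D*dt/dx**2 < 0.5)
# quoted in the abstract. All parameter values are hypothetical.

def explicit_fd_step(C, v, D, dx, dt):
    """Advance the concentration profile C one time step (interior nodes only)."""
    Cn = C.copy()
    adv = -v * (Cn[2:] - Cn[:-2]) / (2.0 * dx)                 # central-difference advection
    disp = D * (Cn[2:] - 2.0 * Cn[1:-1] + Cn[:-2]) / dx**2     # dispersion term
    C[1:-1] = Cn[1:-1] + dt * (adv + disp)
    return C

v, D, dx, dt = 1.0, 0.05, 0.1, 0.05      # hypothetical velocity, dispersion, spacing, time step
courant = v * dt / dx
peclet = v * dx / D
print("Courant:", courant, "Peclet:", peclet, "C/P:", courant / peclet)
assert courant / peclet < 0.5, "explicit scheme outside the stability region"

C = np.zeros(101)
C[0] = 1.0                               # hypothetical constant-concentration inlet boundary
for _ in range(200):
    C = explicit_fd_step(C, v, D, dx, dt)
```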