63 results for Linear programming models

in Deakin Research Online - Australia


Relevance:

100.00%

Abstract:

Since asset returns are recognized as not being normally distributed, a line of research on portfolio higher moments soon emerged. To account for the uncertainty and vagueness of portfolio returns as well as of higher-moment risks, this paper proposes a new portfolio selection model employing fuzzy sets. A fuzzy multi-objective linear programming (MOLP) problem for portfolio optimization is formulated using the marginal impacts of assets on portfolio higher moments, which are modelled by trapezoidal fuzzy numbers. Through a consistent centroid-based ranking of fuzzy numbers, the fuzzy MOLP is transformed into an MOLP that is then solved by the maximin method. By taking portfolio higher moments into account, the approach enables investors to optimize not only the normal risk (variance) but also the asymmetric risk (skewness) and the risk of fat tails (kurtosis). An illustrative example demonstrates the efficiency of the proposed methodology compared with previous portfolio optimization models.
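
The centroid-ranking step can be made concrete. The sketch below is a minimal illustration, not the paper's exact ranking index: it computes the centroid abscissa of a trapezoidal fuzzy number (a, b, c, d) analytically and ranks invented fuzzy returns by it.

```python
# Minimal sketch: centroid-based ranking of trapezoidal fuzzy numbers.
# A trapezoidal fuzzy number (a, b, c, d) rises on [a, b], is 1 on [b, c]
# and falls on [c, d]. The centroid abscissa is integral(x*mu)/integral(mu).

def centroid(a, b, c, d):
    area = (c + d - a - b) / 2.0                      # integral of mu
    moment = ((b - a) * (a + 2 * b) / 6.0             # rising ramp
              + (c * c - b * b) / 2.0                 # flat top
              + (d - c) * (2 * c + d) / 6.0)          # falling ramp
    return moment / area

# Hypothetical fuzzy returns of three assets (illustrative numbers only).
fuzzy_returns = {
    "asset A": (0.01, 0.03, 0.05, 0.08),
    "asset B": (0.00, 0.04, 0.04, 0.07),
    "asset C": (0.02, 0.03, 0.06, 0.06),
}

# Rank by centroid: a larger centroid means a "larger" fuzzy number.
for name, tfn in sorted(fuzzy_returns.items(),
                        key=lambda kv: centroid(*kv[1]), reverse=True):
    print(f"{name}: centroid = {centroid(*tfn):.4f}")
```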

Relevance:

100.00%

Abstract:

Determining the causal structure of a domain is frequently a key task in the area of Data Mining and Knowledge Discovery. This paper introduces ensemble learning into linear causal model discovery and then examines several algorithms based on different ensemble strategies, including Bagging, AdaBoost and GASEN. Experimental results show that (1) an ensemble discovery algorithm can achieve better accuracy than an individual causal discovery algorithm; (2) among the examined ensemble discovery algorithms, the BWV algorithm, which uses a simple Bagging strategy, performs excellently compared with more sophisticated ensemble strategies; (3) ensemble methods can also improve the stability of parameter estimation. In addition, the ensemble discovery algorithm is amenable to parallel and distributed processing, which is important for data mining in large data sets.
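
The Bagging-with-voting idea behind the BWV variant can be sketched generically: resample the data, run a base structure learner on each replicate, and keep the edges that win a majority vote. The `base_learner` below is a crude stand-in, not the causal discovery algorithm used in the paper, and the data are synthetic.

```python
# Sketch of Bagging-with-voting for structure discovery (illustrative only).
import numpy as np

def bagged_edges(data, base_learner, n_rounds=25, threshold=0.5, rng=None):
    """Resample rows with replacement, learn a structure on each replicate,
    and keep the directed edges appearing in more than `threshold` of them."""
    rng = np.random.default_rng(rng)
    n = len(data)
    votes = {}
    for _ in range(n_rounds):
        sample = data[rng.integers(0, n, size=n)]   # bootstrap replicate
        for edge in base_learner(sample):           # set of (i, j) edges
            votes[edge] = votes.get(edge, 0) + 1
    return {e for e, v in votes.items() if v / n_rounds > threshold}

# Crude stand-in learner: edge (i, j), i < j, when |correlation| > 0.5.
def toy_learner(sample):
    corr = np.corrcoef(sample.T)
    n = corr.shape[0]
    return {(i, j) for i in range(n) for j in range(n)
            if i < j and abs(corr[i, j]) > 0.5}

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 3))
z[:, 1] += z[:, 0]                      # induce dependence between 0 and 1
print(bagged_edges(z, toy_learner, rng=1))   # typically {(0, 1)}
```

Voting over bootstrap replicates suppresses edges that a single run fits to noise, which is one intuition behind the reported accuracy and stability gains.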

Relevance:

100.00%

Abstract:

We discuss the problem of learning fuzzy measures from empirical data. Values of the discrete Choquet integral are fitted to the data in the least-absolute-deviation sense, and the resulting problem is solved by linear programming techniques. We consider the cases in which the data are given on numerical and interval scales. An open-source programming library that facilitates calculations involving fuzzy measures and their learning from data is also presented.
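
For readers unfamiliar with the quantity being fitted, here is a minimal computation of a discrete Choquet integral. The fuzzy measure is a hand-made example, not one learned from data, and the paper's library will differ.

```python
# Minimal sketch: evaluating a discrete Choquet integral with respect to a
# fuzzy measure mu defined on subsets of criteria (given here by frozensets).
def choquet(x, mu):
    """x: tuple of criterion values; mu: dict frozenset -> measure value,
    with mu(empty) = 0 and mu(full set) = 1."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # indices by ascending value
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])           # criteria with value >= x[i]
        total += (x[i] - prev) * mu[coalition]
        prev = x[i]
    return total

# Toy 2-criterion fuzzy measure (illustrative values only).
mu = {frozenset(): 0.0, frozenset({0}): 0.3,
      frozenset({1}): 0.6, frozenset({0, 1}): 1.0}
print(choquet((0.8, 0.4), mu))   # 0.4*1.0 + (0.8-0.4)*0.3 = 0.52
```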

Relevance:

100.00%

Abstract:

This article examines the construction of aggregation functions from data by minimizing the least-absolute-deviation criterion. We formulate various instances of this problem as linear programming problems. We consider the cases in which the data are provided as intervals and the ordering of the outputs needs to be preserved, and show that the linear programming formulation remains valid in such cases. This feature is very valuable in practice, since the standard simplex method can be used.
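
The least-absolute-deviation-to-LP reformulation is standard and worth seeing once: split each residual into nonnegative parts and minimize their sum. The sketch below fits a weighted arithmetic mean (one of the simplest aggregation functions) with scipy; the data are invented.

```python
# Sketch: LAD fitting of a weighted mean as a linear program.
# Minimize sum(r+ + r-) s.t. w.x_i + r+_i - r-_i = y_i, sum(w) = 1, w, r >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[0.2, 0.9], [0.5, 0.5], [0.8, 0.1], [0.6, 0.7]])  # toy inputs
y = np.array([0.65, 0.50, 0.35, 0.66])                          # toy outputs
m, n = X.shape

c = np.concatenate([np.zeros(n), np.ones(2 * m)])    # cost on residuals only
A_eq = np.block([[X, np.eye(m), -np.eye(m)],         # fit constraints
                 [np.ones((1, n)), np.zeros((1, 2 * m))]])  # weights sum to 1
b_eq = np.concatenate([y, [1.0]])

res = linprog(c, A_eq=A_eq, b_eq=b_eq)               # all variables >= 0
w = res.x[:n]
print("weights:", w, " total abs. deviation:", res.fun)
```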

Relevance:

100.00%

Abstract:

In this paper, the zero-order Sugeno Fuzzy Inference System (FIS) that preserves the monotonicity property is studied. The sufficient conditions for a zero-order Sugeno FIS model to satisfy the monotonicity property are exploited as a set of useful governing equations that facilitate the FIS modelling process. The sufficient conditions suggest a fuzzy partition (at the rule antecedent part) and a monotonically-ordered rule base (at the rule consequent part) that together preserve the monotonicity property. The investigation focuses on the use of two Similarity Reasoning (SR)-based methods, i.e., Analogical Reasoning (AR) and Fuzzy Rule Interpolation (FRI), to deduce each conclusion separately. It is shown that AR and FRI may not offer a direct solution to the modelling of a multi-input FIS model that fulfils the monotonicity property, owing to the difficulty of obtaining a set of monotonically-ordered conclusions. A Non-Linear Programming (NLP)-based SR scheme for constructing a monotonicity-preserving multi-input FIS model is therefore proposed. In the proposed scheme, AR or FRI is first used to predict the rule conclusion for each observation. A search algorithm is then adopted to look for a set of consequents that minimizes the root mean square error with respect to the predicted conclusions, with a constraint imposed by the sufficient conditions included in the search. The applicability of the proposed scheme to fuzzy Failure Mode and Effect Analysis (FMEA) tasks is demonstrated. The results indicate that the proposed NLP-based SR scheme is useful for preserving the monotonicity property when building a multi-input FIS model with an incomplete rule base.
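
Under a simplified single-ordering reading, finding consequents closest (in the squared-error sense) to the predicted conclusions subject to a monotone ordering is an isotonic regression; the paper's actual search algorithm and constraints may differ. A pool-adjacent-violators sketch on invented predictions:

```python
# Sketch: project predicted rule consequents onto a monotonically
# non-decreasing sequence, minimizing squared error (pool adjacent violators).
def isotonic(values):
    # Each block keeps [sum, count]; merge backwards while order is violated.
    blocks = []
    for v in values:
        blocks.append([v, 1])
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

# Predicted conclusions from AR/FRI (illustrative numbers); the projection
# enforces the monotonically-ordered rule base required by the conditions.
print(isotonic([0.2, 0.5, 0.4, 0.7, 0.6]))
# -> [0.2, 0.45, 0.45, 0.65, 0.65] (up to float rounding)
```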

Relevance:

100.00%

Abstract:

The generalized Bonferroni mean is able to capture interaction effects between variables and to model mandatory requirements. We present a number of weight-identification algorithms, developed in the R programming language, for fitting data with the generalized Bonferroni mean subject to various preferences. We then compare its accuracy when fitted to the journal ranks dataset.
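
The paper's algorithms are implemented in R; as a language-neutral reference, here is a small Python sketch of the classical (unweighted) Bonferroni mean, the base object that the generalized form extends with weights and inner aggregation functions.

```python
# Sketch: the classical Bonferroni mean B^{p,q}(x) =
# ( (1 / (n(n-1))) * sum_{i != j} x_i^p * x_j^q )^(1 / (p+q)).
def bonferroni_mean(x, p=1.0, q=1.0):
    n = len(x)
    s = sum(x[i] ** p * x[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))

# With p = q = 1, a zero in any input drags the value well below the other
# inputs, which is how mandatory requirements can be modelled.
print(bonferroni_mean([0.9, 0.8, 0.0]))   # low despite two high inputs
print(bonferroni_mean([0.9, 0.8, 0.7]))
```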

Relevance:

100.00%

Abstract:

Solving fuzzy linear programming (FLP) problems requires a consistent ranking of fuzzy numbers; an ineffective ranking would lead to a flawed and erroneous solution approach. This paper presents a comprehensive and extensive review of fuzzy number ranking methods. Ranking techniques are categorised into six classes based on their characteristics: centroid methods, distance methods, area methods, lexicographical methods, methods based on the decision maker's viewpoint, and methods based on left and right spreads. A survey of solution approaches to FLP is also reported. We then point out errors in several existing methods relevant to the ranking of fuzzy numbers and suggest an effective method for solving FLP. Specifically, FLP problems are converted into non-fuzzy single- (or multiple-) objective linear programming problems via a consistent centroid-based ranking of fuzzy numbers; solutions are then obtained by solving the corresponding crisp problems with conventional methods.
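
The conversion step can be sketched in miniature: replace every trapezoidal coefficient by its centroid and solve the resulting crisp LP. This is a deliberately simplified illustration with invented coefficients; the paper's transformation also handles fuzzy ranking within constraints.

```python
# Sketch: defuzzify a fuzzy LP by replacing each trapezoidal coefficient
# (a, b, c, d) with its centroid, then solve the crisp LP (invented data).
from scipy.optimize import linprog

def centroid(a, b, c, d):
    area = (c + d - a - b) / 2.0
    moment = ((b - a) * (a + 2 * b) / 6.0 + (c * c - b * b) / 2.0
              + (d - c) * (2 * c + d) / 6.0)
    return moment / area

# maximize  ~3 x1 + ~5 x2   s.t.  ~2 x1 + ~4 x2 <= 10,  x >= 0
fuzzy_obj = [(2, 3, 3, 4), (4, 5, 5, 6)]
fuzzy_row = [(1, 2, 2, 3), (3, 4, 4, 5)]

c = [-centroid(*t) for t in fuzzy_obj]        # linprog minimizes
A_ub = [[centroid(*t) for t in fuzzy_row]]
res = linprog(c, A_ub=A_ub, b_ub=[10])
print("x =", res.x, " objective =", -res.fun)
```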

Relevance:

100.00%

Abstract:

Industrial producers face the task of optimizing their production processes to achieve the desired quality, such as target mechanical properties, with the lowest energy consumption. In industrial carbon fiber production, the fibers are processed in bundles (batches) containing several thousand filaments, so energy optimization is a stochastic process involving uncertainty, imprecision and randomness. This paper presents a stochastic optimization model to reduce energy consumption for a given range of desired mechanical properties. Several processing-condition sets are developed, and for each set 50 fiber samples are analyzed for tensile strength and modulus. The energy consumed during production of the samples is carefully monitored on the processing equipment. Five standard distribution functions are then examined to determine which best describes the distribution of the filaments' mechanical properties, with the Kolmogorov-Smirnov test used to verify goodness of fit and correlation statistics. To estimate the parameters of the selected distribution (Weibull), the maximum likelihood, least squares and genetic algorithm methods are compared. An array of factors, including the sample size, the confidence level, and the relative error of the estimated parameters, is used to evaluate the tensile strength and modulus properties. The energy consumption and N2 gas cost are modeled by the convex hull method. Finally, mixed-integer linear programming is used to optimize carbon fiber production quality, energy consumption and total cost. The results show that the stochastic optimization models can predict production quality within a given range and minimize the energy consumption of the industrial process.
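
The distribution-selection step can be sketched with standard tools: fit a Weibull by maximum likelihood and check the fit with the Kolmogorov-Smirnov test. This uses scipy's generic fitters on synthetic data, not the paper's measurements or its least-squares and genetic-algorithm estimators.

```python
# Sketch: Weibull MLE fit and Kolmogorov-Smirnov goodness-of-fit check
# on synthetic "tensile strength" data (the paper uses measured samples).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
strength = stats.weibull_min.rvs(c=7.0, scale=4.2, size=50, random_state=rng)

# Maximum likelihood estimate; location fixed at 0, as is usual for strength data.
shape, loc, scale = stats.weibull_min.fit(strength, floc=0)

# K-S test against the fitted distribution (note: fitting on the same data
# makes the test optimistic; the paper's protocol may correct for this).
ks = stats.kstest(strength, "weibull_min", args=(shape, loc, scale))
print(f"shape={shape:.2f} scale={scale:.2f} KS p-value={ks.pvalue:.3f}")
```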

Relevance:

100.00%

Abstract:

The Kidney Exchange Problem (KEP) is a combinatorial optimization problem that has attracted attention from the integer programming/combinatorial optimisation community in the past few years. Defined on a directed graph, the KEP has two variations: one concerns cycles only, and the other cycles as well as chains on the same graph. We call the former the Cardinality Constrained Multi-cycle Problem (CCMcP) and the latter the Cardinality Constrained Cycles and Chains Problem (CCCCP). The cardinality of cycles is restricted in both; for chains, some studies in the literature consider cardinality restrictions, whereas others do not. The CCMcP can be viewed as an Asymmetric Travelling Salesman Problem in which subtours are allowed but constrained in cardinality, and in which it is not necessary to visit all vertices. In the existing KEP literature, the cardinality constraint for cycles is usually small (to the best of our knowledge, no more than six). In the CCCCP, each vertex of the directed graph can be included in at most one cycle or chain, but not both. The CCMcP and the CCCCP are interesting and challenging combinatorial optimization problems in their own right, particularly because of their similarities to the travelling salesman and vehicle routing families of problems. In this paper, our main focus is to review the existing mathematical programming models and solution methods in the literature, analyse the performance of these models, and identify future research directions. Further, we propose a polynomial-sized and an exponential-sized mixed-integer linear programming model, discuss a number of stronger constraints for cardinality-infeasible-cycle elimination in the latter, and present some preliminary numerical results.
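
Exponential-sized KEP models are commonly cycle formulations: one binary variable per feasible cycle of length at most k, with vertex-disjointness constraints. The following is a small self-contained sketch in that spirit, not the paper's exact model, using scipy's MILP interface (SciPy >= 1.9) on an invented toy pool.

```python
# Sketch of a cycle formulation for the CCMcP: enumerate all directed cycles
# of length <= k, then pick a vertex-disjoint set covering the most vertices.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def cycles_up_to(arcs, n, k):
    """All directed cycles of length <= k, each rooted at its smallest vertex."""
    adj = {u: [v for (a, v) in arcs if a == u] for u in range(n)}
    found = []
    def dfs(root, node, path):
        for nxt in adj[node]:
            if nxt == root and len(path) >= 2:
                found.append(tuple(path))
            elif nxt > root and nxt not in path and len(path) < k:
                dfs(root, nxt, path + [nxt])
    for r in range(n):
        dfs(r, r, [r])
    return found

arcs = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 1), (3, 4), (4, 3)]  # toy pool
cycles = cycles_up_to(arcs, n=5, k=3)

# Maximize vertices covered; each vertex lies in at most one chosen cycle.
A = np.zeros((5, len(cycles)))
for j, cyc in enumerate(cycles):
    for v in cyc:
        A[v, j] = 1
res = milp(c=-A.sum(axis=0),                       # milp minimizes
           constraints=LinearConstraint(A, ub=np.ones(5)),
           integrality=np.ones(len(cycles)), bounds=Bounds(0, 1))
print([cycles[j] for j in range(len(cycles)) if res.x[j] > 0.5])
# expected: [(0, 1), (3, 4)] for this toy instance
```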

Relevance:

90.00%

Abstract:

Efficiently inducing precise causal models that accurately reflect given data sets is the ultimate goal of causal discovery. The algorithms proposed by Dai et al. have demonstrated the ability of the Minimum Message Length (MML) principle to discover Linear Causal Models from training data. To further improve efficiency, this paper incorporates Hoeffding bounds into the learning process: at each step of causal discovery, if a small number of data items suffices to distinguish the better model from the rest, the remaining data items are ignored and the computation cost is reduced. Experiments with data sets from related benchmark models indicate that the new algorithm achieves a speedup over previous work in learning efficiency while preserving discovery accuracy.
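
The Hoeffding-bound idea is easy to state concretely: after n observations of a per-item score difference bounded in a range of width R, the observed mean lies within eps of the true mean with probability 1 - delta. A sketch of the resulting early-stopping test follows; the paper's scoring function is MML-based, but the check itself is generic and the numbers below are invented.

```python
# Sketch: Hoeffding-bound early stopping when comparing two candidate models.
import math

def hoeffding_eps(R, n, delta):
    """Half-width of the (1 - delta) confidence interval after n samples."""
    return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))

def enough_data(mean_diff, R, n, delta=0.05):
    """True if n items already suffice to declare one model better."""
    return abs(mean_diff) > hoeffding_eps(R, n, delta)

# e.g. a mean score difference of 0.3 with per-item range 2.0:
for n in (10, 50, 200):
    print(n, enough_data(0.3, R=2.0, n=n))   # stop scanning once True
```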

Relevance:

90.00%

Abstract:

This paper proposes two integer programming models and their GA-based solutions for optimal concept learning. The models are built to obtain the optimal concept description, in the form of propositional logic formulas, from examples based on completeness, consistency and simplicity. The simplicity of the propositional rules is the objective function of the integer programming models, while the completeness and consistency of the concept are the constraints. Because real-world data sets contain a certain level of noise, the constraints in model II are relaxed by adding slack variables. To solve the integer programming models, a genetic algorithm is employed to search the global solution space. We call our approach IP-AE. Its effectiveness is verified by comparing the experimental results with those of other well-known concept learning algorithms: AQ15 and C4.5.
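
A toy version of the GA search can be sketched: chromosomes select a subset of candidate rules, simplicity (few rules) is the objective, and completeness/consistency violations are penalized, mimicking the slack-variable relaxation. Everything below (rules, coverage sets, penalty weight) is invented for illustration.

```python
# Toy GA sketch for rule-subset selection: minimize the number of rules
# (simplicity) while penalizing uncovered positives and covered negatives.
import random

POS = [{0}, {1}, {0, 2}]        # rule indices covering each positive example
NEG = [{2}]                     # rule indices covering each negative example
N_RULES, PENALTY = 3, 10.0

def cost(bits):
    chosen = {i for i, b in enumerate(bits) if b}
    uncovered = sum(1 for cov in POS if not cov & chosen)   # completeness
    covered_neg = sum(1 for cov in NEG if cov & chosen)     # consistency
    return sum(bits) + PENALTY * (uncovered + covered_neg)

def evolve(pop_size=20, generations=60, rng=random.Random(1)):
    pop = [[rng.randint(0, 1) for _ in range(N_RULES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]                 # truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child[rng.randrange(N_RULES)] ^= 1           # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

print(evolve())   # expect [1, 1, 0]: rules {0, 1} complete, consistent, simplest
```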

Relevance:

90.00%

Abstract:

Determining the causal structure of a domain is a key task in the area of Data Mining and Knowledge Discovery. The algorithm proposed by Wallace et al. [15] has demonstrated a strong ability to discover Linear Causal Models from given data sets. However, experiments showed that the algorithm has difficulty discovering linear relations with small deviations, and that it occasionally returns a negative message length, which should not be possible. In this paper, a more efficient and precise MML encoding scheme is proposed to describe the model structure and the nodes of a Linear Causal Model, and the estimation of the different parameters is derived. Empirical results show that the new algorithm outperforms the previous MML-based algorithm in both speed and precision.
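
The two-part message-length idea behind such comparisons can be sketched generically: the total length is the cost of stating the model plus the cost of stating the data given the model, and a negative length signals a broken encoding. The code below is a simplified stand-in with made-up parameter costs, not the paper's encoding scheme.

```python
# Sketch: two-part message length for a linear model y ~ a*x + b with
# Gaussian residuals (a generic illustration, not the paper's encoding).
import math

def message_length(x, y, a, b, sigma, bits_per_param=10.0):
    model_cost = 3 * bits_per_param            # state a, b, sigma (crudely)
    nll = 0.0                                  # -log2 P(data | model)
    for xi, yi in zip(x, y):
        r = yi - (a * xi + b)
        nll += (0.5 * math.log2(2 * math.pi * sigma ** 2)
                + (r * r) / (2 * sigma ** 2) / math.log(2))
    # NB: continuous data also need a measurement-precision term; dropping
    # it is exactly the kind of shortcut that can yield negative "lengths".
    return model_cost + nll                    # total bits

x = [0.0, 1.0, 2.0, 3.0]
y = [0.1, 1.1, 1.9, 3.2]
print(message_length(x, y, a=1.0, b=0.0, sigma=0.2))
```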

Relevance:

90.00%

Abstract:

One common drawback of algorithms for learning Linear Causal Models is that they cannot deal with incomplete data sets. This is unfortunate, since many real problems involve missing data or even hidden variables. In this paper, we propose a three-step process based on multiple imputation to learn linear causal models from incomplete data sets. Experimental results indicate that this algorithm is better than the single-imputation method (the EM algorithm) and the simple list-deletion method; for lower missing rates, it can even find models better than those obtained by the greedy learning algorithm MLGS on a complete data set. In addition, the method is amenable to parallel or distributed processing, which is an important characteristic for data mining in large data sets.
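
The multiple-imputation backbone of such a process can be sketched: create several completed copies of the data, learn on each, and pool the results. The imputation model below is a simple per-column normal draw, and the pooled statistic is a correlation; the paper's imputation and downstream structure learner are more elaborate.

```python
# Sketch: multiple imputation, then pooling of per-imputation estimates.
import numpy as np

def multiply_impute(data, m=5, rng=None):
    """Return m completed copies; NaNs are drawn from each column's
    observed mean/std (a deliberately simple imputation model)."""
    rng = np.random.default_rng(rng)
    copies = []
    for _ in range(m):
        filled = data.copy()
        for j in range(data.shape[1]):
            col = data[:, j]
            miss = np.isnan(col)
            mu, sd = col[~miss].mean(), col[~miss].std()
            filled[miss, j] = rng.normal(mu, sd, size=miss.sum())
        copies.append(filled)
    return copies

# Pool: estimate on each completed copy and average (Rubin-style point estimate).
data = np.array([[1.0, 2.0], [2.0, np.nan], [np.nan, 6.0], [4.0, 8.0]])
estimates = [np.corrcoef(c.T)[0, 1] for c in multiply_impute(data, m=10, rng=0)]
print("pooled correlation:", np.mean(estimates))
```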

Relevance:

90.00%

Abstract:

One major difficulty frustrating the application of linear causal models is that they are not easily adapted to cope with discrete data. This is unfortunate, since most real problems involve both continuous and discrete variables. In this paper, we consider a class of graphical models that allow both continuous and discrete variables, and propose a parameter estimation method and a structure discovery algorithm based on Minimum Message Length and parameter estimation. Experimental results are given to demonstrate the potential of this method in applications.
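
One standard way to let a graphical model mix variable types, broadly consistent with the class described here though not necessarily the paper's exact formulation, is the conditional Gaussian family: a continuous child gets a separate mean and variance for each configuration of its discrete parents. A minimal estimation sketch on invented data:

```python
# Sketch: conditional-Gaussian parameter estimation for a continuous child
# with one discrete parent: one (mean, variance) pair per parent state.
import numpy as np

def fit_conditional_gaussian(discrete_parent, continuous_child):
    params = {}
    for state in np.unique(discrete_parent):
        vals = continuous_child[discrete_parent == state]
        params[state] = (vals.mean(), vals.var(ddof=1))
    return params

parent = np.array([0, 0, 1, 1, 1, 0])              # toy discrete states
child = np.array([1.1, 0.9, 3.2, 2.8, 3.0, 1.0])   # toy continuous values
print(fit_conditional_gaussian(parent, child))     # separate Gaussian per state
```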