30 results for Linear programming

in Deakin Research Online - Australia


Relevance:

100.00%

Abstract:

We discuss the problem of learning fuzzy measures from empirical data. Values of the discrete Choquet integral are fitted to the data in the least absolute deviation sense. This problem is solved by linear programming techniques. We consider the cases in which the data are given on numerical and interval scales. An open-source programming library which facilitates calculations involving fuzzy measures and their learning from data is presented.
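As an editorial illustration only (not code from the paper or from the library it describes), the following minimal Python sketch shows how a least absolute deviation fit of a two-variable discrete Choquet integral reduces to a linear program; the toy data and the SciPy-based formulation are assumptions of this sketch.

# Sketch: LAD fit of a 2-variable Choquet integral as a linear program.
# Unknowns: mu1 = mu({1}), mu2 = mu({2}) (mu(empty)=0, mu({1,2})=1 fixed),
# plus one auxiliary variable e_k >= |residual_k| per observation.
import numpy as np
from scipy.optimize import linprog

# Toy data: rows are (x1, x2, y); purely illustrative.
data = np.array([[0.2, 0.8, 0.5],
                 [0.9, 0.3, 0.6],
                 [0.5, 0.5, 0.5],
                 [0.1, 0.4, 0.3]])
X, y = data[:, :2], data[:, 2]
n = len(y)

# For x1 <= x2: C(x) = x1 + (x2 - x1) * mu2; symmetrically for x2 <= x1.
A_coef = np.zeros((n, 2))          # coefficients of (mu1, mu2) in C(x_k)
b_const = X.min(axis=1)            # constant term min(x1, x2)
for k, (x1, x2) in enumerate(X):
    if x1 <= x2:
        A_coef[k, 1] = x2 - x1
    else:
        A_coef[k, 0] = x1 - x2

# Variables z = [mu1, mu2, e_1, ..., e_n]; minimise sum of e_k.
c = np.concatenate([np.zeros(2), np.ones(n)])
# e_k >= y_k - C_k  and  e_k >= C_k - y_k, written as A_ub @ z <= b_ub.
A_ub = np.vstack([np.hstack([-A_coef, -np.eye(n)]),
                  np.hstack([ A_coef, -np.eye(n)])])
b_ub = np.concatenate([b_const - y, y - b_const])
bounds = [(0, 1), (0, 1)] + [(0, None)] * n   # monotonicity: 0 <= mu_i <= 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("fitted fuzzy measure:", res.x[:2])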

Relevance:

100.00%

Abstract:

This article examines the construction of aggregation functions from data by minimizing the least absolute deviation criterion. We formulate various instances of such problems as linear programming problems. We consider the cases in which the data are provided as intervals and the ordering of the outputs needs to be preserved, and show that the linear programming formulation remains valid in these cases. This feature is very valuable in practice, since the standard simplex method can be used.
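A hedged sketch of the ordering-preservation point: fitting the weights of a weighted arithmetic mean in the least absolute deviation sense while forcing the fitted outputs to respect a given ordering adds only linear constraints, so the whole problem remains an LP. The toy data and variable names below are illustrative, not from the article.

# Sketch: LAD fit of a weighted arithmetic mean, with the extra requirement
# that the fitted outputs preserve the given ordering of the observations.
# Both the LAD objective and the ordering constraints are linear, so the
# whole problem is a single LP (solvable by the standard simplex / HiGHS).
import numpy as np
from scipy.optimize import linprog

X = np.array([[0.1, 0.9], [0.4, 0.5], [0.7, 0.6], [0.9, 0.8]])  # toy inputs
y = np.array([0.3, 0.45, 0.65, 0.85])                           # toy outputs, sorted ascending
n, m = X.shape

# Variables z = [w_1..w_m, e_1..e_n]; minimise sum of e_k.
c = np.concatenate([np.zeros(m), np.ones(n)])
A_ub = np.vstack([np.hstack([-X, -np.eye(n)]),    # e_k >= y_k - w.x_k
                  np.hstack([ X, -np.eye(n)])])   # e_k >= w.x_k - y_k
b_ub = np.concatenate([-y, y])
# Ordering preservation: y_1 <= ... <= y_n must imply w.x_1 <= ... <= w.x_n.
order_rows = [np.concatenate([X[k] - X[k + 1], np.zeros(n)]) for k in range(n - 1)]
A_ub = np.vstack([A_ub, order_rows])
b_ub = np.concatenate([b_ub, np.zeros(n - 1)])
A_eq = np.concatenate([np.ones(m), np.zeros(n)]).reshape(1, -1)  # weights sum to 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (m + n), method="highs")
print("fitted weights:", res.x[:m])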

Relevance:

100.00%

Abstract:

In this paper, the zero-order Sugeno Fuzzy Inference System (FIS) that preserves the monotonicity property is studied. The sufficient conditions for the zero-order Sugeno FIS model to satisfy the monotonicity property are exploited as a set of useful governing equations to facilitate the FIS modelling process. The sufficient conditions suggest a fuzzy partition (at the rule antecedent part) and a monotonically-ordered rule base (at the rule consequent part) that can preserve the monotonicity property. The investigation focuses on the use of two Similarity Reasoning (SR)-based methods, i.e., Analogical Reasoning (AR) and Fuzzy Rule Interpolation (FRI), to deduce each conclusion separately. It is shown that AR and FRI may not provide a direct solution to the modelling of a multi-input FIS model that fulfils the monotonicity property, owing to the difficulty of obtaining a set of monotonically-ordered conclusions. As such, a Non-Linear Programming (NLP)-based SR scheme for constructing a monotonicity-preserving multi-input FIS model is proposed. In the proposed scheme, AR or FRI is first used to predict the rule conclusion of each observation. A search algorithm is then adopted to look for a set of consequents that minimizes the root mean square error with respect to the predicted conclusions. A constraint imposed by the sufficient conditions is also included in the search process. Applicability of the proposed scheme to fuzzy Failure Mode and Effect Analysis (FMEA) tasks is demonstrated. The results indicate that the proposed NLP-based SR scheme is useful for preserving the monotonicity property when building a multi-input FIS model with an incomplete rule base.
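The paper's NLP-based search is not reproduced here; as a loose illustration of its key step, the sketch below projects a set of AR/FRI-predicted consequents onto the nearest monotonically non-decreasing sequence (in the squared-error, hence RMSE, sense) using pool adjacent violators. The function name and toy predictions are assumptions of this sketch.

# Sketch: project AR/FRI-predicted rule consequents onto a monotonically
# non-decreasing sequence, minimising the squared error (hence the RMSE)
# to the predictions. Pool adjacent violators gives the exact minimiser of
# this simplified, single-ordering version of the constrained search.
import numpy as np

def monotone_consequents(predicted):
    merged = []                                   # list of [block mean, block size]
    for value in predicted:
        merged.append([float(value), 1.0])
        # Merge backwards while the non-decreasing order is violated.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, w2 = merged.pop()
            m1, w1 = merged.pop()
            merged.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for mean, w in merged:
        out.extend([mean] * int(w))
    return np.array(out)

predicted = np.array([0.20, 0.35, 0.30, 0.50, 0.45])   # toy predicted conclusions
print(monotone_consequents(predicted))                 # non-decreasing consequents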

Relevance:

100.00%

Abstract:

The generalized Bonferroni mean is able to capture some interaction effects between variables and to model mandatory requirements. We present a number of weight-identification algorithms, developed in the R programming language, for fitting the generalized Bonferroni mean to data subject to various preferences. We then compare its fitting accuracy on the journal ranks dataset.
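The authors' R routines are not reproduced here; for reference, the sketch below computes the classical Bonferroni mean B^{p,q}, which the generalized form extends by replacing the inner and outer averages with weighted aggregation functions fitted to data. The function name and inputs are illustrative.

# Sketch: the classical Bonferroni mean B^{p,q}(x); the generalized form in
# the paper replaces the inner and outer averages with weighted aggregation
# functions whose weights are identified from data (not reproduced here).
import numpy as np

def bonferroni_mean(x, p=1.0, q=1.0):
    x = np.asarray(x, dtype=float)
    n = len(x)
    total = sum(x[i] ** p * x[j] ** q
                for i in range(n) for j in range(n) if i != j)
    return (total / (n * (n - 1))) ** (1.0 / (p + q))

print(bonferroni_mean([0.2, 0.6, 0.9]))   # lies between min and max of the inputs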

Relevance:

100.00%

Abstract:

Solving fuzzy linear programming (FLP) problems requires a consistent ranking of fuzzy numbers; an ineffective ranking leads to a flawed and erroneous solution approach. This paper presents a comprehensive and extensive review of fuzzy number ranking methods. Ranking techniques are categorised into six classes based on their characteristics: centroid methods, distance methods, area methods, lexicographical methods, methods based on the decision maker's viewpoint, and methods based on left and right spreads. A survey of solution approaches to FLP is also reported. We then point out errors in several existing methods relevant to the ranking of fuzzy numbers and suggest an effective method to solve FLP. FLP problems are thereby converted into non-fuzzy single (or multiple) objective linear programs based on a consistent centroid-based ranking of fuzzy numbers. Solutions of FLP are then obtained by solving the corresponding crisp single (or multiple) objective programming problems by conventional methods.
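As a hedged illustration of the centroid-based ranking the paper builds on, the sketch below ranks trapezoidal fuzzy numbers (a, b, c, d) by the x-coordinate of their centroid; a full method would need a secondary criterion for ties. The data and names are illustrative, not taken from the paper.

# Sketch: centroid-based ranking of trapezoidal fuzzy numbers (a, b, c, d),
# where [a, d] is the support and [b, c] the core. Ranking by the centroid's
# x-coordinate is the kind of consistent ranking used to convert an FLP into
# a crisp LP.
def centroid_x(a, b, c, d):
    if c + d == a + b:                 # degenerate (crisp) number
        return (a + d) / 2.0
    # Closed form of (integral of x*mu(x) dx) / (integral of mu(x) dx).
    return (a + b + c + d - (c * d - a * b) / (c + d - a - b)) / 3.0

fuzzy_costs = {"A": (1, 2, 3, 4), "B": (0, 2.5, 2.5, 5), "C": (2, 3, 3, 4)}
ranking = sorted(fuzzy_costs, key=lambda k: centroid_x(*fuzzy_costs[k]))
print(ranking)   # smallest centroid first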

Relevance:

100.00%

Abstract:

Since asset returns are recognized as not being normally distributed, research on portfolio higher moments soon emerged. To account for the uncertainty and vagueness of portfolio returns as well as of higher-moment risks, this paper proposes a new portfolio selection model employing fuzzy sets. A fuzzy multi-objective linear programming (MOLP) problem for portfolio optimization is formulated using the marginal impacts of assets on portfolio higher moments, which are modelled by trapezoidal fuzzy numbers. Through a consistent centroid-based ranking of fuzzy numbers, the fuzzy MOLP is transformed into an MOLP that is then solved by the maximin method. By taking portfolio higher moments into account, the approach enables investors to optimize not only the normal risk (variance) but also the asymmetric risk (skewness) and the risk of fat tails (kurtosis). An illustrative example demonstrates the efficiency of the proposed methodology compared with previous portfolio optimization models.
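A minimal sketch of the maximin step alone, assuming each objective has already been defuzzified and normalised to a satisfaction level in [0, 1]: maximising the minimum satisfaction is itself a single LP. The toy objectives, bounds, and names below are assumptions, not the portfolio model of the paper.

# Sketch: the maximin step for a multi-objective LP. Each objective c_k.w is
# normalised to a satisfaction level mu_k(w) = (c_k.w - L_k) / (U_k - L_k),
# and we maximise lambda subject to lambda <= mu_k(w) for all k: a single LP.
import numpy as np
from scipy.optimize import linprog

C = np.array([[0.10, 0.07, 0.12],     # toy objective rows (return, skewness, ...)
              [0.02, 0.05, 0.01]])
L = np.array([0.07, 0.01])            # worst value of each objective
U = np.array([0.12, 0.05])            # best (ideal) value of each objective
k, m = C.shape

# Variables z = [w_1..w_m, lambda]; maximise lambda <=> minimise -lambda.
c = np.concatenate([np.zeros(m), [-1.0]])
# lambda <= (C_k.w - L_k)/(U_k - L_k)  ->  -C_k.w/(U_k-L_k) + lambda <= -L_k/(U_k-L_k)
A_ub = np.hstack([-C / (U - L)[:, None], np.ones((k, 1))])
b_ub = -L / (U - L)
A_eq = np.array([[1.0] * m + [0.0]])  # portfolio weights sum to 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * m + [(0, 1)], method="highs")
print("weights:", res.x[:m], "satisfaction:", res.x[-1])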

Relevance:

70.00%

Abstract:

Some illustrative examples are provided to identify the ineffective and unrealistic characteristics of existing approaches to solving fuzzy linear programming (FLP) problems (with single or multiple objectives). We point out errors in existing methods concerning the ranking of fuzzy numbers and then suggest an effective method to solve the FLP. Based on a consistent centroid-based ranking of fuzzy numbers, FLP problems are transformed into non-fuzzy single (or multiple) objective linear programs, whose solutions can then be obtained by conventional methods.

Relevance:

70.00%

Abstract:

The Kidney Exchange Problem (KEP) is a combinatorial optimization problem that has attracted attention from the integer programming and combinatorial optimisation community in the past few years. Defined on a directed graph, the KEP has two variations: one concerns cycles only, and the other cycles as well as chains on the same graph. We call the former the Cardinality Constrained Multi-cycle Problem (CCMcP) and the latter the Cardinality Constrained Cycles and Chains Problem (CCCCP). The cardinality of cycles is restricted in both the CCMcP and the CCCCP; for chains, some studies in the literature consider cardinality restrictions, whereas others do not. The CCMcP can be viewed as an Asymmetric Travelling Salesman Problem in which subtours are allowed but constrained by cardinality, and in which it is not necessary to visit all vertices. In the existing KEP literature, the cardinality bound for cycles is usually small (to the best of our knowledge, no more than six). In the CCCCP, each vertex of the directed graph can be included in at most one cycle or chain, but not both. The CCMcP and the CCCCP are interesting and challenging combinatorial optimization problems in their own right, particularly because of their similarities to the travelling salesman and vehicle routing families of problems. In this paper, our main focus is to review the existing mathematical programming models and solution methods in the literature, analyse the performance of these models, and identify future research directions. Further, we propose a polynomial-sized and an exponential-sized mixed-integer linear programming model, discuss a number of stronger constraints for cardinality-infeasible-cycle elimination in the latter, and present some preliminary numerical results.
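As a hedged illustration of one model class reviewed here, the sketch below implements the classical cycle formulation for the cycles-only variant (CCMcP): enumerate all cycles up to the cardinality bound, then solve a vertex-disjoint set-packing integer program maximising the number of transplants. It assumes SciPy >= 1.9 for scipy.optimize.milp; the toy digraph and names are illustrative, not from the paper.

# Sketch: cycle formulation of the cycles-only KEP (CCMcP). Enumerate all
# directed cycles of length <= K, then choose a vertex-disjoint subset of
# cycles maximising the number of covered vertices (transplants).
import numpy as np
from itertools import permutations
from scipy.optimize import milp, LinearConstraint, Bounds

arcs = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 1), (3, 4), (4, 3)}   # toy digraph
n_vertices, K = 5, 3

def cycles_up_to(K):
    found = set()
    for r in range(2, K + 1):
        for perm in permutations(range(n_vertices), r):
            if perm[0] == min(perm):                      # one canonical rotation
                edges = zip(perm, perm[1:] + perm[:1])
                if all(e in arcs for e in edges):
                    found.add(perm)
    return [list(c) for c in found]

cycles = cycles_up_to(K)
# Maximise sum_c |c| * z_c  s.t. each vertex lies in at most one chosen cycle.
c_obj = -np.array([len(c) for c in cycles], dtype=float)   # milp minimises
A = np.zeros((n_vertices, len(cycles)))
for j, cyc in enumerate(cycles):
    A[cyc, j] = 1.0
res = milp(c_obj,
           constraints=LinearConstraint(A, np.full(n_vertices, -np.inf),
                                        np.ones(n_vertices)),
           integrality=np.ones(len(cycles)),
           bounds=Bounds(0, 1))
chosen = [cycles[j] for j in range(len(cycles)) if res.x[j] > 0.5]
print("selected cycles:", chosen)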

Relevance:

70.00%

Abstract:

In this study, the authors address a new problem of finding, with a pre-specified time, bounds of partial states of non-linear discrete systems with a time-varying delay. A novel computational method for deriving the smallest bounds is presented. The method is based on a new comparison principle, a new algorithm for finding the infimum of a fractal function, and linear programming. The effectiveness of the obtained results is illustrated through a numerical example.

Relevance:

70.00%

Abstract:

We consider an optimization problem in ecology where our objective is to maximize biodiversity with respect to different land-use allocations. As it turns out, the main problem can be framed as learning the weights of a weighted arithmetic mean where the objective is the geometric mean of its outputs. We propose methods for approximating solutions to this and similar problems, which are non-linear by nature, using linear and bilevel techniques.
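A minimal sketch of the underlying non-linear problem, solved here with a general NLP routine rather than the linear and bilevel approximations proposed in the paper: maximising the geometric mean of weighted-arithmetic-mean outputs is, after a log transform, a concave maximisation over the weight simplex. The toy data and names are assumptions of this sketch.

# Sketch: choose weights w (on the simplex) of a weighted arithmetic mean so
# that the geometric mean of its outputs over the data is maximised. Taking
# logs turns the geometric mean into a concave sum of log(w.x_i), maximised
# here with a general solver (the paper approximates the problem with linear
# and bilevel techniques instead).
import numpy as np
from scipy.optimize import minimize

X = np.array([[0.2, 0.9, 0.4],        # toy rows: scores per land-use allocation
              [0.8, 0.1, 0.5],
              [0.4, 0.6, 0.7]])
m = X.shape[1]

def neg_log_geomean(w):
    return -np.mean(np.log(X @ w + 1e-12))

res = minimize(neg_log_geomean, x0=np.full(m, 1.0 / m),
               bounds=[(0, 1)] * m,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("weights:", res.x, "geometric mean:", np.exp(-res.fun))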

Relevance:

70.00%

Abstract:

This brief considers the new problem of designing reduced-order positive linear functional observers for positive time-delay systems. The order of the designed functional observers is equal to the dimension of the functional state vector to be estimated. The designed functional observers are always nonnegative at any time and converge asymptotically to the true functional state vector. Moreover, conditions for the existence of such positive linear functional observers are formulated in terms of linear programming (LP). Numerical examples and simulation results are given to illustrate the effectiveness of the proposed design method.

Relevance:

60.00%

Abstract:

We propose a new technique to perform unsupervised data classification (clustering) based on a density-induced metric and non-smooth optimization. Our goal is to automatically recognize multidimensional clusters of non-convex shape. We present a modification of the fuzzy c-means algorithm which uses the data-induced metric, defined with the help of a Delaunay triangulation. We detail the computation of distances in this metric using graph algorithms. To find optimal positions of cluster prototypes, we employ the discrete gradient method of non-smooth optimization. The new clustering method is capable of identifying non-convex, overlapping d-dimensional clusters.
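A hedged sketch of the data-induced metric only (not the authors' full discrete-gradient clustering method): pairwise distances are taken as shortest paths along the edges of a Delaunay triangulation, so they follow the shape of the point cloud rather than cutting straight across gaps. The toy data and names are illustrative.

# Sketch: a data-induced metric from a Delaunay triangulation. Pairwise
# distances are shortest paths along triangulation edges; these distances
# could then replace the Euclidean metric inside a c-means-style algorithm.
import numpy as np
from itertools import combinations
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
points = rng.random((60, 2))                      # toy 2-D data

tri = Delaunay(points)
n = len(points)
graph = lil_matrix((n, n))
for simplex in tri.simplices:                     # add every triangle edge
    for i, j in combinations(simplex, 2):
        w = np.linalg.norm(points[i] - points[j])
        graph[i, j] = graph[j, i] = w

D = shortest_path(graph.tocsr(), directed=False)  # graph-geodesic distances
print(D.shape)   # D[i, j] is the data-induced distance between points i and j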


Relevance:

60.00%

Abstract:

This paper estimates productivity growth in Malaysian manufacturing over the period 1983-1999. Malmquist Productivity Indices (MPIs) have been computed using non-parametric Data Envelopment Analysis (DEA)-type linear programming, which decomposes productivity growth into efficiency change and technological change. Unlike previous studies, this study identifies the sources of productivity growth in Malaysian manufacturing industries at the five-digit level of the Malaysian Standard Industrial Classification (MSIC), thereby revealing more industry-specific efficiency and technical growth patterns. The results indicate that a large majority of the industries operated with low levels of technical efficiency, with little or no improvement over time. Growth estimates reveal that two thirds of the industries (76 of 114 categories) experienced average annual productivity improvements ranging from 0.1% to 7.8%. Average annual technical progress was recorded by 95 industry categories, while technical efficiency improvement was achieved by 53 industries. Overall yearly averages indicate relatively low productivity growth from the mid-1990s onwards, caused by either efficiency decline or technical regress. Some of the highest rates of productivity growth were recorded in glass and glass products (7.3%), petroleum and coal (7.2%), and industrial chemicals (4.9%), driven by both efficiency improvement (ranging from 0.8% to 5.4%) and technical progress (ranging from 1.7% to 4.1%). These results are expected to have implications for ongoing and future strategic policy reform in Malaysian manufacturing, aimed at generating more sustainable growth for specific industry categories.
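As background, a minimal sketch of the kind of DEA linear program that underlies Malmquist indices: the standard output-oriented, constant-returns-to-scale efficiency LP for a single decision-making unit. This is not the paper's exact computation, and the data and names are illustrative.

# Sketch: the output-oriented, constant-returns-to-scale DEA LP for one
# decision-making unit (DMU) o: maximise phi subject to the reference
# technology producing at least phi * outputs_o from no more than inputs_o.
# Malmquist indices are built from several such LPs per DMU and period pair.
import numpy as np
from scipy.optimize import linprog

inputs = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])    # toy: DMU x input
outputs = np.array([[1.0], [2.0], [1.5]])                  # toy: DMU x output
n_dmu = inputs.shape[0]

def output_efficiency(o):
    # Variables z = [phi, lambda_1..lambda_n]; minimise -phi.
    c = np.concatenate([[-1.0], np.zeros(n_dmu)])
    # Inputs:  sum_j lambda_j * x_j <= x_o
    A_in = np.hstack([np.zeros((inputs.shape[1], 1)), inputs.T])
    b_in = inputs[o]
    # Outputs: phi * y_o - sum_j lambda_j * y_j <= 0
    A_out = np.hstack([outputs[o][:, None], -outputs.T])
    b_out = np.zeros(outputs.shape[1])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(1, None)] + [(0, None)] * n_dmu, method="highs")
    return res.x[0]            # phi >= 1; the Farrell efficiency score is 1 / phi

print([round(output_efficiency(o), 3) for o in range(n_dmu)])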

Relevance:

60.00%

Abstract:

The asymmetric travelling salesman problem with replenishment arcs (RATSP), arising from work related to aircraft routing, is a generalisation of the well-known ATSP. In this paper, we introduce a polynomial-size mixed-integer linear programming (MILP) formulation for the RATSP, and improve an existing exponential-size ILP formulation of Zhu [The aircraft rotation problem, Ph.D. Thesis, Georgia Institute of Technology, Atlanta, 1994] by proposing two classes of stronger cuts. We show that, under certain conditions, these two classes of stronger cuts are facet-defining for the RATSP polytope, and that ATSP facets can be lifted to give RATSP facets. We implement our polyhedral findings and develop a Lagrangean relaxation (LR)-based branch-and-bound (BNB) algorithm for the RATSP, and compare this method with solving the polynomial-size formulation using ILOG CPLEX 9.0, on both randomly generated problems and aircraft routing problems. Finally, we compare our methods with the existing method of Boland et al. [The asymmetric traveling salesman problem with replenishment arcs, European J. Oper. Res. 123 (2000) 408–427]. It turns out that both of our methods are much faster than that of Boland et al., and that the LR-based BNB method is more efficient for problems that resemble aircraft rotation problems.

Relevance:

60.00%

Abstract:

This paper describes a new approach to multivariate scattered data smoothing. It is assumed that the data are generated by a Lipschitz continuous function f and include random noise to be filtered out. The proposed approach uses a known or estimated value of the Lipschitz constant of f, and forces the data to be consistent with the Lipschitz properties of f. Depending on the assumptions about the distribution of the random noise, smoothing reduces to a standard quadratic or linear programming problem. We discuss an efficient algorithm which eliminates the redundant inequality constraints. Numerical experiments illustrate the applicability and efficiency of the method. This approach provides an efficient new tool for multivariate scattered data approximation.
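A minimal sketch of the least-absolute-deviation noise case described here, without the redundant-constraint elimination the paper discusses: find smoothed values z minimising the total absolute deviation subject to the pairwise Lipschitz constraints. The toy data and the Lipschitz constant estimate are assumptions of this sketch.

# Sketch: Lipschitz-consistent smoothing as an LP (the least-absolute-
# deviation noise case). Find z minimising sum |y_i - z_i| subject to
# |z_i - z_j| <= M * ||x_i - x_j|| for all pairs, where M is a known or
# estimated Lipschitz constant of the underlying function.
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

rng = np.random.default_rng(1)
X = rng.random((30, 2))                                    # toy sample sites
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(30)    # noisy values
M = 3.0                                                    # Lipschitz constant estimate
n = len(y)

# Variables v = [z_1..z_n, e_1..e_n]; minimise sum of e_i with e_i >= |y_i - z_i|.
c = np.concatenate([np.zeros(n), np.ones(n)])
rows, b = [], []
for i in range(n):                                 # absolute-deviation epigraph
    for sign in (+1, -1):
        r = np.zeros(2 * n); r[i] = sign; r[n + i] = -1.0
        rows.append(r); b.append(sign * y[i])
for i, j in combinations(range(n), 2):             # Lipschitz consistency
    bound = M * np.linalg.norm(X[i] - X[j])
    for sign in (+1, -1):
        r = np.zeros(2 * n); r[i] = sign; r[j] = -sign
        rows.append(r); b.append(bound)

res = linprog(c, A_ub=np.array(rows), b_ub=np.array(b),
              bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
z = res.x[:n]                                      # smoothed, Lipschitz-consistent values
print("max adjustment:", np.abs(z - y).max())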