884 results for Graph-Based Linear Programming Modelling
Abstract:
In May 2006, the Ministers of Health of all the countries on the African continent, at a special session of the African Union, undertook to institutionalise efficiency monitoring within their respective national health information management systems. The specific objectives of this study were: (i) to assess the technical efficiency of the National Health Systems (NHSs) of African countries in producing male and female life expectancies, and (ii) to assess changes in health productivity over time with a view to analysing changes in efficiency and changes in technology. The analysis was based on five-year panel data (1999-2003) from all 53 countries of continental Africa. Data Envelopment Analysis (DEA), a non-parametric linear programming approach, was employed to assess technical efficiency. Malmquist Total Factor Productivity (MTFP) was used to analyse efficiency and productivity change over time among the 53 countries' national health systems. The data consisted of two outputs (male and female life expectancies) and two inputs (per capita total health expenditure and adult literacy). The DEA revealed that 49 (92.5%) countries' NHSs were run inefficiently in 1999 and 2000, while 50 (94.3%), 48 (90.6%) and 47 (88.7%) operated inefficiently in 2001, 2002 and 2003 respectively. All 53 countries' national health systems registered improvements in total factor productivity, attributable mainly to technical progress. Fifty-two countries did not experience any change in scale efficiency, while thirty (56.6%) countries' national health systems had a Pure Efficiency Change (PEFFCH) index of less than one, signifying that their pure efficiency contributed negatively to productivity change. African countries may need to critically evaluate the utility of institutionalising Malmquist TFP-type analyses to monitor changes in health system economic efficiency and productivity over time. Keywords: African national health systems, per capita total health expenditure, technical efficiency, scale efficiency, Malmquist indices of productivity change, DEA
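As an illustration of the kind of linear programme DEA solves for each decision-making unit, the sketch below sets up the input-oriented, constant-returns-to-scale (CCR) envelopment model with scipy.optimize.linprog. The two-input, two-output data are invented for the example and are not the study's health-system figures.

    # A minimal sketch of the input-oriented CCR (constant-returns) DEA
    # envelopment model, solved once per decision-making unit (DMU).
    # The data below are hypothetical, not the study's figures.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 4.0], [5.0, 2.0]])  # inputs (rows = DMUs)
    Y = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [2.0, 2.0]])  # outputs
    n, m = X.shape                 # number of DMUs, number of inputs
    s = Y.shape[1]                 # number of outputs

    def ccr_efficiency(o):
        """min theta  s.t.  X^T lam <= theta * x_o,  Y^T lam >= y_o,  lam >= 0."""
        c = np.r_[1.0, np.zeros(n)]                    # decision vector: [theta, lam_1..lam_n]
        A_in = np.c_[-X[o, :].reshape(-1, 1), X.T]     # X^T lam - theta * x_o <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y.T]          # -Y^T lam <= -y_o
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o, :]],
                      bounds=[(None, None)] + [(0, None)] * n,
                      method="highs")
        return res.fun

    for o in range(n):
        print(f"DMU {o}: technical efficiency = {ccr_efficiency(o):.3f}")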
Abstract:
This paper explores the use of the optimisation procedures in SAS/OR software with application to the measurement of efficiency and productivity of decision-making units (DMUs) using data envelopment analysis (DEA) techniques. DEA, originally introduced by Charnes et al. [Eur. J. Oper. Res. 2 (1978) 429], is a linear programming method for assessing the efficiency and productivity of DMUs. Over the last two decades, DEA has gained considerable attention as a managerial tool for measuring the performance of organisations, and it has been widely used for assessing the efficiency of public and private sectors such as banks, airlines, hospitals, universities and manufacturers. As a result, new applications with more variables and more complicated models are being introduced. Following the successive development of DEA, a non-parametric productivity measure, the Malmquist index, was introduced by Färe et al. [J. Prod. Anal. 3 (1992) 85]. Employing the Malmquist index, productivity growth can be decomposed into technical change and efficiency change. SAS, in turn, is powerful software capable of solving various optimisation problems, such as linear programs with all types of constraints. To facilitate the use of DEA and the Malmquist index by SAS users, a SAS/MALM code was implemented in the SAS programming language. The SAS macro developed in this paper selects the chosen variables from a SAS data file and constructs sets of linear-programming models based on the selected DEA model. An example is given to illustrate how one could use the code to measure the efficiency and productivity of organisations.
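For reference, the quantity computed between periods t and t+1 is the Malmquist index, which the Färe et al. decomposition splits into efficiency change and technical change; values above one indicate improvement:

\[
M(x^{t+1},y^{t+1},x^{t},y^{t})
= \underbrace{\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}}_{\text{efficiency change}}
\times
\underbrace{\left[\frac{D^{t}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t+1},y^{t+1})}\,
\frac{D^{t}(x^{t},y^{t})}{D^{t+1}(x^{t},y^{t})}\right]^{1/2}}_{\text{technical change}}
\]

where \(D^{t}\) denotes the distance function evaluated against the period-\(t\) frontier; each distance function is obtained by solving a DEA linear programme.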
Abstract:
The generalised transportation problem (GTP) is an extension of the linear Hitchcock transportation problem. However, it does not have the unimodularity property, which means that a linear programming solution (such as one obtained with the simplex method) is not guaranteed to be integer. This is a major difference between the GTP and the Hitchcock transportation problem. Some special algorithms, such as the generalised stepping-stone method, have been developed, but they are based on the linear programming model and relax the integer-solution requirement of the GTP. This paper proposes a genetic algorithm (GA) to solve the GTP, and a numerical example is presented to demonstrate the algorithm and its efficiency.
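For concreteness, one common statement of the GTP (a generic form, not necessarily the exact model used in the paper) is the following; the coefficients \(a_{ij}\) destroy the unimodular structure of the Hitchcock problem, so an LP relaxation need not return integer \(x_{ij}\):

\[
\min \sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij}x_{ij}
\quad\text{s.t.}\quad
\sum_{j=1}^{n} a_{ij}x_{ij} \le s_i \;(i=1,\dots,m),\qquad
\sum_{i=1}^{m} x_{ij} \ge d_j \;(j=1,\dots,n),\qquad
x_{ij} \ge 0 \text{ and integer.}
\]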
Abstract:
This thesis presents a theoretical investigation of applications of the Raman effect in optical fibre communication, together with the design and optimisation of various Raman-based devices and transmission schemes. The techniques used are mainly based on numerical modelling. The results presented in this thesis are divided into three main parts. First, novel designs of Raman fibre lasers (RFLs) based on phosphosilicate-core fibre are analysed and optimised for efficiency using a discrete power-balance model. The designs include a two-stage RFL based on phosphosilicate-core fibre for telecommunication applications, a composite RFL for the 1.6 μm spectral window, and a multiple-output-wavelength RFL intended as a compact pump source for flat-gain Raman amplifiers. The use of phosphosilicate-core fibre is shown to reduce design complexity effectively and hence to lead to better efficiency, stability and potentially lower cost. Second, a generalised Raman amplified gain model based on power-balance analysis and direct numerical simulation is developed. The approach can be used to simulate optical transmission systems with distributed Raman amplification efficiently. Last, the potential employment of a hybrid amplification scheme, a combination of a distributed Raman amplifier and an Erbium-doped amplifier, is investigated using the generalised Raman amplified gain model. The analysis focuses on the use of the scheme to upgrade a standard fibre network to a 40 Gb/s system.
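For orientation, the discrete power-balance description of Raman interaction couples the pump and Stokes powers along the fibre through equations of the following standard textbook form (a generic statement, not necessarily the exact model implemented in the thesis):

\[
\frac{dP_s}{dz} = \frac{g_R}{A_{\mathrm{eff}}}P_pP_s - \alpha_sP_s,
\qquad
\pm\frac{dP_p}{dz} = -\frac{\omega_p}{\omega_s}\frac{g_R}{A_{\mathrm{eff}}}P_pP_s - \alpha_pP_p,
\]

where \(g_R\) is the Raman gain coefficient, \(A_{\mathrm{eff}}\) the effective area, \(\alpha_{s,p}\) the fibre losses, and the sign of the pump equation distinguishes co- and counter-propagating pumping.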
Using interior point algorithms for the solution of linear programs with special structural features
Abstract:
Linear Programming (LP) is a powerful decision-making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least squares problem that arises in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions are included for improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented, taking sparsity into account. The relative performance of these algorithms on randomly generated separable and non-separable problems is also reported.
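To make the role of preconditioning concrete, the sketch below applies a plain preconditioned conjugate gradient iteration to a symmetric positive-definite system of the normal-equations type \(ADA^{T}y=b\) that interior point methods solve at every step; the Jacobi (diagonal) preconditioner is just one illustrative choice, not necessarily one of those studied in the thesis.

    # A minimal preconditioned conjugate gradient (PCG) sketch for the SPD
    # systems (e.g. normal equations A D A^T y = b) arising in each interior
    # point iteration.  The Jacobi preconditioner is illustrative only.
    import numpy as np

    def pcg(M, b, precond, tol=1e-10, max_iter=500):
        """Solve M y = b; `precond(r)` applies the preconditioner inverse to r."""
        y = np.zeros_like(b)
        r = b - M @ y
        z = precond(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Mp = M @ p
            alpha = rz / (p @ Mp)
            y += alpha * p
            r -= alpha * Mp
            if np.linalg.norm(r) < tol:
                break
            z = precond(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return y

    # Hypothetical data: M = A D A^T for a random A and positive diagonal D.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))
    D = np.diag(rng.uniform(0.1, 10.0, 200))
    M = A @ D @ A.T
    b = rng.standard_normal(50)
    jacobi = lambda r: r / np.diag(M)      # diagonal (Jacobi) preconditioner
    y = pcg(M, b, jacobi)
    print("residual norm:", np.linalg.norm(M @ y - b))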
Abstract:
Financial institutions are an integral part of any modern economy. In the 1970s and 1980s, Gulf Cooperation Council (GCC) countries made significant progress in financial deepening and in building a modern financial infrastructure. This study aims to evaluate the performance (efficiency) of financial institutions (the banking sector) in GCC countries. Since the selected variables take negative values for some banks and positive values for others, and the available evaluation methods cannot handle this case, we developed a Semi-Oriented Radial Model (SORM) to perform the evaluation. Furthermore, since the SORM evaluation result alone provides limited information for decision makers (bankers, investors, etc.), we propose a second-stage analysis using a classification and regression (C&R) method, combining the SORM results with other environmental data (financial, economic and political) to derive rules characterising efficient banks; the results are therefore useful to bankers seeking to improve their banks' performance and to investors seeking to maximise their returns. There are mainly two approaches to evaluating the performance of Decision Making Units (DMUs), each comprising different methods with different assumptions: the parametric approach, based on econometric regression theory, and the nonparametric approach, based on mathematical (linear) programming. Under the nonparametric approach there are two methods, Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH), while under the parametric approach there are three: Stochastic Frontier Analysis (SFA), Thick Frontier Analysis (TFA) and Distribution-Free Analysis (DFA). The review shows that DEA and SFA are the most applicable methods in the banking sector, with DEA appearing to be the most popular among researchers. However, DEA, like SFA, still faces many challenges; one of them is how to deal with negative data, since DEA requires the assumption that all input and output values are non-negative, whereas in many applications negative outputs can appear, e.g. losses in contrast with profits. Although a few DEA models have been developed to deal with negative data, each of them has its own limitations; we therefore developed the Semi-Oriented Radial Model (SORM) to handle the negativity issue in DEA. The application of SORM shows that the overall performance of GCC banking is relatively high (85.6%). Although the efficiency score fluctuated over the study period (1998-2007) owing to the second Gulf War and the international financial crisis, it remained higher than the efficiency scores of counterpart banks in other countries. Banks operating in Saudi Arabia appear to be the most efficient, followed by UAE, Omani and Bahraini banks, while banks operating in Qatar and Kuwait appear to be the least efficient; these two countries were the most affected by the second Gulf War. The results also show that there is no statistically significant relationship between operating style (Islamic or conventional) and bank efficiency. Even so, Islamic banks appear to be more efficient than conventional banks, with an average efficiency score of 86.33% compared with 85.38%. Furthermore, Islamic banks appear to be more affected by the political crisis (the second Gulf War), whereas conventional banks appear to be more affected by the financial crisis.
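The generic device behind semi-oriented radial treatments of negative data is to split each variable that can change sign into two non-negative parts and let the linear programme handle them separately; schematically (the general idea only, not the exact SORM formulation developed in the thesis):

\[
x_j = x_j^{+} - x_j^{-},\qquad
x_j^{+} = \max\{x_j, 0\},\qquad
x_j^{-} = \max\{-x_j, 0\},
\]

so that all data entering the DEA constraints are non-negative.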
Abstract:
This paper explores the use of the optimization procedures in SAS/OR software with application to contemporary logistics distribution network design using an integrated multiple criteria decision making approach. Unlike traditional optimization techniques, the proposed approach, combining the analytic hierarchy process (AHP) and goal programming (GP), considers both quantitative and qualitative factors. In the integrated approach, AHP is used to determine the relative importance weightings, or priorities, of alternative warehouses with respect to both deliverer-oriented and customer-oriented criteria. A GP model incorporating the constraints of system, resource, and AHP priority is then formulated to select the best set of warehouses without exceeding the limited available resources. To facilitate the use of the integrated multiple criteria decision making approach by SAS users, an ORMCDM code was implemented in the SAS programming language. The SAS macro developed in this paper selects the chosen variables from a SAS data file and constructs sets of linear programming models based on the selected GP model. An example is given to illustrate how one could use the code to design the logistics distribution network.
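To illustrate the AHP step that feeds the goal programme, the sketch below derives priority weights as the normalised principal eigenvector of a pairwise comparison matrix; the 3x3 matrix (three hypothetical warehouses) is invented for the example and is not taken from the paper.

    # A minimal sketch of AHP prioritisation: priorities are the normalised
    # principal eigenvector of a reciprocal pairwise comparison matrix.
    # The 3x3 matrix below (three hypothetical warehouses) is invented data.
    import numpy as np

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)             # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                            # priority weights

    # Saaty consistency check (random index RI = 0.58 for a 3x3 matrix)
    n = A.shape[0]
    CI = (eigvals[k].real - n) / (n - 1)
    print("priorities:", np.round(w, 3), " consistency ratio:", round(CI / 0.58, 3))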
Abstract:
Data envelopment analysis (DEA), as introduced by Charnes, Cooper, and Rhodes (1978), is a linear programming technique that has been widely used to evaluate the relative efficiency of a set of homogeneous decision making units (DMUs). In many real applications, the input-output variables cannot be precisely measured. This is particularly important in assessing the efficiency of DMUs using DEA, since the efficiency scores of inefficient DMUs are very sensitive to possible data errors. Hence, several approaches have been proposed to deal with imprecise data. Perhaps the most popular fuzzy DEA model is based on the α-cut. One drawback of the α-cut approach is that it cannot include all information about uncertainty. This paper aims to introduce an alternative linear programming model that can include some uncertainty information from the intervals within the α-cut approach. We introduce the concept of a "local α-level" to develop a multi-objective linear programming model to measure the efficiency of DMUs under uncertainty. An example is given to illustrate the use of this method.
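The α-cut device mentioned above replaces each fuzzy observation \(\tilde{x}\) by a nested family of intervals,

\[
[\tilde{x}]_{\alpha} = \{x : \mu_{\tilde{x}}(x) \ge \alpha\} = \bigl[x^{L}(\alpha),\,x^{U}(\alpha)\bigr],\qquad 0 < \alpha \le 1,
\]

and the efficiency of each DMU is then bounded by solving the DEA linear programme at favourable and unfavourable endpoints of these intervals (a generic statement of the standard construction, not the paper's local α-level model).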
Abstract:
This paper examines the problems in the definition of General Non-Parametric Corporate Performance (GNCP) and introduces a multiplicative linear programming model as an alternative model for corporate performance. We verified and tested a statistically significant difference between the two models based on an application to 27 UK industries using six performance ratios. Our new model is found to be a more robust performance model than the previous standard Data Envelopment Analysis (DEA) model.
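One standard way to obtain a multiplicative model (shown only as the generic construction, not necessarily the paper's GNCP alternative) is to score each unit by a ratio of weighted geometric means and linearise it with logarithms, which yields an ordinary linear programme in the weights \(u, v\):

\[
\max_{u,v}\ \frac{\prod_{r} y_{ro}^{\,u_r}}{\prod_{i} x_{io}^{\,v_i}}
\quad\Longleftrightarrow\quad
\max_{u,v}\ \sum_{r} u_r \log y_{ro} - \sum_{i} v_i \log x_{io},
\]

subject to the analogous log-linear constraints for every unit in the comparison set.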
Abstract:
In the contemporary customer-driven supply chain, maximization of customer service plays as important a role as minimization of costs in a company's effort to retain and increase its competitiveness. This article develops a multiple-criteria optimization approach, combining the analytic hierarchy process (AHP) and an integer linear programming (ILP) model, to aid the design of an optimal logistics distribution network. The proposed approach outperforms traditional cost-based optimization techniques because it considers both quantitative and qualitative factors and also aims at maximizing the benefits of the deliverer and customers. In the approach, the AHP is used to determine the relative importance weightings or priorities of alternative warehouses with respect to some critical customer-oriented criteria. The results of AHP prioritization are utilized as the input of the ILP model, the objective of which is to select the best warehouses at the lowest possible cost. Two commercial packages are used in this article: Expert Choice and LINDO.
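Stripped of the system and resource constraints, the warehouse-selection step has the flavour of a small binary programme of the following generic form (illustrative only):

\[
\max \sum_{j} w_j y_j
\quad\text{s.t.}\quad
\sum_{j} c_j y_j \le B,\qquad y_j \in \{0,1\},
\]

where \(w_j\) is the AHP priority of candidate warehouse \(j\), \(c_j\) its cost and \(B\) the available budget.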
Abstract:
AMS subject classification: 90C05, 90A14.
Abstract:
In this paper, we develop a new graph kernel by using the quantum Jensen-Shannon divergence and the discrete-time quantum walk. To this end, we commence by performing a discrete-time quantum walk to compute a density matrix over each graph being compared. For a pair of graphs, we compare the mixed quantum states represented by their density matrices using the quantum Jensen-Shannon divergence. With the density matrices for a pair of graphs to hand, the quantum graph kernel between the pair of graphs is defined by exponentiating the negative quantum Jensen-Shannon divergence between the graph density matrices. We evaluate the performance of our kernel on several standard graph datasets, and demonstrate the effectiveness of the new kernel.
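Concretely, for the density matrices \(\rho\) and \(\sigma\) produced by the two walks, the quantum Jensen-Shannon divergence and the resulting kernel take the standard forms

\[
D_{\mathrm{QJS}}(\rho,\sigma) = S\!\left(\tfrac{\rho+\sigma}{2}\right) - \tfrac{1}{2}\bigl(S(\rho)+S(\sigma)\bigr),
\qquad
k(G_1,G_2) = \exp\bigl(-D_{\mathrm{QJS}}(\rho_{G_1},\rho_{G_2})\bigr),
\]

where \(S(\rho) = -\mathrm{Tr}(\rho\log\rho)\) is the von Neumann entropy (a decay parameter multiplying the divergence is a common optional refinement).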
Abstract:
The popularity of online social media platforms provides an unprecedented opportunity to study real-world complex networks of interactions. However, releasing this data to researchers and the public comes at the cost of potentially exposing private and sensitive user information. It has been shown that a naive anonymization of a network by removing the identity of the nodes is not sufficient to preserve users' privacy. In order to deal with malicious attacks, k-anonymity solutions have been proposed to partially obfuscate topological information that can be used to infer nodes' identity. In this paper, we study the problem of ensuring k-anonymity in time-varying graphs, i.e., graphs with a structure that changes over time, and multi-layer graphs, i.e., graphs with multiple types of links. More specifically, we examine the case in which the attacker has access to the degree of the nodes. The goal is to generate a new graph where, given the degree of a node in each (temporal) layer of the graph, such a node remains indistinguishable from k-1 other nodes in the graph. In order to achieve this, we find the optimal partitioning of the graph nodes such that the cost of anonymizing the degree information within each group is minimum. We show that this reduces to a special case of a Generalized Assignment Problem, and we propose a simple yet effective algorithm to solve it. Finally, we introduce an iterated linear programming approach to enforce the realizability of the anonymized degree sequences. The efficacy of the method is assessed through an extensive set of experiments on synthetic and real-world graphs.
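As an illustration of the grouping objective, the cost of making a group of nodes indistinguishable by degree is typically the total amount by which their per-layer degrees must be raised to match the group maximum; in a generic form (not necessarily the exact cost used in the paper):

\[
\mathrm{cost}(P) = \sum_{g \in P}\ \sum_{v \in g}\ \sum_{\ell=1}^{L} \Bigl(\max_{u \in g} d_{\ell}(u) - d_{\ell}(v)\Bigr),
\]

where \(d_{\ell}(v)\) is the degree of node \(v\) in layer or snapshot \(\ell\) and \(P\) is a partition of the nodes into groups of size at least \(k\).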
Abstract:
In this paper, we use the quantum Jensen-Shannon divergence as a means to establish the similarity between a pair of graphs and to develop a novel graph kernel. In quantum theory, the quantum Jensen-Shannon divergence is defined as a distance measure between quantum states. In order to compute the quantum Jensen-Shannon divergence between a pair of graphs, we first need to associate a density operator with each of them. Hence, we decide to simulate the evolution of a continuous-time quantum walk on each graph and we propose a way to associate a suitable quantum state with it. With the density operator of this quantum state to hand, the graph kernel is defined as a function of the quantum Jensen-Shannon divergence between the graph density operators. We evaluate the performance of our kernel on several standard graph datasets from bioinformatics. We use Principal Component Analysis (PCA) on the kernel matrix to embed the graphs into a feature space for classification. The experimental results demonstrate the effectiveness of the proposed approach. © 2013 Springer-Verlag.
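For reference, the continuous-time quantum walk evolves an initial state under a graph Hamiltonian \(H\) (commonly the adjacency matrix or Laplacian), and a density operator can be associated with the walk by time-averaging; one standard way to write this (a generic construction whose details may differ from the paper) is

\[
|\psi(t)\rangle = e^{-\mathrm{i}Ht}|\psi(0)\rangle,
\qquad
\rho_G = \frac{1}{T}\int_{0}^{T} |\psi(t)\rangle\langle\psi(t)|\,dt,
\]

after which the kernel is a function of the quantum Jensen-Shannon divergence between \(\rho_{G_1}\) and \(\rho_{G_2}\).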
Abstract:
Graph-based representations have been used with considerable success in computer vision in the abstraction and recognition of object shape and scene structure. Despite this, the methodology available for learning structural representations from sets of training examples is relatively limited. In this paper we take a simple yet effective Bayesian approach to attributed graph learning. We present a naïve node-observation model, where we make the important assumption that the observation of each node and each edge is independent of the others; we then propose an EM-like approach to learn a mixture of these models and a Minimum Message Length criterion for component selection. Moreover, in order to avoid the bias that could arise from a single estimation of the node correspondences, we estimate the sampling probability over all possible matches. Finally, we show the utility of the proposed approach on popular computer vision tasks such as 2D and 3D shape recognition. © 2011 Springer-Verlag.
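The independence assumption of the naïve node-observation model amounts to a factorised likelihood of an observed graph \(g\) given the structural model and a node correspondence \(\sigma\); schematically (a generic statement of the assumption, not the paper's full mixture formulation):

\[
P(g \mid \mathcal{G}, \sigma) = \prod_{v} P\bigl(g_{\sigma(v)} \mid \theta_v\bigr)\,
\prod_{(u,v)} P\bigl(g_{\sigma(u)\sigma(v)} \mid \theta_{uv}\bigr),
\]

with per-node and per-edge observation parameters \(\theta\); a mixture of such models is then learned with the EM-like procedure and components are selected by Minimum Message Length.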