870 results for Semi-infinite linear programming
Abstract:
Generalizing from his experience in solving practical problems, Koopmans set about devising the linear activity analysis model. To his surprise, he found that the economics of his day possessed no uniform, sufficiently exact theory of production or system of concepts for it. In a pioneering paper he therefore also laid down, as a theoretical framework for the linear activity analysis model, the foundations of an axiomatic production theory resting on the concept of technology sets. He is credited with the exact definition of the concepts of productive efficiency and efficiency prices, and with proving their mutually dependent relationship within the linear activity analysis model. Koopmans treated the purely technical definition of efficiency in use today only as a special case; his aim was to introduce and analyse the concept of economic efficiency. In this paper we use the duality theorems of linear programming to reconstruct his results concerning the latter. We show, first, that his proofs are equivalent to proving the duality theorems of linear programming and, second, that the economic efficiency prices are in fact shadow prices in today's sense. We also point out that the model he formulated for interpreting economic efficiency can be regarded as a direct predecessor of the Arrow-Debreu-McKenzie general equilibrium models, containing almost all of their essential elements and concepts: the equilibrium prices are nothing other than Koopmans' efficiency prices. Finally, we reinterpret Koopmans' model as a possible tool for the microeconomic description of the firm's technology. Journal of Economic Literature (JEL) codes: B23, B41, C61, D20, D50.
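For reference, a minimal sketch of the primal-dual pair on which the duality-theorem reconstruction rests, written in standard LP notation rather than Koopmans' original activity-analysis formulation; the dual variables y play the role of the efficiency (shadow) prices the abstract refers to.

```latex
% Minimal primal--dual sketch in standard LP notation (illustrative only,
% not Koopmans' original activity-analysis formulation).
\begin{align*}
\text{(P)}\quad & \max_{x \ge 0}\ c^{\top}x \quad \text{s.t. } Ax \le b, \\
\text{(D)}\quad & \min_{y \ge 0}\ b^{\top}y \quad \text{s.t. } A^{\top}y \ge c.
\end{align*}
% Strong duality gives $c^{\top}x^{*} = b^{\top}y^{*}$, and complementary
% slackness attaches a positive dual price $y_i^{*}$ only to binding resource
% constraints, which is the sense in which efficiency prices act as shadow prices.
```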
Abstract:
A distance-based inconsistency indicator, defined by the third author for the consistency-driven pairwise comparisons method, is extended to the incomplete case. The corresponding optimization problem is transformed into an equivalent linear programming problem. The results can be applied during the process of filling in the matrix, as the decision maker gets automatic feedback: as soon as a serious error occurs among the matrix elements, even due to a misprint, a significant increase in the inconsistency index is reported. High inconsistency can thus be flagged not only at the end of the process of filling in the matrix but also during completion. Numerical examples are also provided.
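One common way such a consistency measure becomes a linear program is to take logarithms of the matrix entries and minimise the worst-case deviation from a consistent matrix. The sketch below illustrates that Chebyshev-style reduction on a hypothetical incomplete matrix; it is an assumed, simplified transformation, not necessarily the exact LP derived in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical incomplete pairwise comparison data: a[(i, j)] means
# "alternative i is a[(i, j)] times preferred to alternative j".
known = {(0, 1): 2.0, (1, 2): 3.0, (0, 3): 4.0}   # assumed example values
n = 4

# Variables: x_0..x_{n-1} (log-weights) and t (worst-case deviation).
# Minimise t subject to |x_i - x_j - log a_ij| <= t for every known pair,
# which is linear after splitting the absolute value into two inequalities.
c = np.zeros(n + 1)
c[-1] = 1.0
A_ub, b_ub = [], []
for (i, j), a in known.items():
    r1 = np.zeros(n + 1); r1[i], r1[j], r1[-1] = 1.0, -1.0, -1.0
    r2 = np.zeros(n + 1); r2[i], r2[j], r2[-1] = -1.0, 1.0, -1.0
    A_ub += [r1, r2]
    b_ub += [np.log(a), -np.log(a)]

# Fix x_0 = 0 as a normalisation; t is non-negative.
bounds = [(0, 0)] + [(None, None)] * (n - 1) + [(0, None)]
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
print("worst-case log deviation:", res.fun)
print("implied weights:", np.exp(res.x[:n]))
```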
Abstract:
This paper presents a harmony search metaheuristic that maximises the net present value (NPV) of a project over the set of makespan-minimal resource-constrained schedules. In the proposed approach a resource-constrained project is characterised by its "best" schedule, where best means a makespan-minimal resource-constrained schedule for which the NPV measure is maximal. Theoretically, the optimal schedule search can be formulated as a two-phase mixed integer linear programming (MILP) problem: the first phase determines the minimal makespan of the resource-constrained schedules, and the second phase, treating that makespan as a constraint, maximises the NPV over the set of makespan-minimal schedules. Because the problem is NP-hard, an exact solution can be obtained in acceptable time only for small projects. The metaheuristic presented here is a further development of the "conflict repairing" version of the "Sounds of Silence" harmony search metaheuristic developed by Csébfalvi (2007) for the resource-constrained project scheduling problem (RCPSP), which resolves resource conflicts by inserting precedence relations. To illustrate the effectiveness and viability of the proposed metaheuristic, computational results are presented for the J30 subset of the well-known and popular PSPLIB test library. The exact solutions were generated with a state-of-the-art MILP solver (CPLEX).
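For readers unfamiliar with the metaheuristic family, the sketch below shows a generic harmony search loop on a continuous test function. The "Sounds of Silence" variant used in the paper instead works on activity priority lists and repairs resource conflicts, so this is only a structural illustration with assumed parameter names and values.

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Generic harmony search: keep a memory of candidate solutions and
    improvise new ones from it (memory consideration, pitch adjustment,
    random selection), replacing the worst member whenever the new one is better."""
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                  # memory consideration
                val = random.choice(memory)[d]
                if random.random() < par:               # pitch adjustment
                    val += random.uniform(-bw, bw) * (hi - lo)
            else:                                       # random selection
                val = random.uniform(lo, hi)
            new.append(min(max(val, lo), hi))
        score = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if score < scores[worst]:                       # keep it if it beats the worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# usage: minimise a simple quadratic in three dimensions
print(harmony_search(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3))
```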
Abstract:
Cash management has long been studied in operational research and quantitative finance. This paper presents a cash management study for a bank, based on real-world data and using mixed integer linear programming (MILP) as the main tool. Deterministic and stochastic approaches are compared. The classical cash management problem is extended in two directions: the branch's handling of multiple currencies is considered, as well as the possibility of cash transports between bank branches. The MILP problems were solved with glpk (GNU Linear Programming Kit), a free software package, so the paper also gives an impression of the solver's applicability and limitations.
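As a toy-sized, hedged illustration of the kind of MILP involved (hypothetical data and cost parameters, not the branch-level model of the article), the sketch below plans cash deliveries to a single branch with a fixed transport cost. It is formulated in PuLP and handed to the GLPK solver the article mentions; glpsol must be installed for GLPK_CMD to work.

```python
import pulp

days = range(5)
demand = [120, 80, 150, 60, 100]        # assumed daily withdrawals
opening = 200.0                         # assumed opening cash balance
hold_cost, transport_cost = 0.02, 50.0  # assumed unit holding cost / fixed transport cost

prob = pulp.LpProblem("branch_cash_plan", pulp.LpMinimize)
order = pulp.LpVariable.dicts("order", days, lowBound=0)     # cash delivered on day t
stock = pulp.LpVariable.dicts("stock", days, lowBound=0)     # end-of-day balance
ship = pulp.LpVariable.dicts("ship", days, cat="Binary")     # 1 if a transport runs on day t

prob += pulp.lpSum(hold_cost * stock[t] + transport_cost * ship[t] for t in days)
for t in days:
    prev = stock[t - 1] if t > 0 else opening
    prob += stock[t] == prev + order[t] - demand[t]           # cash balance equation
    prob += order[t] <= 1000 * ship[t]                        # big-M: ordering requires a transport

prob.solve(pulp.GLPK_CMD(msg=False))                          # requires glpsol on the PATH
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
for t in days:
    print(t, order[t].value(), stock[t].value())
```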
Abstract:
The author surveys the changes that have taken place in applied multisectoral modelling, from linear programming models to computable general equilibrium models. After a brief historical retrospection, he presents the common and distinguishing features of general equilibrium models by comparing them with economy-wide models based on linear programming methods. He also shows how general equilibrium models can be used to analyse the consistency of economic policy targets and the trade-off possibilities among those targets, and, more generally, to carry out sensitivity analyses of economic policy proposals. The discussion of theoretical and methodological questions is illustrated with the aid of a computable general equilibrium model.
Abstract:
We present a general model to find the best allocation of a limited amount of supplements (extra minutes added to a timetable in order to reduce delays) on a set of interfering railway lines. By the best allocation, we mean the solution under which the weighted sum of expected delays is minimal. Our aim is to finely adjust an already existing and well-functioning timetable. We model this inherently stochastic optimization problem by using two-stage recourse models from stochastic programming, building upon earlier research from the literature. We present an improved formulation, allowing for an efficient solution using a standard algorithm for recourse models. We show that our model may be solved using any of the following theoretical frameworks: linear programming, stochastic programming and convex non-linear programming, and present a comparison of these approaches based on a real-life case study. Finally, we introduce stochastic dependency into the model, and present a statistical technique to estimate the model parameters from empirical data.
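A minimal deterministic-equivalent sketch of such a recourse model, with hypothetical delay scenarios, weights, and budget rather than the paper's railway data: first-stage variables allocate a supplement budget over activities, and second-stage variables measure the residual delay in each scenario.

```python
import pulp

acts = range(3)                      # assumed activities that may receive supplements
scen = range(4)                      # assumed delay scenarios, taken as equally likely
base_delay = [[4, 0, 2], [1, 3, 0], [0, 5, 1], [2, 2, 2]]   # base_delay[s][i], assumed
weight = [1.0, 2.0, 1.5]             # assumed importance weights per activity
budget = 5.0                         # total supplement minutes available

m = pulp.LpProblem("supplement_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("supp", acts, lowBound=0)                            # first stage
d = pulp.LpVariable.dicts("delay", [(s, i) for s in scen for i in acts], lowBound=0)

# objective: expected weighted residual delay (equal scenario probabilities)
m += pulp.lpSum((weight[i] / len(scen)) * d[s, i] for s in scen for i in acts)
m += pulp.lpSum(x[i] for i in acts) <= budget                                  # supplement budget
for s in scen:
    for i in acts:
        m += d[s, i] >= base_delay[s][i] - x[i]     # residual delay after the supplement

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: x[i].value() for i in acts}, pulp.value(m.objective))
```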
Abstract:
It has widely been agreed that the distorted price system is one of the causes of inefficient economic decisions in centrally planned economies. The paper investigates the possible effect of a price reform on the allocation of resources in a situation where micro-efficiency remains unchanged. Foreign trade and endogenously induced terms-of-trade changes are focal points in the multisectoral applied general equilibrium analysis. Special attention is paid to some methodological problems connected with the representation of foreign trade in such models. The adoption of Armington's assumption leads to an export demand function, which in turn raises the question of an optimal export structure different from the equilibrium one, an aspect so far neglected in the related literature. The results show that the applied model allows more flexible handling of the overspecialization problem than linear programming models do. It also becomes evident that the use of export demand functions brings unwanted terms-of-trade changes into the model, which should be avoided by a suitable reformulation of the model. The analysis also suggests that a price reform alone does not significantly increase global economic efficiency; the effect of an economic reform on micro-efficiency thus appears to be the more crucial factor. The author concludes by raising some rather general questions related to the foreign trade practice of small open economies.
Abstract:
Next-generation integrated wireless local area network (WLAN) and 3G cellular networks aim to take advantage of the roaming ability of a cellular network and the high data rates of a WLAN. To ensure successful implementation of an integrated network, many issues must be carefully addressed, including network architecture design, resource management, quality of service (QoS), call admission control (CAC) and mobility management.

This dissertation focuses on QoS provisioning, CAC, and network architecture design in the integration of WLANs and cellular networks. First, a new scheduling algorithm and a call admission control mechanism for the IEEE 802.11 WLAN are presented to support multimedia services with QoS provisioning. The proposed scheduling algorithm makes use of idle system time to reduce the average packet loss of real-time (RT) services. The admission control mechanism provides long-term transmission quality for both RT and non-real-time (NRT) services by guaranteeing the packet loss ratio for RT services and the throughput for NRT services.

Second, a joint CAC scheme is proposed to efficiently balance traffic load in the integrated environment. A channel searching and replacement algorithm (CSR) is developed to relieve traffic congestion in the cellular network by using idle channels in the WLAN. The CSR is optimized to minimize the system cost in terms of the blocking probability in the interworking environment. In particular, it is proved that there exists an optimal admission probability for passive handoffs that minimizes the total system cost, and a method for finding this probability is designed based on linear programming techniques.

Finally, a new integration architecture, Hybrid Coupling with Radio Access System (HCRAS), is proposed to lower the average cost of intersystem communication (IC) and the vertical handoff latency. An analytical model is presented to evaluate the performance of HCRAS in terms of the intersystem communication cost function and the handoff cost function. Based on this model, an algorithm is designed to determine the optimal route for each intersystem communication. In addition, a fast handoff algorithm is developed to reduce the vertical handoff latency.
Abstract:
A novel modeling approach is applied to karst hydrology. Long-standing problems in karst hydrology and solute transport are addressed using Lattice Boltzmann methods (LBMs). These methods contrast with other modeling approaches that have been applied to karst hydrology. The motivation of this dissertation is to develop new computational models for solving ground water hydraulics and transport problems in karst aquifers, which are widespread around the globe. This research tests the viability of the LBM as a robust alternative numerical technique for solving large-scale hydrological problems. The LB models applied in this research are briefly reviewed and there is a discussion of implementation issues. The dissertation focuses on testing the LB models. The LBM is tested for two different types of inlet boundary conditions for solute transport in finite and effectively semi-infinite domains. The LBM solutions are verified against analytical solutions. Zero-diffusion transport and Taylor dispersion in slits are also simulated and compared against analytical solutions. These results demonstrate the LBM's flexibility as a solute transport solver. The LBM is applied to simulate solute transport and fluid flow in porous media traversed by larger conduits. A LBM-based macroscopic flow solver (Darcy's law-based) is linked with an anisotropic dispersion solver. Spatial breakthrough curves in one and two dimensions are fitted against the available analytical solutions. This provides a steady flow model with capabilities routinely found in ground water flow and transport models (e.g., the combination of MODFLOW and MT3D). However, the new LBM-based model retains the ability to solve inertial flows that are characteristic of karst aquifer conduits. Transient flows in a confined aquifer are solved using two different LBM approaches. The analogy between Fick's second law (diffusion equation) and the transient ground water flow equation is used to solve the transient head distribution. An altered-velocity flow solver with source/sink term is applied to simulate a drawdown curve. Hydraulic parameters such as transmissivity and storage coefficient are linked with LB parameters. These capabilities complete the LBM's effective treatment of the types of processes that are simulated by standard ground water models. The LB model is verified against field data for drawdown in a confined aquifer.
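A toy one-dimensional sketch of the collide-and-stream cycle mentioned above: a D1Q2 pure-diffusion lattice in lattice units, assuming the usual relation D = tau - 1/2 for this lattice. It is only an illustration of the LBM mechanics, not the dissertation's Darcy-law or anisotropic-dispersion solvers.

```python
import numpy as np

# D1Q2 lattice-Boltzmann diffusion sketch in lattice units (dx = dt = 1),
# assuming the diffusion coefficient D = tau - 0.5 for this lattice.
nx, tau, steps = 200, 0.8, 500
f = np.zeros((2, nx))          # f[0]: right-moving population, f[1]: left-moving
f[:, nx // 2] = 0.5            # unit mass pulse at the centre of the domain

for _ in range(steps):
    C = f.sum(axis=0)          # macroscopic concentration
    feq = 0.5 * C              # isotropic equilibrium for pure diffusion
    f += (feq - f) / tau       # BGK collision step
    f[0] = np.roll(f[0], 1)    # streaming: right-movers shift one cell right
    f[1] = np.roll(f[1], -1)   # streaming: left-movers shift one cell left

C = f.sum(axis=0)
x = np.arange(nx)
var = ((x - nx // 2) ** 2 * C).sum() / C.sum()
# for a point source, the analytical variance grows as 2*D*t
print("simulated variance:", var, " analytical 2*D*t:", 2 * (tau - 0.5) * steps)
```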
Abstract:
This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model used in an MTO operation, where the production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and mostly addressed with heuristic solutions. The first phase of the study focuses on a formal definition of the problem. Mathematical programming techniques are applied to model the problem in terms of its objective, decision variables, and constraints. A commercial solver, CPLEX, is applied to solve the resulting mixed-integer linear programming model on small instances to validate the mathematical formulation. The computational results show that solving problems of industrial size with a commercial solver is not practical. The second phase of the study focuses on developing an effective solution approach to the large-scale problem. The proposed solution approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems. Using computer simulation as the tool, an experiment is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted-most-profit rule performs best. The shifting bottleneck and the earliest operation finish time are both the best scheduling rules. For lot sizing, the proposed minimum cost increase heuristic, based on the Dixon-Silver method, performs best when the demand-to-capacity ratio at the bottleneck machine is high, while the proposed minimum cost heuristic, based on the Wagner-Whitin algorithm, is the best lot-sizing heuristic for shops with a low demand-to-capacity ratio. The proposed heuristic is applied to an industrial case to further evaluate its performance; the result shows it can improve total profit by an average of 16.62%. This research contributes to the production planning research community a complete mathematical definition of the problem and an effective solution approach for solving it at industry scale.
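A deliberately small sketch of the order-selection side of such a MILP, with hypothetical orders and capacities (lot sizing and detailed job-shop scheduling are omitted). It illustrates the mixed-integer structure the thesis validates with CPLEX, here solved with PuLP's bundled CBC instead.

```python
import pulp

# order -> (profit, processing hours, due period); all values assumed for illustration
orders = {"A": (40, 3, 2), "B": (25, 2, 1), "C": (55, 4, 3), "D": (30, 2, 2)}
capacity = {1: 4, 2: 4, 3: 4}            # assumed hours available per period

m = pulp.LpProblem("order_selection", pulp.LpMaximize)
x = {(o, t): pulp.LpVariable(f"x_{o}_{t}", cat="Binary")
     for o in orders for t in capacity if t <= orders[o][2]}   # produce no later than the due period

m += pulp.lpSum(orders[o][0] * x[o, t] for (o, t) in x)        # total profit of accepted orders
for o in orders:
    m += pulp.lpSum(x[o, t] for t in capacity if (o, t) in x) <= 1     # accept each order at most once
for t in capacity:
    m += pulp.lpSum(orders[o][1] * x[o, t] for o in orders if (o, t) in x) <= capacity[t]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("accepted:", sorted(k for k, v in x.items() if v.value() and v.value() > 0.5))
```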
Abstract:
This work presents a new model for the Heterogeneous p-median Problem (HPM), proposed to recover the hidden category structures present in data provided by a sorting task procedure, a popular approach to understanding heterogeneous individuals' perceptions of products and brands. The new model is named the Penalty-free Heterogeneous p-median Problem (PFHPM), a single-objective version of the original HPM. It also eliminates the HPM's main parameter, the penalty factor, which weights the terms of the objective function; adjusting this parameter controls how the model recovers the hidden category structures in the data and requires broad knowledge of the problem. In addition, two complementary formulations for the PFHPM are presented, both mixed integer linear programming problems, from which lower bounds were obtained for the PFHPM. These values were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. The algorithm provided good-quality solutions for the PFHPM, solving artificially generated instances from a Monte Carlo simulation as well as real data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm, the PFHPM, can recover the original category structures underlying heterogeneous individuals' perceptions more accurately than the original model and algorithm, the HPM. Finally, an illustrative application of the PFHPM is presented, together with some insights into new possibilities for it, such as extending the new model to fuzzy environments.
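For orientation, the classical p-median problem underlying the HPM and PFHPM can be written as a small mixed-integer program. The sketch below uses hypothetical distances and is the textbook formulation, not the heterogeneous or penalty-free variant itself.

```python
import pulp

# Classical p-median sketch: open p medians and assign every point to one of
# them so that the total assignment distance is minimal (hypothetical data).
d = [[0, 2, 7, 5],
     [2, 0, 4, 6],
     [7, 4, 0, 3],
     [5, 6, 3, 0]]
n, p = len(d), 2

m = pulp.LpProblem("p_median", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", range(n), cat="Binary")               # 1 if j is a median
x = pulp.LpVariable.dicts("assign",
                          [(i, j) for i in range(n) for j in range(n)], cat="Binary")

m += pulp.lpSum(d[i][j] * x[i, j] for i in range(n) for j in range(n))  # total distance
m += pulp.lpSum(y[j] for j in range(n)) == p                            # exactly p medians
for i in range(n):
    m += pulp.lpSum(x[i, j] for j in range(n)) == 1                     # each point assigned once
    for j in range(n):
        m += x[i, j] <= y[j]                                            # only to open medians

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("medians:", [j for j in range(n) if y[j].value() > 0.5])
```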
Abstract:
Cooperative communication has gained much interest due to its ability to exploit the broadcast nature of the wireless medium to mitigate multipath fading. There has been a considerable amount of research on how cooperative transmission can improve the performance of a network, focusing on physical layer issues. In the past few years researchers have begun to consider cooperative transmission in routing, and there has been growing interest in designing and evaluating cooperative routing protocols. Most existing cooperative routing algorithms are designed to reduce energy consumption; however, packet collision minimization using cooperative routing has not yet been addressed. This dissertation presents an optimization framework to minimize collision probability using cooperative routing in wireless sensor networks. More specifically, we develop a mathematical model and formulate the problem as a large-scale Mixed Integer Non-Linear Programming problem. We also propose a solution based on the branch and bound algorithm augmented with search space reduction (branch and bound space reduction). The proposed strategy builds up the optimal routes from each source to the sink node by providing the best set of hops in each route, the best set of relays, and the optimal power allocation for the cooperative transmission links. To reduce the computational complexity, we propose two near-optimal cooperative routing algorithms. In the first, we solve the problem by decoupling the optimal power allocation scheme from optimal route selection; the problem is therefore formulated as an Integer Non-Linear Programming problem, which is solved using the branch and bound space reduction method. In the second, the cooperative routing problem is solved by decoupling the transmission power and the relay node selection from the route selection, and after solving the routing problem the power allocation is applied on the selected route. Simulation results show the algorithms can significantly reduce the collision probability compared with existing cooperative routing schemes.
Abstract:
I explore and analyze the problem of finding socially optimal capital requirements for financial institutions, considering two distinct channels of contagion: direct exposures among the institutions, as represented by a network, and fire-sale externalities, which reflect the negative price impact of massive liquidation of assets. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults among the interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker considers this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns.
In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show the optimal capital requirements can be found by solving a stochastic mixed integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be efficiently executed. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing it with lower bounds based on a dual approach. I also show that the iterative algorithm converges to the optimal solution.
Finally, I incorporate fire sales externalities into the model. In particular, I am able to extend the analysis of systemic risk and the optimal capital requirements with a single illiquid asset to a model with multiple illiquid assets. The model with multiple illiquid assets incorporates liquidation rules used by the banks. I provide an optimization formulation whose solution provides the equilibrium payments for a given liquidation rule.
I further show that the socially optimal capital problem under the "socially optimal liquidation" and prioritized liquidation rules can be formulated as a convex problem and a convex mixed integer problem, respectively. Finally, I illustrate the results of the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.
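A stripped-down sketch of the kind of scenario-based (stochastic) linear program described above, with hypothetical loss scenarios and no network or fire-sale structure: choose the cheapest capital levels such that the expected total shortfall across scenarios stays within a tolerance.

```python
import pulp

banks = range(3)
scen = range(4)                                       # assumed equally likely loss scenarios
loss = [[2, 0, 1], [0, 3, 1], [1, 1, 4], [2, 2, 0]]   # loss[s][i], assumed values
capital_cost = [1.0, 1.2, 0.8]                        # assumed marginal cost of holding capital
shortfall_budget = 0.5                                # tolerated expected total shortfall

m = pulp.LpProblem("capital_requirements", pulp.LpMinimize)
k = pulp.LpVariable.dicts("capital", banks, lowBound=0)
s_var = pulp.LpVariable.dicts("shortfall",
                              [(s, i) for s in scen for i in banks], lowBound=0)

m += pulp.lpSum(capital_cost[i] * k[i] for i in banks)        # total cost of required capital
for s in scen:
    for i in banks:
        m += s_var[s, i] >= loss[s][i] - k[i]                 # uncovered loss in scenario s
m += pulp.lpSum((1.0 / len(scen)) * s_var[s, i]
                for s in scen for i in banks) <= shortfall_budget

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: k[i].value() for i in banks})
```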
Abstract:
This paper compares two linear programming (LP) models for shift scheduling in services where homogeneously skilled employees are available at limited times. Although both models are based on set covering approaches, one explicitly matches employees to shifts, while the other imposes this matching implicitly. Each model is used in three forms (one with complete meal break placement flexibility, another with very limited flexibility, and a third without meal breaks) to provide initial schedules to a completion/improvement heuristic. The term completion/improvement heuristic describes a construction/improvement heuristic operating on a starting schedule. On 80 test problems varying widely in scheduling flexibility, employee staffing requirements, and employee availability characteristics, all six LP-based procedures generated lower-cost schedules than a comparison from-scratch construction/improvement heuristic. This heuristic, which perpetually maintains an explicit matching of employees to shifts, consists of three phases which add, drop, and modify shifts. In terms of schedule cost, schedule generation time, and model size, the procedures based on the implicit model performed better, as a group, than those based on the explicit model. The LP model with complete break placement flexibility and implicit matching of employees to shifts generated schedules costing 6.7% less than those developed by the from-scratch heuristic.
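A minimal set-covering sketch of the shift scheduling idea, with hypothetical shifts, periods, and staffing requirements and with meal breaks omitted: decide how many employees to assign to each feasible shift so that every period's requirement is covered at minimum cost. This is the generic covering formulation, not the paper's implicit or explicit matching models.

```python
import pulp

periods = range(6)                                  # assumed planning periods
requirement = [2, 3, 4, 4, 3, 2]                    # assumed staff needed per period
# shift -> (cost per employee, set of periods it covers); all values assumed
shifts = {
    "early": (3.0, {0, 1, 2, 3}),
    "late":  (3.0, {2, 3, 4, 5}),
    "short": (2.0, {1, 2, 3}),
}

m = pulp.LpProblem("shift_scheduling", pulp.LpMinimize)
n = pulp.LpVariable.dicts("staff", shifts, lowBound=0, cat="Integer")

m += pulp.lpSum(shifts[s][0] * n[s] for s in shifts)            # total labour cost
for t in periods:
    m += pulp.lpSum(n[s] for s in shifts if t in shifts[s][1]) >= requirement[t]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: n[s].value() for s in shifts})
```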