35 results for optimal power flow successive linear programming
Abstract:
We show that if performance measures in a stochastic scheduling problem satisfy a set of so-called partial conservation laws (PCL), which extend previously studied generalized conservation laws (GCL), then the problem is solved optimally by a priority-index policy for an appropriate range of linear performance objectives, where the optimal indices are computed by a one-pass adaptive-greedy algorithm based on Klimov's. We further apply this framework to investigate the indexability property of restless bandits introduced by Whittle, obtaining the following results: (1) we identify a class of restless bandits (PCL-indexable) which are indexable; membership in this class is tested through a single run of the adaptive-greedy algorithm, which also computes the Whittle indices when the test is positive; this provides a tractable sufficient condition for indexability; (2) we further identify the class of GCL-indexable bandits, which includes classical bandits, having the property that they are indexable under any linear reward objective. The analysis is based on the so-called achievable region method, as the results follow from new linear programming formulations for the problems investigated.
Abstract:
The network revenue management problem can be formulated as a stochastic dynamic program (DP) whose exact solution, the "optimal" value V*, is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (the DLP and PHLP bounds, respectively). These bounds are used to provide control values that can be used in practice to make accept/deny decisions for booking requests. Recently Adelman [1] and Topaloglu [18] have proposed alternate upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest, as it appears from empirical studies and practical experience that models that give tighter bounds also lead to better controls (better in the sense that they lead to more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation) and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound and the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly coupled dynamic programming.
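The DLP model mentioned above has a standard form: maximize expected revenue over booking limits subject to flight-leg capacities and mean demands. Below is a minimal sketch of that formulation using scipy.optimize.linprog; the fares, leg-incidence matrix and demand/capacity figures are invented for illustration and are not data from the paper. In practice, the dual prices of the capacity constraints are what would typically be turned into bid-price controls.

```python
# Minimal sketch of the deterministic LP (DLP) bound for network revenue
# management, under assumed illustrative data (not from the paper).
# Decision variable y_j: expected number of units of itinerary-fare product j to sell.
#   max  sum_j fare_j * y_j
#   s.t. A y <= capacity          (leg capacities)
#        0 <= y_j <= E[demand_j]  (mean demand)
import numpy as np
from scipy.optimize import linprog

fares = np.array([300.0, 150.0, 220.0])         # revenue per product (assumed)
expected_demand = np.array([40.0, 60.0, 25.0])  # mean demand per product (assumed)
# A[i, j] = 1 if product j uses flight leg i
A = np.array([[1, 1, 0],
              [0, 1, 1]])
capacity = np.array([80.0, 70.0])               # seats per leg (assumed)

res = linprog(c=-fares,                          # linprog minimizes, so negate revenue
              A_ub=A, b_ub=capacity,
              bounds=[(0, d) for d in expected_demand],
              method="highs")
print("DLP upper bound on expected revenue:", -res.fun)
print("Booking limits y*:", res.x)
```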
Abstract:
Experimental results are obtained for a new controller able to support bidirectional power flow in a full-bridge rectifier with a boost-like topology. The controller is computed using port-Hamiltonian passivity techniques for a suitable generalized state-space averaging truncation of the system, which transforms the control objectives, namely a constant dc-bus output voltage and unity input power factor, into a regulation problem. Simulation results for the full system confirm the essential correctness of the simplifications introduced to obtain the controller, although some small experimental discrepancies point to several aspects that need further improvement.
Abstract:
In this article, a new technique for grooming low-speed traffic demands into high-speed optical routes is proposed. This enhancement allows a transparent wavelength-routing switch (WRS) to aggregate traffic en route over existing optical routes without incurring expensive optical-electrical-optical (OEO) conversions. This implies that: a) an optical route may be considered as having more than one ingress node (all in line) and b) traffic demands can partially use optical routes to reach their destination. The proposed optical routes are named "lighttours", since the traffic originating from different sources can be forwarded together in a single optical route, i.e., as taking a "tour" over different sources towards the same destination. The possibility of creating lighttours is the consequence of a novel WRS architecture proposed in this article, named "enhanced grooming" (G+). The ability to groom more traffic in the middle of a lighttour is achieved with the support of a simple optical device named lambda-monitor (previously introduced in the RingO project). In this article, we present the new WRS architecture and its advantages. To compare the advantages of lighttours with respect to classical lightpaths, an integer linear programming (ILP) model is proposed for the well-known multilayer problem: traffic grooming, routing and wavelength assignment. The ILP model may be used for several objectives. However, this article focuses on two objectives: maximizing the network throughput, and minimizing the number of OEO conversions used. Experiments show that G+ can route all the traffic using only half of the total OEO conversions needed by classical grooming. A heuristic is also proposed, aiming at achieving near-optimal results in polynomial time.
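To make the kind of ILP mentioned above concrete, here is a toy path-based grooming model in Python/PuLP: binary variables decide which low-speed demands are accepted on pre-computed candidate routes so that carried traffic is maximized without exceeding per-link capacity. The topology, demand sizes, routes and capacities are illustrative assumptions, not the article's formulation, which additionally handles wavelength assignment and OEO placement.

```python
# Toy path-based ILP in the spirit of the grooming model described above:
# choose which low-speed demands to accept on pre-computed candidate routes
# so that network throughput is maximized, respecting per-link capacity.
# All data (links, routes, capacities) are illustrative assumptions.
import pulp

demands = {"d1": 4, "d2": 6, "d3": 3}             # demand size in low-speed units
routes = {"d1": ["A-B", "B-C"],                   # candidate route per demand
          "d2": ["A-B"],
          "d3": ["B-C"]}
link_capacity = {"A-B": 8, "B-C": 8}              # capacity per link (one wavelength)

prob = pulp.LpProblem("grooming_throughput", pulp.LpMaximize)
accept = {d: pulp.LpVariable(f"accept_{d}", cat="Binary") for d in demands}

# Objective: total carried traffic.
prob += pulp.lpSum(demands[d] * accept[d] for d in demands)

# Capacity constraint on every link.
for link, cap in link_capacity.items():
    prob += pulp.lpSum(demands[d] * accept[d]
                       for d in demands if link in routes[d]) <= cap

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({d: int(accept[d].value()) for d in demands})
print("Throughput:", pulp.value(prob.objective))
```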
Abstract:
We study markets where the characteristics or decisions of certain agents are relevant but not known to their trading partners. Assuming exclusive transactions, the environment is described as a continuum economy with indivisible commodities. We characterize incentive efficient allocations as solutions to linear programming problems and appeal to duality theory to demonstrate the generic existence of external effects in these markets. Because under certain conditions such effects may generate non-convexities, randomization emerges as a theoretic possibility. In characterizing market equilibria we show that, consistent with the personalized nature of transactions, prices are generally non-linear in the underlying consumption. On the other hand, external effects may have critical implications for market efficiency. With adverse selection, in fact, cross-subsidization across agents with different private information may be necessary for optimality, and so the market need not even achieve an incentive efficient allocation. In contrast, for the case of a single commodity, we find that when informational asymmetries arise after the trading period (e.g. moral hazard; ex post hidden types) external effects are fully internalized at a market equilibrium.
Abstract:
The objective of this project is to develop an optimization algorithm that, by means of a least-squares data fit, extracts the equivalent-circuit parameters of the theoretical model of an FBAR resonator from measured S-parameters. To carry out this work, the necessary theory of FBAR resonators is first presented, starting with their operation and structure and paying special attention to the modelling of these resonators through the Mason, Butterworth Van-Dyke and Modified BVD models. Secondly, the theory of optimization and nonlinear programming is reviewed. Once the theory has been presented, the implemented algorithm is described. This algorithm uses a multi-step strategy that speeds up the extraction of the resonator parameters.
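As a rough illustration of the least-squares extraction step described in this project, the sketch below fits the four Butterworth Van-Dyke (BVD) parameters (Rm, Lm, Cm, C0) to an impedance curve with scipy.optimize.least_squares. The frequency sweep, the "measured" data (generated synthetically here) and the initial guess are assumptions; a real workflow would first convert the measured S-parameters to impedance and would typically use the Modified BVD model together with the multi-step strategy the project describes.

```python
# Minimal sketch of least-squares extraction of Butterworth-Van Dyke (BVD)
# parameters (Rm, Lm, Cm, C0) from an impedance curve.  The "measurement" is
# synthetic and the starting point is an assumption.
import numpy as np
from scipy.optimize import least_squares

def bvd_impedance(f, Rm, Lm, Cm, C0):
    w = 2 * np.pi * f
    Zm = Rm + 1j * w * Lm + 1 / (1j * w * Cm)    # motional branch
    Z0 = 1 / (1j * w * C0)                       # static (plate) capacitance branch
    return Zm * Z0 / (Zm + Z0)

f = np.linspace(1.4e9, 1.6e9, 801)               # frequency sweep, Hz (assumed)
true = (1.5, 80e-9, 0.14e-12, 1.1e-12)           # Rm [ohm], Lm [H], Cm [F], C0 [F]
z_meas = bvd_impedance(f, *true)                 # synthetic "measurement"

def residuals(p):
    z = bvd_impedance(f, *p)
    return np.concatenate([z.real - z_meas.real, z.imag - z_meas.imag])

# Rough initial guess near the expected resonance (in practice taken from the data).
x0 = np.array([1.0, 75e-9, 0.15e-12, 1.0e-12])
fit = least_squares(residuals, x0, x_scale=np.abs(x0))
print("Extracted Rm, Lm, Cm, C0:", fit.x)
```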
Abstract:
Over the last few years, there has been a devolutionary tendency in many developed and developing countries. In this article we propose a methodology to assess whether the efficiency benefits derived from transfers of powers from higher to municipal levels of government (the "economic dividend" of devolution) might increase over time. This methodology is based on linear programming approaches to efficiency measurement. We provide an application to Spanish municipalities, which have had to adapt both to the European Stability and Growth Pact and to domestic regulation requiring local governments to balance their budgets. Results indicate that efficiency gains from enhanced decentralization have increased over time. However, the way in which these gains accrue differs across municipalities: in some cases technical change is the main component, whereas in others catching up dominates.
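The linear-programming efficiency measurement referred to above is typically a data envelopment analysis (DEA) model. The sketch below solves the input-oriented, constant-returns-to-scale envelopment LP for each unit with scipy.optimize.linprog; the inputs and outputs of the hypothetical municipalities are invented for illustration, and the article's decomposition into technical change and catching up would additionally require solving such LPs against frontiers from different periods.

```python
# Minimal sketch of input-oriented, constant-returns-to-scale DEA, the kind of
# linear-programming efficiency measurement referred to above.  Inputs and
# outputs for a handful of hypothetical municipalities are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20.0, 30.0, 25.0, 40.0],      # input 1 (e.g. spending), one column per unit
              [10.0, 12.0,  8.0, 15.0]])     # input 2 (e.g. staff)
Y = np.array([[100.0, 120.0, 95.0, 130.0]])  # single output (e.g. services delivered)

n_units = X.shape[1]

def efficiency(o):
    """Efficiency of unit o: min theta such that some lambda-combination of peers
    uses at most theta * inputs_o while producing at least outputs_o."""
    c = np.r_[1.0, np.zeros(n_units)]                 # minimize theta
    A_in = np.c_[-X[:, o], X]                         # sum_j lam_j x_ij <= theta x_io
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]      # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n_units,
                  method="highs")
    return res.fun

for o in range(n_units):
    print(f"unit {o}: efficiency = {efficiency(o):.3f}")
```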
Abstract:
This paper introduces the approach of using TURF analysis to design a product line through a binary linear programming model. This improves the efficiency of the search for the solution to the problem compared to the algorithms that have been used to date. Furthermore, the proposed technique enables the model to be improved in order to overcome the main drawbacks presented by TURF analysis in practice.
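A common way to cast TURF-based product-line design as a binary linear program is as a maximum-reach model: pick exactly k products so that as many respondents as possible find at least one of them acceptable. The sketch below, using PuLP, follows that generic formulation; the acceptance data and the value of k are invented, and the paper's own model may differ in its constraints and refinements.

```python
# Hedged sketch of TURF ("total unduplicated reach") as a binary linear program:
# pick k products so that the number of reached respondents is maximal.
import pulp

accepts = {                      # accepts[respondent] = set of acceptable products (assumed)
    "r1": {"p1", "p3"},
    "r2": {"p2"},
    "r3": {"p1", "p2"},
    "r4": {"p3", "p4"},
    "r5": {"p4"},
}
products = {"p1", "p2", "p3", "p4"}
k = 2                            # size of the product line (assumed)

model = pulp.LpProblem("turf_reach", pulp.LpMaximize)
x = {p: pulp.LpVariable(f"pick_{p}", cat="Binary") for p in products}
z = {r: pulp.LpVariable(f"reach_{r}", cat="Binary") for r in accepts}

model += pulp.lpSum(z.values())                            # maximize unduplicated reach
model += pulp.lpSum(x.values()) == k                       # exactly k products in the line
for r, accept_set in accepts.items():
    model += z[r] <= pulp.lpSum(x[p] for p in accept_set)  # reached only via a picked product

model.solve(pulp.PULP_CBC_CMD(msg=False))
print("line:", [p for p in products if x[p].value() == 1])
print("reach:", int(pulp.value(model.objective)))
```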
Abstract:
This paper aims to estimate a translog stochastic frontier production function in the analysis of a panel of 150 mixed Catalan farms over the period 1989-1993, in order to measure and explain variation in technical inefficiency scores with a one-stage approach. The model uses gross value added as the aggregate output measure. Total employment, fixed capital, current assets, specific costs and overhead costs are introduced into the model as inputs. Stochastic frontier estimates are compared with those obtained using a linear programming method with a two-stage approach. The specification of the translog stochastic frontier model appears to be an appropriate representation of the data, technical change was rejected and the technical inefficiency effects were statistically significant. The mean technical efficiency in the period analyzed was estimated to be 64.0%. Farm inefficiency levels were found to be significantly (at the 5% level) and positively correlated with the number of economic size units.
Abstract:
Background: Optimization methods allow designing changes in a system so that specific goals are attained. These techniques are fundamental for metabolic engineering. However, they are not directly applicable for investigating the evolution of metabolic adaptation to environmental changes. Although biological systems have evolved by natural selection and result in well-adapted systems, we can hardly expect that actual metabolic processes are at the theoretical optimum that could result from an optimization analysis. More likely, natural systems are to be found in a feasible region compatible with global physiological requirements. Results: We first present a new method for globally optimizing nonlinear models of metabolic pathways that are based on the Generalized Mass Action (GMA) representation. The optimization task is posed as a nonconvex nonlinear programming (NLP) problem that is solved by an outer-approximation algorithm. This method relies on iteratively solving reduced NLP slave subproblems and mixed-integer linear programming (MILP) master problems, which provide valid upper and lower bounds, respectively, on the global solution to the original NLP. The capabilities of this method are illustrated through its application to the anaerobic fermentation pathway in Saccharomyces cerevisiae. We next introduce a method to identify the feasible parameter regions that allow a system to meet a set of physiological constraints that can be represented in mathematical terms through algebraic equations. This technique is based on applying the outer-approximation algorithm iteratively over a reduced search space in order to identify regions that contain feasible solutions to the problem and discard others in which no feasible solution exists. As an example, we characterize the feasible enzyme activity changes that are compatible with an appropriate adaptive response of the yeast Saccharomyces cerevisiae to heat shock. Conclusion: Our results show the utility of the suggested approach for investigating the evolution of adaptive responses to environmental changes. The proposed method can be used in other important applications such as the evaluation of parameter changes that are compatible with health and disease states.
Abstract:
Global warming mitigation has recently become a priority worldwide. A large body of literature dealing with energy-related problems has focused on reducing greenhouse gas emissions at an engineering scale. In contrast, the minimization of climate change at a wider macroeconomic level has so far received much less attention. We investigate here the issue of how to mitigate global warming by introducing changes in an economy. To this end, we make use of a systematic tool that combines three methods: linear programming, environmentally extended input-output models, and life cycle assessment principles. The problem of identifying key economic sectors that contribute significantly to global warming is posed in mathematical terms as a bi-criteria linear program that seeks to optimize simultaneously the total economic output and the total life cycle CO2 emissions. We have applied this approach to the European Union economy, finding that significant reductions in global warming potential can be attained by regulating specific economic sectors. Our tool is intended to aid policymakers in the design of more effective public policies for achieving the environmental and economic targets sought.
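One standard way to handle a bi-criteria linear program like the one described is the epsilon-constraint method: optimize one objective while bounding the other, then sweep the bound to trace the trade-off curve. The sketch below applies this idea to a three-sector Leontief input-output toy economy with scipy.optimize.linprog; the technical coefficients, emission intensities and demand floors are invented for illustration and are far simpler than the environmentally extended EU model used in the paper.

```python
# Illustrative epsilon-constraint treatment of a bi-criteria LP: maximize total
# sectoral output of a tiny Leontief input-output economy while capping CO2
# emissions.  The matrix A, intensities e and bounds are invented.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.1, 0.2, 0.0],       # inter-industry technical coefficients
              [0.3, 0.1, 0.2],
              [0.1, 0.3, 0.2]])
e = np.array([0.8, 0.3, 0.5])        # tonnes CO2 per unit output of each sector
y_min = np.array([10.0, 20.0, 15.0]) # minimum final demand to be satisfied
I = np.eye(3)

def max_output(co2_cap):
    # max 1'x  s.t.  (I - A) x >= y_min,  e'x <= co2_cap,  x >= 0
    res = linprog(c=-np.ones(3),
                  A_ub=np.vstack([-(I - A), e.reshape(1, -1)]),
                  b_ub=np.r_[-y_min, co2_cap],
                  bounds=[(0, None)] * 3,
                  method="highs")
    return None if not res.success else -res.fun

for cap in (60.0, 80.0, 100.0):      # sweep the emission cap to trace the trade-off
    print(f"CO2 cap {cap}: max total output = {max_output(cap)}")
```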
Abstract:
Young economics students tend to perceive that practising with exercises is all they need to know. This perception can be changed through linear programming, since it brings theory and practice together and, at the same time, improves students' ability to model economic situations while emphasizing the use of mathematics as an effective tool for improving their own activities.
Abstract:
Wavelength division multiplexing (WDM) networks have been adopted as a near-future solution for the broadband Internet. In previous work we proposed a new architecture, named enhanced grooming (G+), that extends the capabilities of traditional optical routes (lightpaths). In this paper, we compare the operational expenditures incurred by routing a set of demands using lightpaths with those of lighttours. The comparison is done by solving an integer linear programming (ILP) problem based on a path formulation. Results show that, under the assumption of single-hop routing, operational cost can be reduced by almost 15% with our architecture. In multi-hop routing, operational cost is reduced by 7.1%, and at the same time the ratio of operational cost to number of optical-electro-optical conversions is reduced for our architecture. This means that ISPs could provide the same satisfaction in terms of delay to the end-user with a lower investment in the network architecture.
Abstract:
In the context of autonomous sensors powered by small-size photovoltaic (PV) panels, this work analyses how the efficiency of DC/DC-converter-based power processing circuits can be improved by an appropriate selection of the inductor current that transfers the energy from the PV panel to a storage unit. Each component of power losses (fixed, conduction and switching losses) involved in the DC/DC converter specifically depends on the average inductor current so that there is an optimal value of this current that causes minimal losses and, hence, maximum efficiency. Such an idea has been tested experimentally using two commercial DC/DC converters whose average inductor current is adjustable. Experimental results show that the efficiency can be improved up to 12% by selecting an optimal value of that current, which is around 300-350 mA for such DC/DC converters.
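As a back-of-the-envelope illustration of why an interior optimum exists, the sketch below evaluates an assumed loss model in which conduction losses (referred to the power drawn from the panel) grow with the average inductor current while switching losses shrink with it, because fewer switching cycles are needed at higher current. All coefficients are invented, chosen only so that the optimum lands in the few-hundred-milliamp range; they are not measurements from the paper.

```python
# Numeric sketch of the loss trade-off described above, under an assumed loss
# model: total losses = fixed + conduction + switching, each depending on the
# average inductor current, so the total has a minimum at an intermediate value.
import numpy as np

P_PV = 0.2        # power drawn from the PV panel, W (assumed)
V_PV = 2.0        # panel voltage, V (assumed)
R_LOOP = 0.8      # equivalent resistance of the conduction path, ohm (assumed)
P_FIX = 1e-3      # fixed (quiescent) losses, W (assumed)
K_SW = 8e-3       # switching-loss coefficient, W*A (assumed)

i_avg = np.linspace(0.05, 1.0, 500)        # candidate average inductor currents, A
p_cond = (P_PV / V_PV) * R_LOOP * i_avg    # conduction losses grow with current
p_sw = K_SW / i_avg                        # fewer switching cycles at high current
p_loss = P_FIX + p_cond + p_sw

best = np.argmin(p_loss)
eff = P_PV / (P_PV + p_loss)
print(f"optimal average inductor current ~ {i_avg[best]*1e3:.0f} mA, "
      f"peak efficiency ~ {100*eff[best]:.1f} %")
```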
Abstract:
The work in this paper deals with the development of momentum and thermal boundary layers when a power-law fluid flows over a flat plate. At the plate we impose either a constant temperature, a constant heat flux or a Newton cooling condition. The problem is analysed using similarity solutions, integral momentum and energy equations, and an approximation technique which is a form of the Heat Balance Integral Method. The fluid properties are assumed to be independent of temperature, hence the momentum equation uncouples from the thermal problem. We first derive the similarity equations for the velocity and present exact solutions for the case where the power-law index n = 2. The similarity solutions are used to validate the new approximation method. This new technique is then applied to the thermal boundary layer, where a similarity solution can only be obtained for the case n = 1.
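For the Newtonian special case n = 1, the momentum similarity problem reduces to the classical Blasius equation f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f' -> 1 as the similarity variable grows, which is convenient for checking a numerical setup. The sketch below solves it by shooting on the wall value f''(0); the power-law (n != 1) and thermal problems treated in the paper lead to different equations, but the same numerical idea carries over.

```python
# Minimal shooting-method sketch for the Newtonian (n = 1) momentum boundary
# layer: solve the Blasius equation f''' + 0.5*f*f'' = 0 with f(0) = f'(0) = 0
# and f'(inf) = 1 by adjusting the wall curvature f''(0).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0, eta_max=10.0):
    sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, fpp0],
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0          # mismatch in f' at eta_max vs the target 1

fpp0 = brentq(shoot, 0.1, 1.0)         # find the wall shear parameter f''(0)
print(f"f''(0) = {fpp0:.4f}")          # classical value is about 0.332
```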