994 results for Chance-constrained programming


Relevance: 100.00%

Publisher:

Abstract:

The deterministic Optimal Reactive Power Dispatch problem has been extensively studied under the assumption that the power demand and the availability of shunt reactive power compensators are known and fixed. Given this background, a two-stage stochastic optimization model is first formulated under the presumption that the load demand can be modeled as specified random parameters. A second, chance-constrained stochastic model is then presented that considers uncertainty in both the demand and the equivalent availability of shunt reactive power compensators. Simulations on six-bus and 30-bus test systems are used to illustrate the validity and essential features of the proposed models. These simulations show that the proposed models can warn the power system operator about a deficit of reactive power in the power system, and suggest that shunt reactive sources must be dispatched to hedge against the unavailability of any reactive source. © 2012 IEEE.
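
As a rough illustration of the kind of requirement such a chance-constrained model imposes (the notation below is generic and not taken from the paper), the reactive power balance can be asked to hold with a prescribed confidence level:

\[
\Pr\Big( \sum_{g} Q_g + \sum_{c} u_c\, \tilde{Q}_c \;\ge\; \tilde{Q}_D \Big) \;\ge\; 1 - \alpha ,
\]

where \(Q_g\) is the reactive output of generator \(g\), \(u_c\) the dispatch decision for shunt compensator \(c\), \(\tilde{Q}_c\) its uncertain available capacity, \(\tilde{Q}_D\) the random reactive demand, and \(1-\alpha\) the required reliability.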

Relevance: 100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Publisher:

Abstract:

This paper presents a chance-constrained linear programming formulation for the operation of a multipurpose reservoir. The release policy is defined by a chance constraint that the probability of the irrigation release in any period equalling or exceeding the irrigation demand is at least equal to a specified value P (called the reliability level). The model determines the maximum annual hydropower produced while meeting the irrigation demand at the specified reliability level. The model considers variation in the reservoir water level elevation and also the operating range within which the turbine operates. A linear approximation of the nonlinear power production function is used, and the solution is obtained within a specified tolerance limit. The inflow into the reservoir is considered random. The chance constraint is converted into its deterministic equivalent using a linear decision rule and the inflow probability distribution. The model application is demonstrated through a case study.
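
To make the conversion step concrete (in illustrative notation, not the paper's): if a release decision \(X_t\) is supplemented by a random inflow \(I_t\) with known cumulative distribution \(F_{I_t}\), a reliability constraint of the form

\[
\Pr\big( X_t + I_t \ge D_t \big) \;\ge\; P
\]

has the deterministic equivalent

\[
X_t \;\ge\; D_t - F_{I_t}^{-1}(1 - P),
\]

i.e. the decision must cover the demand \(D_t\) net of the inflow value that is exceeded with probability \(P\). A linear decision rule plays the same role when the release is expressed as a linear function of storage and inflow.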

Relevance: 100.00%

Publisher:

Abstract:

This paper studies the problem of constructing robust classifiers when the training data is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP as a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations were exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Due to this efficient modeling of uncertainty, the resulting classifiers achieve higher classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state of the art in many cases.
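
The chance constraint in question can be sketched in generic maximum-margin notation (not the paper's exact formulation): each uncertain training point \((X_i, y_i)\) must be separated with probability at least \(1-\epsilon\),

\[
\Pr\big( y_i\,(w^\top X_i + b) \ge 1 - \xi_i \big) \;\ge\; 1 - \epsilon, \qquad i = 1,\dots,n .
\]

Bounding the violation probability with a Bernstein-type inequality, which uses the support together with first- and second-moment information about \(X_i\), turns each such constraint into a deterministic second-order cone constraint in \((w, b, \xi)\); a Chebyshev bound would use only the mean and covariance and is generally looser.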

Relevance: 100.00%

Publisher:

Abstract:

This paper presents a mixed-integer quadratically-constrained programming (MIQCP) model to solve the distribution system expansion planning (DSEP) problem. The DSEP model considers the construction/reinforcement of substations, the construction/reconductoring of circuits, the allocation of fixed capacitor banks and the modification of the radial topology. As the DSEP problem is a very complex mixed-integer non-linear programming problem, it is convenient to reformulate it as an MIQCP problem; it is demonstrated that the proposed formulation represents the steady-state operation of a radial distribution system. The proposed MIQCP model is a convex formulation, which allows the optimal solution to be found using standard optimization solvers. Test systems of 23 and 54 nodes and one real distribution system of 136 nodes were used to show the efficiency of the proposed model in comparison with other DSEP models available in the specialized literature. (C) 2014 Elsevier Ltd. All rights reserved.
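
For orientation only (the paper's exact constraint set may differ), convex quadratically-constrained models of radial networks typically couple the power flow, current and voltage on each branch \(ij\) through a constraint of the form

\[
P_{ij}^{2} + Q_{ij}^{2} \;\le\; V_i^{\mathrm{sqr}}\, I_{ij}^{\mathrm{sqr}},
\]

where \(V_i^{\mathrm{sqr}}\) and \(I_{ij}^{\mathrm{sqr}}\) are auxiliary variables standing for the squared voltage and current magnitudes; written this way the constraint is a rotated second-order cone, which is what allows off-the-shelf convex MIQCP solvers to be used.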

Relevance: 100.00%

Publisher:

Abstract:

Some uncertainties, such as the stochastic input/output power of a plug-in electric vehicle due to its stochastic charging and discharging schedule, the output of a wind unit and of a photovoltaic generation source, volatile fuel prices and uncertain future load growth, together lead to risks in determining the optimal siting and sizing of distributed generators (DGs) in distribution systems. Given this background, a new method is presented under the chance-constrained programming (CCP) framework to handle these uncertainties in the optimal siting and sizing problem of DGs. First, a mathematical CCP model is developed with the minimization of the DG investment, operation and maintenance costs as well as the network loss cost as the objective, security limitations as constraints, and the siting and sizing of DGs as optimization variables. Then, a Monte Carlo simulation embedded genetic algorithm approach is developed to solve the CCP model. Finally, the IEEE 37-node test feeder is employed to verify the feasibility and effectiveness of the developed model and method. This work is supported by an Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) project on Intelligent Grids under the Energy Transformed Flagship, and a project from Jiangxi Power Company.
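
A minimal sketch of how a Monte Carlo estimate of a chance constraint can be embedded in a genetic-algorithm fitness evaluation is given below. The toy "network model", limits and function names are assumptions made purely for illustration and are not taken from the paper.

```python
# Sketch: Monte Carlo check of a chance constraint inside a GA fitness function.
import random

N_SAMPLES = 500          # Monte Carlo samples per candidate plan
CONFIDENCE = 0.95        # required probability of satisfying security limits
VOLTAGE_LIMITS = (0.95, 1.05)

def sample_uncertainty():
    """Draw one scenario of load, wind/PV output and EV charging (toy model)."""
    return {
        "load": random.gauss(1.0, 0.1),
        "renewable": random.uniform(0.0, 0.4),
        "ev": random.uniform(-0.1, 0.2),   # negative = discharging
    }

def voltage_ok(dg_sizes, scenario):
    """Toy surrogate for a power-flow security check (placeholder, not a real solver)."""
    net_injection = sum(dg_sizes) + scenario["renewable"] - scenario["load"] - scenario["ev"]
    voltage = 1.0 + 0.05 * net_injection
    lo, hi = VOLTAGE_LIMITS
    return lo <= voltage <= hi

def chance_constraint_satisfied(dg_sizes):
    """Estimate Pr(security limits hold) by Monte Carlo and compare to the target."""
    ok = sum(voltage_ok(dg_sizes, sample_uncertainty()) for _ in range(N_SAMPLES))
    return ok / N_SAMPLES >= CONFIDENCE

def fitness(dg_sizes, unit_cost=100.0, penalty=1e6):
    """GA fitness: investment-like cost plus a large penalty if the chance constraint fails."""
    cost = unit_cost * sum(dg_sizes)
    return cost if chance_constraint_satisfied(dg_sizes) else cost + penalty

if __name__ == "__main__":
    candidate = [0.3, 0.2, 0.1]   # DG sizes (p.u.) at three hypothetical nodes
    print("fitness:", fitness(candidate))
```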

Relevance: 100.00%

Publisher:

Abstract:

In the electricity market environment, load-serving entities (LSEs) inevitably face risks in purchasing electricity because a plethora of uncertainties are involved. To maximize profits and minimize risks, LSEs need to develop an optimal strategy for reasonably allocating the purchased electricity among different electricity markets, such as the spot market, the bilateral contract market and the options market. Because risks originate from uncertainties, an approach is presented to address the risk evaluation problem by the combined use of the lower partial moment and information entropy (LPME). The lower partial moment is used to measure the amount and probability of the loss, whereas the information entropy is used to represent the uncertainty of the loss. Electricity purchasing is a repeated procedure; therefore, the model presented represents a dynamic strategy. Under the chance-constrained programming framework, the developed optimization model minimizes the risk of the electricity purchasing portfolio across the different markets while requiring, at a specified confidence level, that the actual profit of the LSE concerned is not less than a given target. The particle swarm optimization (PSO) algorithm is then employed to solve the optimization model. Finally, a numerical example is used to illustrate the basic features of the developed model and method.
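
A rough sketch of how a combined lower-partial-moment/entropy risk measure could be estimated from simulated portfolio profits is given below; the moment order, target profit and binning scheme are illustrative assumptions, not the paper's definitions.

```python
# Sketch: lower partial moment and shortfall entropy estimated from profit samples.
import math

def lower_partial_moment(profits, target, order=2):
    """LPM_n(target) = E[ max(target - profit, 0)^n ], estimated from samples."""
    shortfalls = [max(target - p, 0.0) ** order for p in profits]
    return sum(shortfalls) / len(shortfalls)

def shortfall_entropy(profits, target, n_bins=10):
    """Shannon entropy of the empirical distribution of shortfalls below the target."""
    shortfalls = [target - p for p in profits if p < target]
    if not shortfalls:
        return 0.0
    lo, hi = min(shortfalls), max(shortfalls)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for s in shortfalls:
        counts[min(int((s - lo) / width), n_bins - 1)] += 1
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in probs)

if __name__ == "__main__":
    profits = [120, 95, 80, 150, 60, 110, 40, 130]   # simulated portfolio profits
    target = 100.0
    print("LPM(2):", lower_partial_moment(profits, target))
    print("entropy of shortfall:", shortfall_entropy(profits, target))
```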

Relevance: 100.00%

Publisher:

Abstract:

This paper develops a dynamic model for cost-effective selection of sites for restoring biodiversity when habitat quality develops over time and is uncertain. A safety-first decision criterion is used for ensuring a minimum level of habitat, and this is formulated in a chance-constrained programming framework. The theoretical results show: (i) inclusion of quality growth reduces the overall cost of achieving a future biodiversity target through relatively early establishment of habitats, but (ii) consideration of uncertainty in growth increases total cost and delays establishment; and (iii) cost-effective trading of habitat requires an exchange rate between sites that varies over time. An empirical application to the red-listed umbrella species, the white-backed woodpecker, shows that the total cost of achieving the habitat targets specified in the Swedish recovery plan is doubled if the target is to be achieved with high reliability, and that the equilibrium price on a habitat trading market differs considerably between different quality growth combinations. © 2013 Elsevier GmbH.
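
In generic notation (not the paper's), a safety-first criterion of this kind can be written as a chance constraint on the habitat stock available at the target date \(T\):

\[
\Pr\Big( \sum_{i} x_i\, \tilde{q}_i(T) \;\ge\; \bar{H} \Big) \;\ge\; \alpha,
\]

where \(x_i\) is the area restored at site \(i\), \(\tilde{q}_i(T)\) its uncertain habitat quality at time \(T\), \(\bar{H}\) the minimum habitat requirement and \(\alpha\) the required reliability. Under distributional assumptions on quality growth the constraint can be replaced by a deterministic equivalent involving the mean and variance of \(\tilde{q}_i(T)\), which is why earlier establishment (more time for growth) lowers cost while higher reliability (a larger safety margin) raises it.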

Relevance: 100.00%

Publisher:

Abstract:

The deployment of bioenergy technologies is a key part of UK and European renewable energy policy. A key barrier to the deployment of bioenergy technologies is the management of biomass supply chains, including the evaluation of suppliers and the contracting of biomass. In the undeveloped biomass-for-energy market, buyers of biomass face three major challenges during the development of new bioenergy projects: what characteristics a given supply of biomass will have; how to evaluate biomass suppliers; and which suppliers to contract with in order to provide a portfolio of suppliers that best satisfies the needs of the project and its stakeholder group whilst also satisfying crisp and non-crisp technological constraints. The problem description is taken from the situation faced by the industrial partner in this research, Express Energy Ltd. This research tackles these three areas separately and then combines them to form a decision framework to assist biomass buyers with the strategic sourcing of biomass: the BioSS framework. The BioSS framework consists of three modes which mirror the development stages of bioenergy projects: BioSS.2 for early-stage development, BioSS.3 for the financial close stage, and BioSS.Op for the operational phase of the project. BioSS is formed of a fuels library, a supplier evaluation module and an order allocation module; a Monte Carlo analysis module is also included to evaluate the accuracy of the recommended portfolios. In each mode, BioSS can recommend which suppliers should be contracted with and how much material should be purchased from each. The recommended blend should have chemical characteristics within the technological constraints of the conversion technology and should also best satisfy the stakeholder group. The fuels library is compiled from a wide variety of sources and contains around 100 unique descriptions of potential biomass sources that a developer may encounter. The library takes a broad data-collection approach, with the aim of allowing biomass characteristics to be estimated without expensive and time-consuming testing. The supplier evaluation part of BioSS uses a QFD-AHP method to assign importance weightings to 27 evaluating criteria. The evaluating criteria have been compiled from interviews with stakeholders and from policy and position documents, and the weightings have been assigned using a mixture of workshops and expert interviews. The weighted importance scores allow potential suppliers to better tailor their business offering and provide a robust framework for decision makers to better understand the requirements of the bioenergy project stakeholder groups. The order allocation part of BioSS uses a chance-constrained programming approach to allocate orders of material among potential suppliers based on the chemical characteristics and the preference scores of those suppliers. The optimisation program finds the allocation of orders to suppliers that gives the highest-performing portfolio in the eyes of the stakeholder group whilst also complying with the technological constraints. If the decision maker so requires, a technological constraint can be allowed to be breached by formulating it as a chance constraint. This allows a wider range of biomass sources to be procured and a greater overall performance to be realised than with crisp constraints or deterministic programming approaches. BioSS is demonstrated against two scenarios faced by UK bioenergy developers: the first is a large-scale combustion power project, the second a small-scale gasification project. BioSS is applied in each mode for both scenarios and is shown to adapt the solution to the stakeholder group's importance weightings and to the different constraints of the different conversion technologies, whilst finding a globally optimal portfolio for stakeholder satisfaction.
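
As a rough sketch of the kind of check an order-allocation module with chance constraints performs (not the BioSS implementation; the supplier data, ash limit and normality assumption below are made up for illustration), one can test whether a candidate portfolio keeps the blended fuel's ash content below a boiler limit with a required probability:

```python
# Sketch: chance-constrained check on the ash content of a biomass blend,
# assuming supplier ash contents are independent and roughly normal.
from math import sqrt
from statistics import NormalDist

SUPPLIERS = {                    # name: (mean ash %, std dev ash %) -- made-up data
    "straw_A":   (5.5, 0.8),
    "willow_B":  (1.8, 0.3),
    "residue_C": (4.0, 0.6),
}
ASH_LIMIT = 6.0                  # hypothetical boiler constraint (%)
CONFIDENCE = 0.95                # chance-constraint level

def blend_ash_ok(order_tonnes):
    """Return True if Pr(blend ash <= ASH_LIMIT) >= CONFIDENCE for the given orders."""
    total = sum(order_tonnes.values())
    if total == 0:
        return True
    # Blend ash is a tonnage-weighted average of supplier ash contents.
    mean = sum(t * SUPPLIERS[s][0] for s, t in order_tonnes.items()) / total
    var = sum((t / total) ** 2 * SUPPLIERS[s][1] ** 2 for s, t in order_tonnes.items())
    prob = NormalDist(mean, sqrt(var)).cdf(ASH_LIMIT)
    return prob >= CONFIDENCE

if __name__ == "__main__":
    portfolio = {"straw_A": 400, "willow_B": 300, "residue_C": 300}   # tonnes per supplier
    print("chance constraint satisfied:", blend_ash_ok(portfolio))
```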

Relevance: 90.00%

Publisher:

Abstract:

This paper presents a chance-constrained programming approach for constructing maximum-margin classifiers which are robust to interval-valued uncertainty in training examples. The methodology ensures that uncertain examples are classified correctly with high probability by employing chance constraints. The main contribution of the paper is to pose the resultant optimization problem as a second-order cone program by using large-deviation inequalities due to Bernstein. Apart from the support and mean of the uncertain examples, these Bernstein-based relaxations make no further assumptions about the underlying uncertainty. Classifiers built using the proposed approach are less conservative, yield higher margins and hence are expected to generalize better than existing methods. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle interval-valued uncertainty than the state of the art.
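
For reference, the classical Bernstein inequality that underlies relaxations of this kind (stated here in its generic form, not as the paper's final bound) says that for independent zero-mean random variables \(Z_1,\dots,Z_n\) with \(|Z_j| \le M\) and variances \(\sigma_j^2\),

\[
\Pr\Big( \sum_{j=1}^{n} Z_j > t \Big) \;\le\; \exp\!\left( \frac{-t^{2}/2}{\sum_{j=1}^{n} \sigma_j^{2} + M t / 3} \right).
\]

Because the bound uses both the support (through \(M\)) and the variances, inverting it to enforce a misclassification probability of at most \(\epsilon\) yields a tighter deterministic second-order cone constraint than the corresponding Chebyshev argument, which uses second moments alone.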

Relevance: 90.00%

Publisher:

Abstract:

Call centers are key elements of almost any large organization. The workforce management problem has received a great deal of attention in the literature. A typical formulation is based on performance measures over an infinite horizon, and the agent staffing problem is usually solved by combining optimization and simulation methods. In this thesis, we consider an agent staffing problem for call centers subject to probabilistic (chance) constraints. We introduce a formulation that requires the quality-of-service (QoS) constraints to be satisfied with high probability, and we define a sample-average approximation of this problem in a multi-skill setting. We establish the convergence of the solution of the approximate problem to that of the original problem as the sample size grows. For the special case in which all agents have all skills (a single agent group), we design three simulation-based optimization methods for the sample-average problem. Given an initial staffing level, we increase the number of agents for the periods in which the constraints are violated, and we decrease the number of agents for the periods in which the constraints remain satisfied after the reduction. Numerical experiments are carried out on several low-occupancy call center models, for which the algorithms produce good solutions, i.e. most of the probabilistic constraints are satisfied and the staffing cannot be reduced in any period without introducing constraint violations. An advantage of these algorithms, compared with other methods, is their ease of implementation.
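
A minimal sketch of the sample-average idea and of the add/remove staffing heuristic described above is given below; the toy service-level simulator, targets and parameters are assumptions for illustration, not the thesis's model.

```python
# Sketch: sample-average check of a per-period QoS chance constraint,
# plus an add-then-trim staffing heuristic.
import random

N_SCENARIOS = 200
QOS_TARGET = 0.80        # e.g. fraction of calls answered within 20 s
CONFIDENCE = 0.95        # required probability of meeting the QoS target

def simulated_service_level(agents, period):
    """Toy stand-in for one simulation replication of a call-center period."""
    demand = random.gauss(10 + 2 * period, 2.0)   # arrival intensity
    return min(1.0, max(0.0, agents / max(demand, 1e-9) - random.uniform(0.0, 0.1)))

def qos_chance_ok(agents, period):
    """Sample-average estimate of Pr(service level >= QOS_TARGET) vs. CONFIDENCE."""
    hits = sum(simulated_service_level(agents, period) >= QOS_TARGET
               for _ in range(N_SCENARIOS))
    return hits / N_SCENARIOS >= CONFIDENCE

def adjust_staffing(initial_staffing):
    """Increase agents where the constraint fails, then trim where it still holds."""
    staffing = list(initial_staffing)
    for period in range(len(staffing)):          # add agents until feasible
        while not qos_chance_ok(staffing[period], period):
            staffing[period] += 1
    for period in range(len(staffing)):          # remove agents while still feasible
        while staffing[period] > 1 and qos_chance_ok(staffing[period] - 1, period):
            staffing[period] -= 1
    return staffing

if __name__ == "__main__":
    print(adjust_staffing([10, 10, 10, 10]))
```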

Relevance: 80.00%

Publisher:

Abstract:

We study the problem of uncertainty in the entries of the kernel matrix arising in the SVM formulation. Using chance-constrained programming and a novel large-deviation inequality, we derive a formulation which is robust to such noise. The resulting formulation applies when the noise is Gaussian or has finite support. The formulation is in general non-convex, but in several cases of interest it reduces to a convex program. The problem of uncertainty in the kernel matrix is motivated by the real-world problem of classifying proteins whose structures are provided with some uncertainty. The formulation derived here naturally incorporates such uncertainty in a principled manner, leading to significant improvements over the state of the art.
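
In generic notation (not necessarily the paper's formulation), one way to express robustness to a random kernel matrix \(\tilde{K}\) is to require each kernel-expansion margin to hold with high probability over the kernel noise:

\[
\Pr_{\tilde{K}}\Big( y_i \Big( \sum_{j} \beta_j\, \tilde{K}_{ij} + b \Big) \ge 1 - \xi_i \Big) \;\ge\; 1 - \epsilon, \qquad i = 1,\dots,n,
\]

where \(\beta_j\) are the expansion coefficients of the classifier. A large-deviation inequality over the distribution of the \(\tilde{K}_{ij}\) (Gaussian or finitely supported) then replaces each probabilistic constraint with a deterministic one in \((\beta, b, \xi)\).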

Relevance: 80.00%

Publisher:

Abstract:

In this paper, we show that it is possible to reduce the complexity of Intra MB coding in H.264/AVC based on a novel chance-constrained classifier. Using pairs of simple mean-variance values, our technique is able to reduce the complexity of the Intra MB coding process with a negligible loss in PSNR. We present an alternate approach to the underlying classification problem that is equivalent to a machine learning formulation. Implementation results show that the proposed method reduces the encoding time to about 20% of the reference implementation, with an average loss of 0.05 dB in PSNR.
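
A toy sketch of the kind of decision rule such a classifier might implement is shown below. It is illustrative only: the features match the mean-variance description in the abstract, but the weights, threshold and skip policy are made-up assumptions rather than the paper's trained, chance-constrained values.

```python
# Sketch: linear classifier on (mean, variance) macroblock features deciding
# whether the expensive exhaustive Intra-mode search can be skipped.
from statistics import mean, pvariance

# Hypothetical decision boundary: w0 + w1*mean + w2*variance >= 0  =>  full search
W = (-3.0, 0.01, 0.002)

def mb_features(pixels):
    """Mean and variance of a macroblock's luma samples."""
    return mean(pixels), pvariance(pixels)

def needs_full_intra_search(pixels):
    """True -> run the full RD-optimal Intra mode search; False -> use a cheap default mode."""
    m, v = mb_features(pixels)
    w0, w1, w2 = W
    return w0 + w1 * m + w2 * v >= 0.0

if __name__ == "__main__":
    flat_block = [128] * 256                              # low-detail 16x16 macroblock
    noisy_block = [(17 * i) % 256 for i in range(256)]    # high-detail macroblock
    print(needs_full_intra_search(flat_block))    # False: skip the full search
    print(needs_full_intra_search(noisy_block))   # True: do the full search
```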