860 results for Mixed-integer linear programming
Abstract:
This degree project, entitled Texto Guía para Docentes and focused on the Discrete Mathematics block of the first year of B.G.U., was developed to make a meaningful contribution and to support first-year Bachillerato mathematics teachers, with the aim of improving their performance in the classroom. The document is based on current Ecuadorian educational legislation and on the official documents of the Ministerio de Educación; the proposed topic corresponds to the third curricular block of the first year of Bachillerato General Unificado in the subject of Mathematics. Our degree project consists of three chapters. Chapter one presents a synthesis of topics such as the evolution of Ecuadorian education, pedagogical models, teaching methods, mathematics didactics, and linear programming, which serve as the basis for developing the proposal. Chapter two details the statistical study carried out through a survey of first-year Bachillerato mathematics teachers belonging to the Coordinación Zonal 6 de Educación, Distrito Norte; its results provided the foundation for proposing the implementation of the guide text for learning Discrete Mathematics. Chapter three develops the proposed guide text, structured in six teaching guides, each corresponding to the development of one skill with performance criteria for the chosen topic. The chapter closes with conclusions and recommendations addressed to the mathematics teacher.
Abstract:
This paper deals with the problem of coordinated trading of wind and photovoltaic systems in order to find the optimal bid to submit in a pool-based electricity market. The coordination of wind and photovoltaic systems is subject to uncertainty not only in electricity market prices but also in wind and photovoltaic power forecasts. Electricity markets are characterized by financial penalties in case of generation deficit or surplus. The aim of this work is therefore to reduce these financial penalties and maximize the expected profit of the power producer. The problem is formulated as a stochastic linear programming problem. The proposed approach is validated with real data from the pool-based electricity market of the Iberian Peninsula.
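The following is a minimal sketch of the kind of single-hour bidding linear program described above: a bid is chosen to maximize expected profit over a handful of price and production scenarios, with shortfall and surplus imbalances linearized. All numbers, penalty factors and variable names are illustrative assumptions, not the paper's model or data; the solver used is scipy.optimize.linprog.

    import numpy as np
    from scipy.optimize import linprog

    # Scenario data: probability, day-ahead price (EUR/MWh), realized wind+PV output (MWh)
    prob  = np.array([0.3, 0.4, 0.3])
    price = np.array([45.0, 60.0, 75.0])
    power = np.array([80.0, 120.0, 150.0])
    pen_short   = 1.3 * price        # cost of buying back each MWh of shortfall
    pay_surplus = 0.7 * price        # revenue for each MWh of surplus
    capacity = 200.0                 # installed wind + PV capacity (MWh over the hour)
    S = len(prob)

    # Decision vector x = [bid, d_1..d_S (shortfall), e_1..e_S (surplus)]
    c = np.zeros(1 + 2 * S)
    c[0] = -np.sum(prob * price)     # minus expected day-ahead revenue per MWh bid (linprog minimizes)
    c[1:1 + S] = prob * pen_short    # expected cost of shortfall
    c[1 + S:] = -prob * pay_surplus  # minus expected revenue from surplus
    # Imbalance definition: bid - power_s = d_s - e_s for every scenario
    A_eq = np.zeros((S, 1 + 2 * S))
    A_eq[:, 0] = 1.0
    A_eq[np.arange(S), 1 + np.arange(S)] = -1.0
    A_eq[np.arange(S), 1 + S + np.arange(S)] = 1.0
    b_eq = power
    bounds = [(0, capacity)] + [(0, None)] * (2 * S)

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print("optimal bid:", res.x[0], "expected profit:", -res.fun)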
Abstract:
The variability in non-dispatchable power generation raises important challenges to the integration of renewable energy sources into the electric power grid. This paper provides the coordinated trading of wind and photovoltaic energy to mitigate risks due to wind and solar power variability, electricity prices, and the financial penalties arising from generation shortfall and surplus. The problem of wind-photovoltaic coordinated trading is formulated as a linear programming problem. The goal is to obtain the optimal bidding strategy that maximizes the total profit. The wind-photovoltaic coordinated operation is modeled and compared with the uncoordinated operation. A comparison of the models and relevant conclusions are drawn from an illustrative case study of the Iberian day-ahead electricity market.
Abstract:
The variability in non-dispatchable power generation raises important challenges to the integration of renewable energy sources into the electric power grid. This paper provides the coordinated trading of wind and photovoltaic energy, assisted by a cyber-physical system for supporting management decisions, to mitigate risks due to wind and solar power variability, electricity prices, and the financial penalties arising from generation shortfall and surplus. The problem of wind-photovoltaic coordinated trading is formulated as a stochastic linear programming problem. The goal is to obtain the optimal bidding strategy that maximizes the total profit. The wind-photovoltaic coordinated operation is modelled and compared with the uncoordinated operation. A comparison of the models and relevant conclusions are drawn from an illustrative case study of the Iberian day-ahead electricity market.
Abstract:
This paper presents a computer application for wind energy bidding in a day-ahead electricity market to better accommodate the variability of the energy source. The computer application is based on a stochastic linear programming problem. The goal is to obtain the optimal bidding strategy in order to maximize the revenue. Electricity prices and the financial penalties for shortfall or surplus energy delivery are modeled. Finally, conclusions are drawn from an illustrative case study using data from the day-ahead electricity market of the Iberian Peninsula.
Abstract:
Increased pressure on water resources has led many countries to reconsider the mechanisms used to induce efficient water use, especially in irrigated agriculture, a major consumer of water. Setting the correct price of water is one of the mechanisms for allocating water more efficiently. This paper analyzes the economic, social and environmental impacts of water pricing policies. The methodology used is linear programming, applied to the Caxito Irrigation Perimeter in Bengo Province, 45 km from Luanda, which is supplied by the Dande River. Three water pricing scenarios were tested: a simple volumetric rate, a variable volumetric rate, and a flat rate per unit area. The main findings show that, from the point of view of efficient water use in agriculture, the best results are obtained with the variable volumetric rate; from the social point of view, the simple volumetric rate performs best; and the variable volumetric rate proved to be the most penalizing, quickly reducing the area of the most water-demanding crops, which makes it the best method from the environmental point of view. All of the methods have negative effects in terms of reducing the total gross margin. Keywords: Water resources; Water price; Linear programming.
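As a hedged illustration of the kind of farm-level linear program used to test such pricing scenarios, the sketch below chooses crop areas to maximize gross margin net of a volumetric water tariff, subject to land and water availability. Crops, margins, water requirements and tariffs are invented numbers, not the Caxito data.

    import numpy as np
    from scipy.optimize import linprog

    crops       = ["vegetables", "banana", "maize"]
    margin      = np.array([3000.0, 2500.0, 900.0])   # gross margin per ha, before water cost
    water_req   = np.array([9000.0, 12000.0, 6000.0]) # m3 of irrigation water per ha
    total_land  = 100.0                               # ha available
    total_water = 800000.0                            # m3 available per season

    def optimal_plan(tariff_per_m3):
        # Net margin per ha once the volumetric tariff is charged on the water used
        c = -(margin - tariff_per_m3 * water_req)     # linprog minimizes
        A_ub = np.vstack([np.ones(3), water_req])     # land and water constraints
        b_ub = np.array([total_land, total_water])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
        return res.x, -res.fun

    for tariff in (0.0, 0.05, 0.15):                  # EUR/m3, simple volumetric scenarios
        areas, profit = optimal_plan(tariff)
        print(f"tariff {tariff:.2f}: areas {np.round(areas, 1)}, total margin {profit:,.0f}")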
Abstract:
To detect errors in decision tables one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints; this is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable that has not yet been assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The present algorithm also incorporates an idea by which it can be checked whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, whereby backtracking is reduced by one component.
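Below is a toy sketch of the general idea only (depth-first assignment of integer values, pruning each partial vector with bounds derived from the remaining variables' ranges); it is not the paper's graph-based algorithm, and the constraints are invented.

    def feasible(constraints, bounds, partial=()):
        """constraints: list of (coeffs, rhs) meaning sum(c*x) <= rhs.
        bounds: list of (lo, hi) integer ranges, one per variable."""
        i = len(partial)
        if i == len(bounds):                      # full assignment: check every constraint
            return all(sum(c * x for c, x in zip(cs, partial)) <= rhs
                       for cs, rhs in constraints)
        lo, hi = bounds[i]
        for v in range(lo, hi + 1):
            cand = partial + (v,)
            # Prune: with the remaining variables at their most favourable bound,
            # can each constraint still be satisfied?
            ok = True
            for cs, rhs in constraints:
                best = sum(c * x for c, x in zip(cs, cand))
                for c, (blo, bhi) in zip(cs[len(cand):], bounds[len(cand):]):
                    best += c * (blo if c > 0 else bhi)
                if best > rhs:
                    ok = False
                    break
            if ok and feasible(constraints, bounds, cand):
                return True
        return False

    # x0 + x1 <= 3 and -x0 + 2*x1 <= 2, with x0, x1 in {0..5}
    print(feasible([((1, 1), 3), ((-1, 2), 2)], [(0, 5), (0, 5)]))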
Abstract:
Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of such factors as the biological characteristics of the animals, some aspects of the fleet dynamics, and the changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the outcomes of the standardised fishing effort or the relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors from different models differed, suggesting that different methods have different statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at assumed values from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
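As a hedged sketch of the simplest of the approaches compared above, the code below fits a Poisson generalized linear model that standardizes catch rates with year and vessel terms and an effort offset, the exponentiated year effects serving as a relative abundance index. The data frame is synthetic, not the NPF data, and the model structure is only an assumed illustration.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 600
    df = pd.DataFrame({
        "year": rng.choice([1995, 1996, 1997], n),
        "vessel": rng.choice(list("ABCDE"), n),
        "effort": rng.uniform(5, 20, n),              # hours trawled (invented)
    })
    true_year = {1995: 1.0, 1996: 0.7, 1997: 0.9}     # assumed relative abundance
    mu = df["effort"] * df["year"].map(true_year) * 2.0
    df["catch"] = rng.poisson(mu)

    model = smf.glm("catch ~ C(year) + C(vessel)", data=df,
                    family=sm.families.Poisson(), offset=np.log(df["effort"]))
    fit = model.fit()
    year_index = np.exp(fit.params.filter(like="C(year)"))   # index relative to 1995
    print(year_index)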
Abstract:
An algorithm that uses integer arithmetic is suggested. It transforms an m × n matrix to a diagonal form (of the structure of the Smith Normal Form). It then computes a reflexive generalized inverse of the matrix exactly and hence solves a system of linear equations error-free.
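The sketch below illustrates the error-free idea with exact rational arithmetic (Gauss-Jordan elimination over Python fractions) rather than the paper's integer-preserving diagonalization to a Smith-like form; the matrix and right-hand side are arbitrary examples.

    from fractions import Fraction

    def solve_exact(A, b):
        """Solve A x = b exactly for a square, nonsingular integer matrix A."""
        n = len(A)
        M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
        for col in range(n):
            piv = next(r for r in range(col, n) if M[r][col] != 0)   # pivot row
            M[col], M[piv] = M[piv], M[col]
            piv_val = M[col][col]
            M[col] = [v / piv_val for v in M[col]]                   # normalize pivot row
            for r in range(n):                                       # eliminate elsewhere
                if r != col and M[r][col] != 0:
                    factor = M[r][col]
                    M[r] = [a - factor * p for a, p in zip(M[r], M[col])]
        return [row[-1] for row in M]

    A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
    b = [8, -11, -3]
    print(solve_exact(A, b))    # exact rationals, no floating-point round-off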
Abstract:
In this paper, a dual of a given linear fractional program is defined and the weak, direct and converse duality theorems are proved. Both the primal and the dual are linear fractional programs. This duality theory leads to necessary and sufficient conditions for the optimality of a given feasible solution. A numerical example is presented to illustrate the theory. The equivalence of the Charnes and Cooper dual and Dinkelbach's parametric dual of a linear fractional program is also established.
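For context, the sketch below applies the Charnes and Cooper transformation referred to above to an illustrative linear fractional program: maximizing (c.x + alpha)/(d.x + beta) subject to A x <= b, x >= 0 is recast as an ordinary LP in (y, t) with y = t*x and t = 1/(d.x + beta). The data are invented.

    import numpy as np
    from scipy.optimize import linprog

    c, alpha = np.array([2.0, 1.0]), 0.0
    d, beta  = np.array([1.0, 3.0]), 1.0
    A = np.array([[1.0, 1.0], [3.0, 1.0]])
    b = np.array([4.0, 6.0])

    n = len(c)
    # LP variables z = [y_1..y_n, t]; maximize c.y + alpha*t
    obj = -np.concatenate([c, [alpha]])
    A_ub = np.hstack([A, -b.reshape(-1, 1)])          # A y - b t <= 0
    b_ub = np.zeros(len(b))
    A_eq = np.concatenate([d, [beta]]).reshape(1, -1) # d.y + beta*t = 1
    b_eq = np.array([1.0])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1), method="highs")
    y, t = res.x[:n], res.x[n]
    print("optimal x:", y / t, "objective value:", -res.fun)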
Abstract:
A method is given to obtain a nonnegative integral solution of a system of linear equations, if such a solution exists. The method writes the linear equations as an integer programming problem and then solves that problem using a combination of the artificial basis technique and a method of integer forms.
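For comparison, here is a minimal sketch of the same question posed to an off-the-shelf mixed-integer solver (scipy.optimize.milp, SciPy 1.9 or later): finding a nonnegative integral solution of A x = b as a feasibility integer program with a zero objective. This stands in for, and is not, the artificial-basis procedure of the abstract; the system is an invented example.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    A = np.array([[3, 2, 1],
                  [1, 1, 2]])
    b = np.array([10, 7])

    n = A.shape[1]
    res = milp(c=np.zeros(n),                          # feasibility only: zero objective
               constraints=LinearConstraint(A, b, b),  # A x = b
               integrality=np.ones(n),                 # every variable integer
               bounds=Bounds(0, np.inf))               # and nonnegative
    if res.success:
        print("nonnegative integral solution:", res.x)
    else:
        print("no nonnegative integral solution exists")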
Abstract:
Sub-pixel classification is essential for the successful description of many land cover (LC) features that are smaller than the size of the image pixels. A commonly used approach for sub-pixel classification is the linear mixture model (LMM). Even though LMM have shown acceptable results, in practice truly linear mixtures do not exist. A non-linear mixture model may therefore better describe the resultant mixture spectra for the endmember (pure pixel) distribution. In this paper, we propose a new methodology for inferring LC fractions by a process called the automatic linear-nonlinear mixture model (AL-NLMM). AL-NLMM is a three-step process in which the endmembers are first derived by an automated algorithm. These endmembers are used by the LMM in the second step, which provides abundance estimation in a linear fashion. Finally, the abundance values, along with training samples representing the actual proportions, are fed as input to a multi-layer perceptron (MLP) architecture to train the neurons, which further refines the abundance estimates to account for the non-linear nature of the mixing classes of interest. AL-NLMM is validated on computer-simulated hyperspectral data of 200 bands. Validation of the output showed an overall RMSE of 0.0089±0.0022 with the LMM and 0.0030±0.0001 with the MLP-based AL-NLMM when compared to the actual class proportions, indicating that the individual class abundances obtained from AL-NLMM are very close to the real observations.
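A minimal sketch of the linear-mixture step alone (not the MLP refinement): per-pixel abundances are estimated by nonnegative least squares, with the sum-to-one constraint enforced softly through a heavily weighted extra row. The endmember spectra and the mixed pixel below are synthetic illustrations, not the simulated data of the paper.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    bands, n_end = 50, 3
    E = rng.uniform(0.1, 0.9, size=(bands, n_end))    # endmember spectra (columns)
    true_a = np.array([0.6, 0.3, 0.1])
    pixel = E @ true_a + rng.normal(0, 0.005, bands)  # mixed pixel with a little noise

    w = 1e3                                           # weight on the sum-to-one row
    E_aug = np.vstack([E, w * np.ones((1, n_end))])
    p_aug = np.append(pixel, w * 1.0)
    abund, _ = nnls(E_aug, p_aug)
    print("estimated abundances:", np.round(abund, 3))   # approximately [0.6, 0.3, 0.1]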
Abstract:
In this paper we introduce four scenario Cluster based Lagrangian Decomposition (CLD) procedures for obtaining strong lower bounds on the (optimal) solution value of two-stage stochastic mixed 0-1 problems. At each iteration of the Lagrangian based procedures, the traditional aim is to obtain the solution value of the corresponding Lagrangian dual by solving scenario submodels once the nonanticipativity constraints have been dualized. Instead of considering a splitting variable representation over the set of scenarios, we propose to decompose the model into a set of scenario clusters. We compare the computational performance of four Lagrange multiplier updating procedures, namely the Subgradient Method, the Volume Algorithm, the Progressive Hedging Algorithm and the Dynamic Constrained Cutting Plane scheme, for different numbers of scenario clusters and different dimensions of the original problem. Our computational experience shows that the CLD bound and its computational effort depend on the number of scenario clusters considered. In any case, our results show that the CLD procedures outperform the traditional LD scheme for single scenarios both in the quality of the bounds and in computational effort. All the procedures have been implemented in an experimental C++ code. A broad computational experience is reported on a testbed of randomly generated instances, using the MIP solvers COIN-OR and CPLEX for the auxiliary mixed 0-1 cluster submodels, the latter solver being called from within the open source engine COIN-OR. We also give computational evidence of the model tightening effect that preprocessing techniques, cut generation and appending, and parallel computing tools have in stochastic integer optimization. Finally, we have observed that the plain use of both solvers does not provide the optimal solution of the instances included in the testbed, except for two toy instances, in affordable elapsed time; on the other hand, the proposed procedures provide strong lower bounds (or the same solution value) in considerably shorter elapsed time than that required to obtain, by other means, a quasi-optimal solution of the original stochastic problem.
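The following toy sketch shows only the mechanics of one of the multiplier updating schemes named above, the Subgradient Method, applied to a two-scenario 0-1 problem whose nonanticipativity constraint x1 = x2 has been dualized; scenario submodels are solved by enumeration. It is not the paper's cluster decomposition, and all data are invented.

    from itertools import product

    # Scenario data: (probability, first-stage cost, recourse cost); x + y >= 1 must hold.
    scenarios = [(0.5, 10.0, 6.0), (0.5, 10.0, 25.0)]

    def scenario_min(p, cx, cy, lam_coeff):
        """Solve one scenario submodel: min (p*cx + lam_coeff)*x + p*cy*y with x + y >= 1."""
        best = None
        for x, y in product((0, 1), repeat=2):
            if x + y >= 1:
                val = (p * cx + lam_coeff) * x + p * cy * y
                if best is None or val < best[0]:
                    best = (val, x)
        return best                                   # (objective value, x chosen)

    lam = 0.0
    for it in range(20):
        v1, x1 = scenario_min(*scenarios[0], +lam)    # multiplier enters with +lam here
        v2, x2 = scenario_min(*scenarios[1], -lam)    # ... and with -lam in the other copy
        dual_bound = v1 + v2                          # lower bound on the optimal value
        g = x1 - x2                                   # subgradient of the dual function
        if g == 0:
            break
        lam += 1.0 * g                                # fixed step; real codes tune this
    print(f"lower bound {dual_bound} after {it + 1} iterations (lambda = {lam})")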