235 results for Linear system solve
Abstract:
Two fundamental processes usually arise in the production planning of many industries. The first one consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. The setups typically present in lot sizing problems are relaxed, together with the integer frequencies of cutting patterns in the cutting problem. A large-scale linear optimization problem therefore arises, which is solved exactly by a column generation technique. It is worth noting that this combined problem still captures the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analyzed over three different scenarios. The results show that, by combining the problems and using an exact method, significant gains can be obtained compared with the usual industrial practice, which solves them in sequence. © 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
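As a concrete illustration of the column generation technique mentioned in this abstract, the sketch below applies it to a plain one-dimensional cutting stock LP relaxation (not the combined lot-sizing model of the paper): a restricted master LP is solved with scipy.optimize.linprog and new cutting patterns are priced out with a small knapsack. The roll width, part lengths and demands are hypothetical.

```python
# Column generation for a one-dimensional cutting-stock LP relaxation
# (illustrative only; the paper couples this with a lot-sizing model).
import numpy as np
from scipy.optimize import linprog

roll_width = 100
sizes = np.array([45, 36, 31, 14])      # hypothetical part lengths
demand = np.array([97, 610, 395, 211])  # hypothetical demands

# Start with trivial patterns: one part type per roll.
patterns = [np.eye(len(sizes), dtype=int)[i] * (roll_width // s)
            for i, s in enumerate(sizes)]

def knapsack(values, weights, capacity):
    """Unbounded integer knapsack by dynamic programming; returns value and counts."""
    best = np.zeros(capacity + 1)
    choice = np.full(capacity + 1, -1, dtype=int)
    for c in range(1, capacity + 1):
        for j, w in enumerate(weights):
            if w <= c and best[c - w] + values[j] > best[c]:
                best[c], choice[c] = best[c - w] + values[j], j
    counts, c = np.zeros(len(weights), dtype=int), capacity
    while c > 0 and choice[c] >= 0:
        counts[choice[c]] += 1
        c -= weights[choice[c]]
    return best[capacity], counts

while True:
    A = np.column_stack(patterns)
    # Restricted master: minimise the number of rolls s.t. demands are met.
    res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, method="highs")
    duals = -res.ineqlin.marginals          # dual prices of the demand rows
    value, new_pattern = knapsack(duals, sizes, roll_width)
    if value <= 1 + 1e-9:                   # no pattern with negative reduced cost
        break
    patterns.append(new_pattern)

print(f"LP lower bound: {res.fun:.2f} rolls, {len(patterns)} patterns generated")
```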
Abstract:
A branch and bound (B&B) algorithm using the DC model is presented to solve the power system transmission expansion planning problem with electrical losses incorporated in the network modelling. This is a mixed integer nonlinear programming (MINLP) problem; in this approach the so-called fathoming tests of the B&B algorithm are redefined and a nonlinear programming (NLP) problem is solved at each node of the B&B tree using an interior-point method. Pseudocosts are used to manage the development of the B&B tree and to reduce its size and the processing time. There is no guarantee of convergence to the global optimum of the MINLP problem. However, preliminary tests show that the algorithm readily converges to the best-known or optimal solutions for all tested systems when electrical losses are neglected. When electrical losses are taken into account, the solution obtained for the Garver system is better than the best one known in the literature.
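The sketch below is a bare-bones branch and bound loop of the kind described above, applied for simplicity to a tiny integer linear program: the continuous relaxation at each node is solved with scipy.optimize.linprog and branching uses a most-fractional rule rather than the pseudocosts and interior-point NLP relaxations of the paper. The instance data are hypothetical.

```python
# Generic branch-and-bound sketch: solve the continuous relaxation at each
# node, fathom by bound or infeasibility, and branch on a fractional variable.
import math
import numpy as np
from scipy.optimize import linprog

c = np.array([-5.0, -4.0])                 # maximise 5x + 4y  ->  minimise -5x - 4y
A = np.array([[6.0, 4.0], [1.0, 2.0]])
b = np.array([24.0, 6.0])

best_val, best_x = math.inf, None
stack = [[(0, None), (0, None)]]           # per-node variable bounds

while stack:
    bounds = stack.pop()
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    if not res.success or res.fun >= best_val:     # infeasible or fathomed by bound
        continue
    frac = [abs(x - round(x)) for x in res.x]
    j = int(np.argmax(frac))
    if frac[j] < 1e-6:                             # integer feasible: new incumbent
        best_val, best_x = res.fun, res.x.round()
        continue
    lo, hi = bounds[j]
    left, right = list(bounds), list(bounds)
    left[j] = (lo, math.floor(res.x[j]))           # branch x_j <= floor
    right[j] = (math.ceil(res.x[j]), hi)           # branch x_j >= ceil
    stack.extend([left, right])

print("optimum:", -best_val, "at", best_x)
```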
Abstract:
The generation expansion planning (GEP) problem consists in determining the type of technology, size, location and time at which new generation units must be integrated into the system, over a given planning horizon, to satisfy the forecasted energy demand. Over the past few years, owing to an increasing awareness of environmental issues, different approaches to the GEP problem have included some form of environmental policy, typically based on emission constraints. This paper presents a linear model in a dynamic version to solve the GEP problem. The main difference between the proposed model and most of the work in the specialized literature is the way the environmental policy is envisaged. The policy includes: i) the taxation of CO₂ emissions, ii) an annual Emissions Reduction Rate (ERR) for the overall system, and iii) the gradual retirement of old, inefficient generation plants. The proposed model is applied to an 11-region system to design the most cost-effective and sustainable 10-technology US energy portfolio for the next 20 years.
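Below is a minimal sketch of a dynamic GEP linear program with a CO₂ tax and a shrinking emissions cap, in the spirit of points i) and ii) above. It is a single-region, two-technology toy with hypothetical costs and demands, not the paper's 11-region, 10-technology US model.

```python
# Tiny dynamic generation-expansion LP: two technologies, three yearly periods,
# a CO2 tax, and an emissions cap that shrinks by an annual reduction rate (ERR).
import numpy as np
from scipy.optimize import linprog

T, G = 3, 2                                       # periods, technologies (gas, wind)
demand = np.array([100.0, 110.0, 120.0])          # TWh per year
inv_cost = np.array([60.0, 90.0])                 # M$ per GW of new capacity
var_cost = np.array([35.0, 5.0])                  # $ per MWh  (== M$ per TWh)
emis = np.array([0.40, 0.0])                      # MtCO2 per TWh
co2_tax = 25.0                                    # $ per tCO2 (== M$ per MtCO2)
cap_factor = np.array([0.85, 0.35])
existing = np.array([8.0, 2.0])                   # GW already installed
err, base_cap = 0.05, 40.0                        # 5 %/year reduction, MtCO2 cap

n = 2 * G * T                                     # x = [capacity additions | generation]
add = lambda g, t: g * T + t                      # GW added, tech g, year t
gen = lambda g, t: G * T + g * T + t              # TWh generated, tech g, year t

c = np.zeros(n)
for g in range(G):
    for t in range(T):
        c[add(g, t)] = inv_cost[g]
        c[gen(g, t)] = var_cost[g] + co2_tax * emis[g]

A, b = [], []
for t in range(T):
    demand_row, emis_row = np.zeros(n), np.zeros(n)
    for g in range(G):
        demand_row[gen(g, t)] = -1.0              # -sum(gen) <= -demand
        emis_row[gen(g, t)] = emis[g]             # emissions <= shrinking cap
    A += [demand_row, emis_row]
    b += [-demand[t], base_cap * (1 - err) ** t]
    for g in range(G):                            # gen <= 8.76 * cf * installed capacity
        cap_row = np.zeros(n)
        cap_row[gen(g, t)] = 1.0
        cap_row[[add(g, tau) for tau in range(t + 1)]] = -8.76 * cap_factor[g]
        A.append(cap_row)
        b.append(8.76 * cap_factor[g] * existing[g])

res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), method="highs")
print("GW added per year (gas, wind):", np.round(res.x[: G * T].reshape(G, T), 2))
```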
Abstract:
In this work, the behaviour of a system of N massive parallel rigid wires is analysed. The aim is to explore its resemblance to a system of multiple cosmic strings. Assuming that it behaves like a 'gas' of massive rigid wires, we use a thermodynamic approach to describe this system. We obtain a constraint relating the linear mass density of the massive wires, the number of massive wires in the system and the dispersion velocity of the system. © 1996 IOP Publishing Ltd.
Abstract:
The increase in the computing power of microcomputers has stimulated the development of direct-manipulation interfaces that allow graphical representation of Linear Programming (LP) models. This work discusses the components of such a graphical interface as the basis for a system to assist users in the process of formulating LP problems. In essence, it proposes a methodology that divides the modelling task into three stages: specification of the Data Model, the Conceptual Model and the LP Model. The need for Artificial Intelligence techniques in problem conceptualisation and in supporting the model formulation task is illustrated.
Abstract:
A new methodology for the determination of soluble oxalic acid in grass samples was developed using a two-enzyme reactor in an FIA system. The reactor consisted of 3 U of oxalate oxidase and 100 U of peroxidase immobilized on Sorghum vulgare seeds activated with glutaraldehyde. The carbon dioxide was monitored spectrophotometrically, after permeating through a PTFE membrane and reacting with an acid-base indicator (Bromocresol Purple). A linear response range was observed between 0.25 and 1.00 mmol l⁻¹ of oxalic acid; the data were fitted by the equation A = -0.8(±1.5) + 57.2(±2.5)[oxalate], with a correlation coefficient of 0.9971 and a relative standard deviation of 2% for n = 5. The variance for a 0.25 mmol l⁻¹ oxalic acid standard solution was lower than 4% for 11 measurements. The FIA system allows the analysis of 20 samples per hour without prior treatment. The proposed method showed good correlation with that of the Sigma Kit.
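A short numerical illustration of how a calibration line of this form is used in practice: the slope and intercept are the ones reported above, the signal reading is hypothetical, and the uncertainty is propagated only to first order.

```python
# Using the reported calibration A = a + b*[oxalate] to convert a signal
# reading into an oxalate concentration (the reading below is hypothetical).
a, sa = -0.8, 1.5          # intercept and its reported uncertainty
b, sb = 57.2, 2.5          # slope (signal per mmol/L) and its uncertainty

A_measured = 40.0          # hypothetical reading within the linear range
conc = (A_measured - a) / b                      # mmol/L
# First-order uncertainty propagation, ignoring covariance between a and b.
s_conc = conc * ((sa / (A_measured - a)) ** 2 + (sb / b) ** 2) ** 0.5
print(f"oxalate = {conc:.3f} +/- {s_conc:.3f} mmol/L")
```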
Abstract:
Piecewise-Linear Programming (PLP) is an important area of Mathematical Programming and concerns the minimisation of a convex separable piecewise-linear objective function subject to linear constraints. This paper explores a subarea of PLP called Network Piecewise-Linear Programming (NPLP). It presents four specialised algorithms for NPLP: (Strongly Feasible) Primal Simplex, Dual Method, Out-of-Kilter and (Strongly Polynomial) Cost-Scaling, and studies their relative efficiency. A statistically designed experiment is used to perform a computational comparison of the algorithms. The response variable observed in the experiment is the CPU time to solve randomly generated network piecewise-linear problems, classified according to problem class (Transportation, Transshipment and Circulation), problem size, extent of capacitation, and number of breakpoints per arc. Results and conclusions on the performance of the algorithms are reported.
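One standard way to cast such problems, sketched below, is to split each arc with a convex piecewise-linear cost into parallel routes, one per linear segment, and solve the result as an ordinary min-cost-flow problem with networkx. This is a generic reduction, not necessarily the internal representation used by the four specialised algorithms, and the single-arc network and breakpoints are made up.

```python
# Convex piecewise-linear arc costs handled by splitting the arc into one
# route per linear segment (slopes must be nondecreasing so that the cheaper
# segments fill up first).  Toy single-arc network with hypothetical data.
import networkx as nx

G = nx.DiGraph()
G.add_node("s", demand=-10)              # ship 10 units from s to t
G.add_node("t", demand=10)

# Segments of the convex PL cost on arc s->t: (segment capacity, slope).
segments = [(4, 1), (4, 3), (4, 6)]
for k, (cap, slope) in enumerate(segments):
    G.add_node(f"seg{k}", demand=0)
    G.add_edge("s", f"seg{k}", capacity=cap, weight=slope)
    G.add_edge(f"seg{k}", "t", capacity=cap, weight=0)

cost, flow = nx.network_simplex(G)
print("minimum cost:", cost)             # 4*1 + 4*3 + 2*6 = 28
print("flow per segment:", {k: flow["s"][f"seg{k}"] for k in range(len(segments))})
```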
Abstract:
This paper enhances some concepts of the Instantaneous Complex Power Theory by analyzing the analytical expressions for the voltages, currents and powers developed in a symmetrical three-phase RL system during the transient caused by a sinusoidal voltage excitation. The powers delivered to an ideal inductor are interpreted, allowing deep insight into the power phenomenon through analysis of the voltages across each element of the circuit. The results can be applied to the understanding of non-linear systems subject to sinusoidal voltage excitation and distorted currents.
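The sketch below evaluates the instantaneous complex power s(t) = v·conj(i) in the αβ (Clarke) plane for a balanced three-phase RL load in sinusoidal steady state; the circuit parameters are hypothetical and the transient expressions analysed in the paper are not reproduced.

```python
# Instantaneous complex power s(t) = v_ab * conj(i_ab) for a balanced
# three-phase RL load in sinusoidal steady state (hypothetical values).
import numpy as np

f, R, L = 60.0, 1.0, 5e-3
w = 2 * np.pi * f
V = 127.0 * np.sqrt(2)                      # phase-voltage amplitude
Z = np.hypot(R, w * L)
phi = np.arctan2(w * L, R)                  # current lags voltage by phi

t = np.linspace(0, 1 / f, 1000)
ang = w * t
va, vb, vc = (V * np.cos(ang + k) for k in (0.0, -2 * np.pi / 3, 2 * np.pi / 3))
ia, ib, ic = (V / Z * np.cos(ang + k - phi) for k in (0.0, -2 * np.pi / 3, 2 * np.pi / 3))

def clarke(a, b, c):
    """Power-invariant Clarke transform, returned as a complex alpha+j*beta signal."""
    alpha = np.sqrt(2 / 3) * (a - b / 2 - c / 2)
    beta = np.sqrt(2 / 3) * (np.sqrt(3) / 2) * (b - c)
    return alpha + 1j * beta

s = clarke(va, vb, vc) * np.conj(clarke(ia, ib, ic))
print("real power p(t)  ~", s.real.mean(), "W   (constant for a balanced load)")
print("imag power q(t)  ~", s.imag.mean(), "var")
```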
Abstract:
A flow-injection (FI) method was developed for the determination of oxalate in urine. It is based on the use of oxalate oxidase (E.C. 1.2.3.4) immobilized on ground seeds of the BR-303 Sorghum vulgare variety. A reactor was filled with this activated material, and the samples (200 μL) containing oxalate were passed through it, carried by a deionized water flow. The carbon dioxide produced by the enzyme reaction permeated through a microporous PTFE membrane and was received in a water acceptor stream, promoting conductivity changes proportional to the oxalate concentration in the sample. The results obtained showed a useful linear range from 0.05 to 0.50 mmol dm⁻³. The proposed method, when compared with the Sigma enzymatic procedure, showed good correlation (Y = 0.006(±0.016) + 0.98(±0.019)X; r = 0.9995, with Y the conductivity in μS and X the concentration in mmol dm⁻³), selectivity and sensitivity. The new immobilization approach provides greater stability, allowing oxalate determination for 6 months. About 13 determinations can be performed per hour. The precision of the proposed method is about ±3.2% (r.s.d.).
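A minimal sketch of the linear calibration behind the quoted Y = 0.006(±0.016) + 0.98(±0.019)X fit, with conductivity regressed on oxalate concentration over the stated 0.05-0.50 mmol dm⁻³ range; the conductivity readings are hypothetical.

```python
# Least-squares calibration of the conductometric FI signal against oxalate
# concentration; a slope near 1 and intercept near 0 indicate good agreement.
import numpy as np

conc = np.array([0.05, 0.10, 0.20, 0.30, 0.40, 0.50])        # mmol dm^-3
cond = np.array([0.055, 0.104, 0.200, 0.298, 0.395, 0.492])  # hypothetical, in uS

slope, intercept = np.polyfit(conc, cond, 1)
r = np.corrcoef(conc, cond)[0, 1]
print(f"Y = {intercept:.3f} + {slope:.3f} X   (r = {r:.4f})")
```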
Abstract:
A branch and bound algorithm is proposed to solve the H2-norm model reduction problem for continuous-time linear systems, with conditions assuring convergence to the global optimum in finite time. The lower and upper bounds used in the optimization procedure are obtained through Linear Matrix Inequalities formulations. Examples illustrate the results.
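The sketch below shows the basic H2-norm computation that underlies such model-reduction bounds: the norm of the error system between a stable full-order model and a simple state truncation is evaluated through a controllability Gramian. This is only a baseline evaluation, not the branch and bound / LMI optimisation of the paper, and the system matrices are made up.

```python
# H2 norm of the error between a stable full-order model and a truncated
# reduced-order model, computed from the controllability Gramian.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, block_diag

def h2_norm(A, B, C):
    """||C (sI - A)^-1 B||_2 via the controllability Gramian."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # A P + P A^T + B B^T = 0
    return np.sqrt(np.trace(C @ P @ C.T))

# Hypothetical stable 4th-order SISO system.
A = np.array([[-1.0, 0.5, 0.0, 0.0],
              [0.0, -2.0, 0.3, 0.0],
              [0.0, 0.0, -4.0, 0.2],
              [0.0, 0.0, 0.0, -8.0]])
B = np.array([[1.0], [0.5], [0.3], [0.1]])
C = np.array([[1.0, 1.0, 0.5, 0.2]])

r = 2                                              # keep the two slowest states
Ar, Br, Cr = A[:r, :r], B[:r], C[:, :r]

Ae = block_diag(A, Ar)                             # error system G - Gr
Be = np.vstack([B, Br])
Ce = np.hstack([C, -Cr])
print("H2 norm of full model :", h2_norm(A, B, C))
print("H2 reduction error    :", h2_norm(Ae, Be, Ce))
```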
Abstract:
The problem of dynamic camera calibration in close-range environments with moving objects, using straight lines as references, is addressed. A mathematical model for the correspondence between a straight line in object space and its image is discussed. This model is based on the equivalence between the vector normal to the interpretation plane in image space and the vector normal to the rotated interpretation plane in object space. To solve the dynamic camera calibration, Kalman filtering is applied: an iterative process based on the recursive property of the Kalman filter is defined, using the sequentially estimated camera orientation parameters to feed back into the feature extraction process in the image. For the dynamic case, e.g. an image sequence of a moving object, a state prediction and a covariance matrix for the next instant are obtained from the available estimates and the system model. Filtered state estimates of good quality can then be computed for each instant of the image sequence from these predictions, using the Kalman filter and the system model parameters. The proposed approach was tested with simulated and real data. Experiments with real data were carried out in a controlled environment, using a sequence of images of a cube moving along a linear trajectory over a flat surface.
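For reference, the sketch below runs one generic linear Kalman predict/update recursion of the kind relied on above, tracking a point moving with constant velocity from noisy position measurements; the state layout, noise levels and trajectory are hypothetical and far simpler than the camera-orientation state used in the paper.

```python
# Minimal linear Kalman filter: constant-velocity motion along a line,
# noisy position measurements; illustrates only the predict/update recursion.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                # we observe position only
Q = 1e-3 * np.eye(2)                      # process noise covariance
R = np.array([[0.25]])                    # measurement noise covariance

x = np.array([[0.0], [0.0]])              # initial state estimate
P = np.eye(2)                             # initial covariance

rng = np.random.default_rng(0)
true_positions = 0.8 * np.arange(20)      # object moving at 0.8 units/frame
for z_true in true_positions:
    z = np.array([[z_true + rng.normal(0, 0.5)]])   # noisy measurement
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("final estimate: position %.2f, velocity %.2f" % (x[0, 0], x[1, 0]))
```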
Abstract:
The photo-Fenton process using potassium ferrioxalate as a mediator in the photodegradation of organochlorine compounds in an aqueous medium was investigated. The influence of parameters such as hydrogen peroxide and ferrioxalate concentrations and initial pH was evaluated using dichloroacetic acid (DCA) as a model compound under black-light lamp irradiation. An upflow annular photoreactor, operating in single-pass or recirculating mode, was used during photodegradation experiments with artificial light. The extent of chloride ion release was used to evaluate the photodegradation reaction. The optimum pH range observed was 2.5-2.8. The efficiency of DCA dechlorination increased with increasing concentrations of H₂O₂ and potassium ferrioxalate, reaching a plateau after the addition of 6 and 1.5 mmol/L of these reagents, respectively. The total organic carbon (TOC) content in DCA and 2,4-dichlorophenol (DCP) solutions was compared with the chloride released after photodegradation. The influence of natural solar light intensity, measured at 365 nm, on the dechlorination of DCA was evaluated on typical summer days, showing a linear dependence. The photodegradation of DCA under black-light lamp and solar irradiation was compared.
Abstract:
A new strategy is proposed for minimizing Cu²⁺ and Pb²⁺ interferences in the spectrophotometric determination of Cd²⁺ by the Malachite green (MG)-iodide reaction, using electrolytic deposition of the interfering species and solid-phase extraction of Cd²⁺ in a flow system. The electrolytic cell comprises two coiled Pt electrodes assembled concentrically. When the sample solution is electrolysed in a mixed solution containing 5% (v/v) HNO₃, 0.1% (v/v) H₂SO₄ and 0.5 M NaCl, Cu²⁺ is deposited as Cu on the cathode and Pb²⁺ is deposited as PbO₂ on the anode, while Cd²⁺ is kept in solution. After electrolysis, the remaining solution passes through a minicolumn packed with AG1-X8 resin (chloride form), in which Cd²⁺ is extracted as CdCl₄²⁻. Electrolyte compositions, flow rates, timing, applied current and electrolysis time were investigated. With a 60 s electrolysis time and a 0.25 A applied current, Pb²⁺ and Cu²⁺ levels up to 50 and 250 mg l⁻¹, respectively, can be tolerated without interference. For a 90 s resin loading time, a linear relationship between absorbance and analyte concentration in the 5.00-50.0 μg Cd l⁻¹ range (r² = 0.9996) is obtained. A throughput of 20 samples per hour is achieved, corresponding to about 0.7 mg MG, 500 mg KI and 5 ml of sample consumed per determination. The detection limit is 0.23 μg Cd l⁻¹. The accuracy was checked for cadmium determination in standard reference materials, vegetables and tap water. Results were in agreement with certified values of standard reference materials and with those obtained by graphite furnace atomic absorption spectrometry at the 95% confidence level. The R.S.D. for plant digests and water containing 13.0 μg Cd l⁻¹ was 3.85% (n = 12). The recoveries of analyte spikes added to the water and vegetable samples ranged from 94 to 104%. © 2000 Elsevier Science B.V.
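A short sketch of one common way to estimate a detection limit such as the 0.23 μg Cd l⁻¹ quoted above: three times the standard deviation of blank readings divided by the calibration slope. The blank and calibration data below are hypothetical, and the abstract does not state which convention was actually used.

```python
# Detection limit estimated as 3 * s(blank) / slope, with the slope taken from
# a linear calibration over the 5.00-50.0 ug Cd/L range (all data hypothetical).
import numpy as np

conc = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])            # ug Cd L^-1
absorbance = np.array([0.021, 0.043, 0.085, 0.128, 0.171, 0.212])

slope, intercept = np.polyfit(conc, absorbance, 1)
r2 = np.corrcoef(conc, absorbance)[0, 1] ** 2

blanks = np.array([0.0009, 0.0012, 0.0007, 0.0011, 0.0010,
                   0.0008, 0.0013, 0.0009, 0.0010, 0.0011])     # blank readings
lod = 3 * blanks.std(ddof=1) / slope
print(f"slope = {slope:.4f} A per ug/L,  r^2 = {r2:.4f},  LOD ~ {lod:.2f} ug/L")
```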