133 results for Multi-objective optimization problem
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
There are strong uncertainties regarding LAI dynamics in forest ecosystems in response to climate change. While empirical growth & yield models (G&YMs) provide good estimates of tree growth at the stand level on a yearly to decennial scale, process-based models (PBMs) use LAI dynamics as a key variable for enabling the accurate prediction of tree growth over short time scales. Bridging the gap between PBMs and G&YMs could improve the prediction of forest growth and, therefore, carbon, water and nutrient fluxes by combining modeling approaches at the stand level. Our study aimed to estimate monthly changes of leaf area in response to climate variations from sparse measurements of foliage area and biomass. A leaf population probabilistic model (SLCD) was designed to simulate foliage renewal. The leaf population was distributed in monthly cohorts, and the total population size was limited depending on forest age and productivity. Foliage dynamics were driven by a foliation function and by the probabilities ruling leaf aging or fall, whose formulation depends on the forest environment. The model was applied to three tree species growing under contrasting climates and soil types. In tropical Brazilian evergreen broadleaf eucalypt plantations, the phenology was described using 8 parameters. A multi-objective evolutionary algorithm (MOEA) was used to fit the model parameters on litterfall and LAI data over an entire stand rotation. Field measurements from a second eucalypt stand were used to validate the model. Seasonal LAI changes were accurately reproduced for both sites (R² = 0.898 calibration, R² = 0.698 validation). Litterfall production was correctly simulated (R² = 0.562 calibration, R² = 0.4018 validation) and may be improved by using additional validation data in future work. In two French temperate deciduous forests (beech and oak), we adapted phenological sub-modules of the CASTANEA model to simulate canopy dynamics, and SLCD was validated using LAI measurements. The phenological patterns were simulated with good accuracy in the two cases studied. However, LAImax was not accurately simulated in the beech forest, and further improvement is required. Our probabilistic approach is expected to contribute to improving predictions of LAI dynamics. The model formalism is general and suitable for broadleaf forests over a large range of ecological conditions. © 2014 Elsevier B.V. All rights reserved.
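For illustration only, the monthly cohort bookkeeping described in this abstract can be sketched in a few lines of Python; the function names, the constant foliation input, the linear fall probability and the LAI cap are invented placeholders, not the SLCD formulation or its fitted parameters.

# Minimal sketch of a monthly leaf-cohort scheme in the spirit of the abstract:
# monthly cohorts, a capped total population, a foliation input and an
# age-dependent fall probability. All numbers are illustrative assumptions.

def step_month(cohorts, foliation, p_fall, lai_max):
    """Advance the list of leaf cohorts (index 0 = youngest) by one month."""
    # Age existing cohorts: each loses its expected fallen fraction.
    aged = [area * (1.0 - p_fall(age + 1)) for age, area in enumerate(cohorts)]
    # Add the new cohort, scaling new production down if the canopy is full.
    new_leaf = min(foliation, max(0.0, lai_max - sum(aged)))
    return [new_leaf] + aged

# Toy usage: constant foliation, fall probability rising linearly with age.
cohorts = []
for month in range(24):
    cohorts = step_month(cohorts, foliation=0.4,
                         p_fall=lambda age: min(1.0, 0.05 * age),
                         lai_max=5.0)
print("simulated LAI after 24 months:", round(sum(cohorts), 3))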
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
In this work, a nonzero-sum Nash game related to the H2 and H∞ control problems is formulated in the context of convex optimization theory. The variables of the game are limiting bounds for the H2 and H∞ norms, and the final controller is obtained as an equilibrium solution, which minimizes the "sensitivity" of each norm with respect to the other. The state-feedback problem is considered and illustrated by numerical examples.
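Read generically (the paper's exact constraint sets and game variables may differ in detail), the equilibrium described above can be written as a pair of coupled minimizations over the state-feedback gain K, with T_2(K) and T_∞(K) the closed-loop transfer functions associated with each criterion:

\begin{aligned}
\gamma_2^{*} &= \min_{K} \; \|T_{2}(K)\|_{2} \quad \text{s.t.} \quad \|T_{\infty}(K)\|_{\infty} \le \gamma_{\infty}^{*},\\
\gamma_{\infty}^{*} &= \min_{K} \; \|T_{\infty}(K)\|_{\infty} \quad \text{s.t.} \quad \|T_{2}(K)\|_{2} \le \gamma_{2}^{*}.
\end{aligned}

A pair (\gamma_2^{*}, \gamma_{\infty}^{*}) satisfying both problems simultaneously is a Nash equilibrium: neither norm bound can be improved without loosening the other.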
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This paper tackles a Nurse Scheduling Problem which consists of generating work schedules for a set of nurses while considering their shift preferences and other requirements. The objective is to maximize the satisfaction of nurses' preferences and minimize the violation of soft constraints. This paper presents a new deterministic heuristic algorithm, called MAPA (multi-assignment problem-based algorithm), which is based on successive resolutions of the assignment problem. The algorithm has two phases: a constructive phase and an improvement phase. The constructive phase builds a full schedule by solving successive assignment problems, one for each day in the planning period. The improvement phase uses a couple of procedures that re-solve assignment problems to produce a better schedule. Given the deterministic nature of this algorithm, the same schedule is obtained each time the algorithm is applied to the same problem instance. The performance of MAPA is benchmarked against published results for almost 250,000 instances from the NSPLib dataset. In most cases, particularly on large instances of the problem, the results produced by MAPA are better than the best-known solutions from the literature. The experiments reported here also show that MAPA finds more feasible solutions than other algorithms in the literature, which suggests that the proposed approach is effective and robust. © 2013 Springer Science+Business Media New York.
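As a toy illustration of the constructive phase (not the MAPA implementation itself), the sketch below solves one assignment problem per day with SciPy's Hungarian-type solver; it assumes, purely for brevity, that the number of daily shift slots equals the number of nurses, that coverage requirements are ignored, and that the preference costs are random placeholders.

# One assignment problem per day: match nurses to shift slots at minimum
# preference cost. Sizes, slots and costs are invented for the example.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_nurses, n_days, n_slots = 5, 7, 5   # e.g. early, day, late, night, off

schedule = np.empty((n_nurses, n_days), dtype=int)
for day in range(n_days):
    # preference_cost[i, j]: how much nurse i dislikes slot j on this day
    preference_cost = rng.integers(1, 10, size=(n_nurses, n_slots))
    rows, cols = linear_sum_assignment(preference_cost)
    schedule[rows, day] = cols

print(schedule)   # one shift slot per nurse per day

MAPA's improvement phase then re-solves assignment problems to improve this initial schedule, which the sketch above does not attempt.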
Abstract:
Goal Programming (GP) is an important analytical approach devised to solve many real-world problems. The first GP model is known as Weighted Goal Programming (WGP). However, Multi-Choice Aspiration Level (MCAL) problems cannot be solved by current GP techniques. In this paper, we propose a Multi-Choice Mixed-Integer Goal Programming (MCMIGP) model for the aggregate production planning of a Brazilian sugar and ethanol milling company. The MCMIGP model was based on traditional selection and process methods for the design of lots, representing the production system of sugar, alcohol, molasses and derivatives. The research covers decisions on the agricultural and cutting stages, sugarcane loading and transportation by suppliers and, especially, energy cogeneration decisions, that is, the choice of production process, including storage and distribution stages. The MCMIGP model allows decision-makers to set multiple aspiration levels for their problems, addressing both "the more/higher, the better" and "the less/lower, the better" aspiration levels. The proposed model was applied to real problems of a Brazilian sugar and ethanol mill, producing interesting results that are reported and commented upon herein. A comparison between the MCMIGP and WGP models was also carried out using these real cases. © 2013 Elsevier Inc.
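For reference (a generic skeleton, not the mill model itself), a multi-choice goal programming formulation with binary selection of aspiration levels can be written as:

\begin{aligned}
\min \quad & \sum_{k} w_{k}\,(d_{k}^{+} + d_{k}^{-})\\
\text{s.t.} \quad & f_{k}(x) + d_{k}^{-} - d_{k}^{+} = \sum_{j} g_{kj}\, z_{kj} \quad \forall k,\\
& \sum_{j} z_{kj} = 1, \qquad z_{kj} \in \{0,1\} \quad \forall k,\\
& d_{k}^{+},\, d_{k}^{-} \ge 0, \qquad x \in X,
\end{aligned}

where w_k weights goal k, d_k^{+} and d_k^{-} are the usual WGP deviation variables, and the binaries z_{kj} pick exactly one candidate aspiration level g_{kj} per goal; these selection variables are what make the model mixed-integer.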
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The study of robust design methodologies and techniques has become a topical area in design optimization across nearly all engineering and applied science disciplines over the last 10 years, owing to the unavoidable imprecision or uncertainty that exists in real-world design problems. To develop a fast optimizer for robust designs, a methodology based on polynomial chaos and a tabu search algorithm is proposed. In this methodology, polynomial chaos is employed as a stochastic response surface model of the objective function to efficiently evaluate the robust performance parameter, while a mechanism that assigns expected fitness only to promising solutions is introduced into the tabu search algorithm to minimize the need to determine robust metrics of intermediate solutions. The proposed methodology is applied to the robust design of a practical inverse problem with satisfactory results.
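The surrogate idea can be illustrated with a one-dimensional sketch: fit a low-order polynomial of the response to a Gaussian perturbation of the design variable, then read the robust metric (here, mean plus three standard deviations, an assumed choice) from the surrogate instead of re-sampling the expensive objective. The toy objective, the sigma and the polynomial degree are all invented placeholders.

# Evaluate a robust metric through a 1-D polynomial surrogate of the objective.
import numpy as np

def robust_metric(f, x, sigma=0.1, k=3.0, degree=2, n_fit=7):
    # Fit a polynomial surrogate of f around x over the +/- 3 sigma range.
    pts = np.linspace(-3 * sigma, 3 * sigma, n_fit)
    surrogate = np.poly1d(np.polyfit(pts, [f(x + d) for d in pts], degree))
    # Gauss-Hermite quadrature gives mean and variance under d ~ N(0, sigma^2).
    t, w = np.polynomial.hermite.hermgauss(8)
    vals = surrogate(np.sqrt(2.0) * sigma * t)
    mean = np.sum(w * vals) / np.sqrt(np.pi)
    var = np.sum(w * (vals - mean) ** 2) / np.sqrt(np.pi)
    return mean + k * np.sqrt(var)

# Toy objective: a sharp minimum at 0 and a flatter one near 2. The robust
# metric prefers the flatter basin despite its worse nominal value.
f = lambda x: min(10 * x ** 2, 0.3 + 0.5 * (x - 2.0) ** 2)
print(robust_metric(f, 0.0), robust_metric(f, 2.0))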
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Aggregation-disaggregation is used to reduce the analysis of a large generalized transportation problem to a smaller one. Bounds on the actual difference between the aggregated objective and the original optimal value are used to quantify the error due to aggregation and to estimate the quality of the aggregation. The bounds can be calculated either before optimization of the aggregated problem (a priori) or after (a posteriori). Both types of bounds are derived and numerically compared. A computational experiment was designed to (a) study the correlation between the bounds and the actual error and (b) quantify how far the error bounds deviate from the actual error. The experiment shows a significant correlation between some a priori bounds, the a posteriori bounds and the actual error. These preliminary results indicate that calculating the a priori error bound is a useful strategy for selecting the appropriate aggregation level, since the a priori bound varies in the same way that the actual error does. After the aggregated problem has been selected and optimized, the a posteriori bound provides a good quantitative measure of the error due to aggregation.
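A small numerical illustration of the setting (a plain rather than generalized transportation problem, with invented data and a supply-weighted cost-averaging rule for the aggregation; the paper's bound formulas are not reproduced, only the actual error is computed):

# Aggregate two pairs of suppliers, solve both problems, and report the actual
# aggregation error that the a priori / a posteriori bounds are meant to bracket.
import numpy as np
from scipy.optimize import linprog

def solve_transport(cost, supply, demand):
    m, n = cost.shape
    A_eq = np.zeros((n, m * n))   # meet each customer's demand exactly
    A_ub = np.zeros((m, m * n))   # do not exceed each supplier's capacity
    for i in range(m):
        for j in range(n):
            A_eq[j, i * n + j] = 1.0
            A_ub[i, i * n + j] = 1.0
    res = linprog(cost.ravel(), A_ub=A_ub, b_ub=supply,
                  A_eq=A_eq, b_eq=demand, bounds=(0, None))
    return res.fun

cost = np.array([[4., 6., 9.], [5., 7., 8.], [6., 3., 5.], [8., 4., 3.]])
supply = np.array([30., 20., 40., 30.])
demand = np.array([35., 40., 30.])

agg_cost = np.array([np.average(cost[:2], axis=0, weights=supply[:2]),
                     np.average(cost[2:], axis=0, weights=supply[2:])])
agg_supply = np.array([supply[:2].sum(), supply[2:].sum()])

z_orig = solve_transport(cost, supply, demand)
z_agg = solve_transport(agg_cost, agg_supply, demand)
print("original:", z_orig, "aggregated:", z_agg, "actual error:", z_agg - z_orig)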
Abstract:
This paper applies two methods of mathematical decomposition to carry out an optimal reactive power flow (ORPF) in a coordinated decentralized way in the context of an interconnected multi-area power system. The first method is based on an augmented Lagrangian approach using the auxiliary problem principle (APP). The second method uses a decomposition technique based on the Karush-Kuhn-Tucker (KKT) first-order optimality conditions. The viability of each method to be used in the decomposition of multi-area ORPF is studied and the corresponding mathematical models are presented. The IEEE RTS-96, the IEEE 118-bus test systems and a 9-bus didactic system are used in order to show the operation and effectiveness of the decomposition methods.
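One common form of the APP iteration for duplicated border variables x_A and x_B (a sketch of the general scheme; the paper's exact augmented Lagrangian terms and the KKT-based alternative may differ in detail) is:

\begin{aligned}
x_{A}^{t+1} &= \arg\min_{x_{A}} \; f_{A}(x_{A}) + \tfrac{\beta}{2}\,\|x_{A} - x_{A}^{t}\|^{2} + \gamma\, x_{A}^{\top}(x_{A}^{t} - x_{B}^{t}) + (\lambda^{t})^{\top} x_{A},\\
x_{B}^{t+1} &= \arg\min_{x_{B}} \; f_{B}(x_{B}) + \tfrac{\beta}{2}\,\|x_{B} - x_{B}^{t}\|^{2} + \gamma\, x_{B}^{\top}(x_{B}^{t} - x_{A}^{t}) - (\lambda^{t})^{\top} x_{B},\\
\lambda^{t+1} &= \lambda^{t} + \alpha\,(x_{A}^{t+1} - x_{B}^{t+1}),
\end{aligned}

where each area minimizes its own reactive power objective f_A or f_B subject to its local constraints, only border variables and the multipliers \lambda are exchanged between areas, and \alpha, \beta, \gamma are user-chosen step parameters; iterations stop when the border mismatch x_A - x_B vanishes.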
Abstract:
Analog networks for solving convex nonlinear unconstrained programming problems without using gradient information of the objective function are proposed. The one-dimensional net can be used as a building block in multi-dimensional networks for optimizing objective functions of several variables.
Abstract:
A branch-and-bound algorithm is proposed to solve the H2-norm model reduction problem for continuous-time linear systems, with conditions assuring convergence to the global optimum in finite time. The lower and upper bounds used in the optimization procedure are obtained through Linear Matrix Inequality (LMI) formulations. Examples illustrate the results.
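The global-optimization logic described above can be sketched generically (the LMI-based bound computations are abstracted into two callables, and the toy 1-D objective stands in for the reduced-order H2 error; none of this is the paper's actual procedure):

# Generic branch-and-bound skeleton: split the parameter region, prune boxes
# whose lower bound cannot beat the incumbent, stop when nothing is left.
import heapq

def branch_and_bound(lower, upper, box, tol=1e-4):
    best_val, best_box = upper(box), box
    heap = [(lower(box), box)]
    while heap:
        lb, (a, b) = heapq.heappop(heap)
        if lb >= best_val - tol:
            continue                      # prune: cannot improve the incumbent
        mid = 0.5 * (a + b)
        for child in ((a, mid), (mid, b)):
            child_ub = upper(child)
            if child_ub < best_val:
                best_val, best_box = child_ub, child
            child_lb = lower(child)
            if child_lb < best_val - tol:
                heapq.heappush(heap, (child_lb, child))
    return best_val, best_box

# Toy objective on [0, 4] standing in for the reduced-model H2 error.
f = lambda x: (x - 1.3) ** 2 + 0.1
lower = lambda box: f(max(box[0], min(1.3, box[1])))   # exact min over the box
upper = lambda box: f(0.5 * (box[0] + box[1]))         # value at the midpoint
print(branch_and_bound(lower, upper, (0.0, 4.0)))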