908 results for "Environmental objective function"


Relevance: 80.00%

Abstract:

In this paper, we propose a duality theory for semi-infinite linear programming problems under uncertainty in the constraint functions, the objective function, or both, within the framework of robust optimization. We present robust duality by establishing strong duality between the robust counterpart of an uncertain semi-infinite linear program and the optimistic counterpart of its uncertain Lagrangian dual. We show that robust duality holds whenever a robust moment cone is closed and convex. We then establish that the closed-convex robust moment cone condition in the case of constraint-wise uncertainty is in fact necessary and sufficient for robust duality. In other words, the robust moment cone is closed and convex if and only if robust duality holds for every linear objective function of the program. In the case of uncertain problems with affinely parameterized data uncertainty, we establish that robust duality is easily satisfied under a Slater type constraint qualification. Consequently, we derive robust forms of the Farkas lemma for systems of uncertain semi-infinite linear inequalities.
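As a hedged illustration only (the notation below is assumed, not quoted from the paper), an uncertain semi-infinite linear program, its robust counterpart and the optimistic counterpart of its Lagrangian dual can be written as

\[
\text{(RC)}\qquad \min_{x\in\mathbb{R}^n}\ \langle c,x\rangle \quad\text{s.t.}\quad \langle a_t,x\rangle \ge b_t \ \ \forall (a_t,b_t)\in\mathcal{U}_t,\ \forall t\in T,
\]
\[
\text{(ODP)}\qquad \max_{\lambda\in\mathbb{R}_+^{(T)},\ (a_t,b_t)\in\mathcal{U}_t}\ \sum_{t\in T}\lambda_t b_t \quad\text{s.t.}\quad \sum_{t\in T}\lambda_t a_t = c,
\]

where \(\mathbb{R}_+^{(T)}\) denotes nonnegative multipliers with finitely many nonzero entries. Robust strong duality, \(\min(\text{RC})=\max(\text{ODP})\), is the property shown to hold exactly when the associated robust moment cone is closed and convex.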

Relevance: 80.00%

Abstract:

Our main goal is to compute or estimate the calmness modulus of the argmin mapping of linear semi-infinite optimization problems under canonical perturbations, i.e., perturbations of the objective function together with continuous perturbations of the right-hand side of the constraint system (with respect to an index ranging over a compact Hausdorff space). Specifically, we provide a lower bound on the calmness modulus for semi-infinite programs with a unique optimal solution, which turns out to be the exact modulus when the problem is finitely constrained. We also explore the relationship between calmness of the argmin mapping and the same property for the (sub)level set mapping (with respect to the objective function) for semi-infinite programs, without requiring uniqueness of the nominal solution; this yields an upper bound on the calmness modulus of the argmin mapping. When confined to finitely constrained problems, we also provide a computable upper bound, as it relies only on the nominal data and parameters and does not involve elements in a neighborhood. Illustrative examples are provided.
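For reference, a standard definition (not specific to this paper) of the calmness modulus of a set-valued mapping \(\mathcal{S}\) between metric spaces at a point \((\bar y,\bar x)\in\operatorname{gph}\mathcal{S}\) is

\[
\operatorname{clm}\mathcal{S}(\bar y,\bar x)
=\limsup_{\substack{(y,x)\to(\bar y,\bar x)\\ x\in\mathcal{S}(y)}}
\frac{d\bigl(x,\mathcal{S}(\bar y)\bigr)}{d(y,\bar y)}
\qquad\left(\tfrac{0}{0}:=0\right),
\]

so the mapping is calm at \((\bar y,\bar x)\) precisely when this quantity is finite.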

Relevance: 80.00%

Abstract:

Mathematical programming can be used for the optimal design of shell-and-tube heat exchangers (STHEs). This paper proposes a mixed-integer non-linear programming (MINLP) model for the design of STHEs that rigorously follows the standards of the Tubular Exchanger Manufacturers Association (TEMA), with the Bell–Delaware method used for the shell-side calculations. This approach produces a large, non-convex model that current state-of-the-art solvers cannot solve to global optimality. Instead, a sequential optimization approach over partial objective targets is proposed, dividing the problem into sets of related equations that are easier to solve. For each of these sub-problems, a heuristic objective function is selected based on the physical behaviour of the problem. Even when every sub-problem is solved to global optimality, global optimality of the original problem cannot be guaranteed, but a very good solution is always obtained. Three cases taken from the literature were studied; in all of them, the values obtained with the proposed MINLP model and its multiple objective functions improved on the values reported in the literature.
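The sequential strategy can be pictured with a minimal sketch (hypothetical names; the actual sub-problem split and heuristic objectives depend on TEMA and Bell–Delaware details not reproduced in the abstract):

    # Each sub-problem solver owns a subset of the design variables, uses its
    # own heuristic objective, and sees the values already fixed upstream.
    def sequential_design(subproblem_solvers, initial_design):
        design = dict(initial_design)
        for solve in subproblem_solvers:
            design.update(solve(design))  # fix this stage's variables
        return design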

Relevance: 80.00%

Abstract:

The Remez penalty and smoothing algorithm (RPSALG) is a unified framework for penalty and smoothing methods for solving min-max convex semi-infinite programming problems; its convergence was analysed in a previous paper by three of the authors. In this paper we consider a partial implementation of RPSALG for solving ordinary convex semi-infinite programming problems. Each iteration of RPSALG involves two types of auxiliary optimization problem: the first consists of obtaining an approximate solution of a discretized convex problem, while the second requires solving a non-convex optimization problem in which the parametric constraint is the objective function and the parameter is the variable. We tackle the latter problem with a variant of the cutting angle method called ECAM, a global optimization procedure for solving Lipschitz programming problems. We implement different variants of RPSALG and compare them with the only publicly available SIP solver, NSIPS, on a battery of test problems.
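A rough sketch of one such iteration, for a problem min f(x) s.t. g(x, t) <= 0 for all t in T, might look as follows (hypothetical interfaces; a dense grid search stands in for ECAM's global Lipschitz optimisation):

    import numpy as np
    from scipy.optimize import minimize

    def rpsalg_like_step(f, g, x0, T_grid, active, tol=1e-8):
        # 1) approximately solve the convex problem discretised on `active`
        cons = [{"type": "ineq", "fun": lambda x, t=t: -g(x, t)} for t in active]
        x = minimize(f, x0, constraints=cons).x
        # 2) globally maximise the parametric constraint g(x, .) over the index set
        t_star = max(T_grid, key=lambda t: g(x, t))
        if g(x, t_star) > tol:        # violated index: refine the discretisation
            active.append(t_star)
        return x, active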

Relevance: 80.00%

Abstract:

Due to confidentiality considerations, the microdata available from the 2011 Spanish Census have been codified at a provincial (NUTS 3) level except when the municipal (LAU 2) population exceeds 20,000 inhabitants (a requirement that is met by less than 5% of all municipalities). For the remainder of the municipalities within a given province, information is only provided for their classification in wide population intervals. These limitations, hampering territorially-focused socio-economic analyses, and more specifically, those related to the labour market, are observed in many other countries. This article proposes and demonstrates an automatic procedure aimed at delineating a set of areas that meet such population requirements and that may be used to re-codify the geographic reference in these cases, thereby increasing the territorial detail at which individual information is available. The method aggregates municipalities into clusters based on the optimisation of a relevant objective function subject to a number of statistical constraints, and is implemented using evolutionary computation techniques. Clusters are defined to fit outer boundaries at the level of labour market areas.
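A toy sketch of the evolutionary encoding (hypothetical: one cluster label per municipality, with the population threshold as a hard constraint; the contiguity and labour-market boundary constraints of the real method are omitted):

    import random

    def fitness(labels, populations, objective, min_pop=20_000):
        clusters = {}
        for muni, lab in enumerate(labels):
            clusters.setdefault(lab, []).append(muni)
        # reject partitions with any cluster below the census threshold
        if any(sum(populations[m] for m in c) < min_pop for c in clusters.values()):
            return float("-inf")
        return objective(clusters)

    def mutate(labels, n_clusters, rate=0.02):
        return [random.randrange(n_clusters) if random.random() < rate else lab
                for lab in labels]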

Relevance: 80.00%

Abstract:

Statistical machine translation is a field in high demand, and machines are still far from producing human-quality results. The main method in use is a linear, segment-by-segment translation of a sentence, which prevents changing parts of the sentence that have already been translated. The research in this thesis builds on the approach of Langlais, Patry and Gotti (2007), which attempts to correct a completed translation by modifying segments according to a function to be optimized. First, the exploration of new features, such as an inverse language model and a collocation model, adds a new dimension to the function being optimized. Second, the use of different metaheuristics, such as greedy and randomized greedy algorithms, allows a deeper exploration of the search space and yields a greater improvement of the objective function.
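A sketch of the greedy / randomised-greedy repair loop (hypothetical interfaces: `neighbours` proposes single-segment rewrites of a draft translation, `score` is the feature-based function being optimised):

    import random

    def greedy_repair(draft, neighbours, score, pool_size=1):
        """Improve a draft translation by repeatedly rewriting one segment."""
        current = draft
        while True:
            better = sorted((c for c in neighbours(current)
                             if score(c) > score(current)),
                            key=score, reverse=True)[:pool_size]
            if not better:
                return current            # local optimum of the scoring function
            # pool_size 1 = plain greedy; larger pools = randomised greedy
            current = random.choice(better)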

Relevance: 80.00%

Abstract:

The FANOVA (or “Sobol’-Hoeffding”) decomposition of multivariate functions has been used for high-dimensional model representation and global sensitivity analysis. When the objective function f has no simple analytic form and is costly to evaluate, computing FANOVA terms may be unaffordable due to numerical integration costs. Several approximate approaches relying on Gaussian random field (GRF) models have been proposed to alleviate these costs, where f is substituted by a (kriging) predictor or by conditional simulations. Here we focus on FANOVA decompositions of GRF sample paths, and we notably introduce an associated kernel decomposition into 4^d terms, called KANOVA. An interpretation in terms of tensor product projections is obtained, and it is shown that projected kernels control both the sparsity of GRF sample paths and the dependence structure between FANOVA effects. Applications on simulated data show the relevance of the approach for designing new classes of covariance kernels dedicated to high-dimensional kriging.
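For reference, the Sobol'-Hoeffding (FANOVA) decomposition of \(f\) with respect to a product measure \(\mu=\bigotimes_{k=1}^{d}\mu_k\) reads

\[
f(x)=\sum_{I\subseteq\{1,\dots,d\}} f_I(x_I),
\qquad
\int f_I(x_I)\,\mu_k(\mathrm{d}x_k)=0 \ \text{ for all } k\in I,
\]

where each effect \(f_I\) depends only on the variables indexed by \(I\). The \(4^d\) count can be read as reflecting that a covariance kernel has two arguments, each of which splits two ways per dimension (\(2\times 2\) options in each of the \(d\) dimensions).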

Relevance: 80.00%

Abstract:

A generic method for the estimation of parameters of stochastic ordinary differential equations (SODEs) is introduced and developed. The algorithm, called the GePERs method, uses a genetic optimisation algorithm to minimise a stochastic objective function based on the Kolmogorov-Smirnov (KS) statistic, which is formed from numerical simulations of the model. Some of the factors that improve the precision of the estimates are also examined. The method is used to estimate parameters of diffusion equations and jump-diffusion equations, and is also applied to the problem of model selection for the Queensland electricity market.
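The stochastic objective can be sketched as follows (hypothetical names: `simulate` draws terminal values of the SODE under a parameter vector; SciPy's two-sample KS statistic stands in for the thesis's implementation):

    import numpy as np
    from scipy.stats import ks_2samp

    def ks_objective(theta, simulate, observed, n_paths=1000, seed=0):
        rng = np.random.default_rng(seed)          # common random numbers across theta
        simulated = simulate(theta, n_paths, rng)  # -> 1-D array of outcomes
        return ks_2samp(observed, simulated).statistic

A genetic algorithm then minimises `ks_objective` over `theta`.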

Relevance: 80.00%

Abstract:

The present study examined accumulation of the metal toxins cadmium (Cd) and lead (Pb) in relation to the abundance of cytochrome P450 4F2 (CYP4F2) and CYP2E1 and the concentrations of zinc and copper in liver and kidney samples, using immunoblotting coupled with metal analysis. The post-mortem liver and kidney cortex samples were from 23 males and 8 females aged 3-89 years, all Caucasians who had not been exposed to metals in the workplace. The average kidney cortex Cd load of 17.4 µg/g w.w. was 17 times greater than the average liver Cd load (1.1 µg/g w.w.). In contrast, the average kidney cortex Pb load of 0.09 µg/g w.w. was two times lower than the liver Pb load of 0.19 µg/g w.w. Average Zn and Cu concentrations in the kidney cortex samples were 67% and 33% lower than those in the liver. Liver and kidney Cd loads, but not liver or kidney Pb loads, correlated positively with donors' age. After controlling for liver Cd load, an inverse correlation was seen between Zn and age (partial r = -0.39, P = 0.02), suggesting a reduction in liver Zn levels in old age. Liver CYP2E1 protein abundance correlated with age-adjusted Cd load (partial r = 0.37, P = 0.02), whereas kidney CYP4F2 protein abundance showed a positive correlation with age-adjusted Cd load (partial r = 0.40, P = 0.02). These findings suggest that Cd may be an inducer of renal CYP4F2 and hepatic CYP2E1, and that increased renal CYP4F2 expression may be implicated in Cd-linked renal tubular dysfunction and high blood pressure, via CYP4F2-dependent arachidonic acid metabolism.

Relevance: 80.00%

Abstract:

Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to, and compared with, previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results demonstrate the dynamics of the new algorithm on a set of simple test problems.
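In one standard formulation (assumed here, not quoted from the paper), a non-negative objective \(f\) is treated as an unnormalised target density \(p(x)\propto f(x)\), and the model density \(q_\theta\) follows a stochastic gradient step on the divergence,

\[
D_{\mathrm{KL}}(q_\theta\,\|\,p)=\int q_\theta(x)\log\frac{q_\theta(x)}{p(x)}\,\mathrm{d}x,
\qquad
\theta\leftarrow\theta-\eta\,\widehat{\nabla}_\theta D_{\mathrm{KL}}(q_\theta\,\|\,p),
\]

with the gradient estimate \(\widehat{\nabla}_\theta\) formed from samples drawn from \(q_\theta\), i.e. from the current population.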

Relevance: 80.00%

Abstract:

The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models, where local optima abound. Nevertheless, the method also has advantages, chief among them its model-run efficiency and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (which adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. The present paper presents two algorithmic enhancements to the GML method that retain its strengths but overcome its weaknesses in the face of local optima. In the first, an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality whenever progress of the parameter estimation process is slowed, either by numerical instability incurred through problem ill-posedness or by encountering a local objective function minimum. The second minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. This provides a useful means of inquiring into the well-posedness of a parameter estimation problem and of detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model-run efficiency for the new method.
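The second enhancement can be sketched in a few lines (hypothetical interface: `trajectories` collects the parameter vectors visited by earlier runs; the next start maximises its minimum distance to all of them):

    import numpy as np

    def next_start(trajectories, candidates):
        visited = np.vstack(trajectories)              # (n_visited, n_params)
        def min_dist(p):
            return np.linalg.norm(visited - p, axis=1).min()
        return max(candidates, key=min_dist)           # maximally removed point

Candidates might be drawn uniformly within the parameter bounds before each restart.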

Relevance: 80.00%

Abstract:

In recent years, the cross-entropy method has been successfully applied to a wide range of discrete optimization tasks. In this paper we consider the cross-entropy method in the context of continuous optimization. We demonstrate the effectiveness of the cross-entropy method for solving difficult continuous multi-extremal optimization problems, including those with non-linear constraints.
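A minimal textbook sketch of the cross-entropy method for continuous minimisation (generic; not the paper's exact algorithm, tuning or constraint handling):

    import numpy as np

    def cross_entropy_min(f, mu, sigma, n=100, elite_frac=0.1, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        n_elite = max(1, int(n * elite_frac))
        for _ in range(iters):
            X = rng.normal(mu, sigma, size=(n, len(mu)))     # sample population
            elite = X[np.argsort([f(x) for x in X])[:n_elite]]
            mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
        return mu

    # e.g. cross_entropy_min(lambda x: float((x**2).sum()), np.zeros(5), np.ones(5))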

Relevance: 80.00%

Abstract:

A new approach to optimisation is introduced based on a precise probabilistic statement of what is ideally required of an optimisation method. It is convenient to express the formalism in terms of the control of a stationary environment. This leads to an objective function for the controller which unifies the objectives of exploration and exploitation, thereby providing a quantitative principle for managing this trade-off. This is demonstrated using a variant of the multi-armed bandit problem. This approach opens new possibilities for optimisation algorithms, particularly by using neural network or other adaptive methods for the adaptive controller. It also opens possibilities for deepening understanding of existing methods. The realisation of these possibilities requires research into practical approximations of the exact formalism.
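The paper's exact controller objective is not reproduced in the abstract; as a familiar reference point for a quantitative exploration-exploitation trade-off on a multi-armed bandit, a Thompson-sampling controller can be sketched as:

    import numpy as np

    def thompson_bernoulli(pull, n_arms, horizon, seed=0):
        rng = np.random.default_rng(seed)
        wins, losses = np.ones(n_arms), np.ones(n_arms)   # Beta(1,1) priors
        for _ in range(horizon):
            arm = int(np.argmax(rng.beta(wins, losses)))  # sample beliefs, act greedily
            reward = pull(arm)                            # observe 0 or 1
            wins[arm] += reward
            losses[arm] += 1 - reward
        return wins / (wins + losses)                     # posterior means per arm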

Relevance: 80.00%

Abstract:

This work follows a feasibility study (187) which suggested that a process for purifying wet-process phosphoric acid by solvent extraction should be economically viable. The work was divided into two main areas: (i) chemical and physical measurements on the three-phase system, with or without impurities; and (ii) process simulation and optimization. The objective was to test the process technically and economically and to optimise the choice of solvent. The chemical equilibria and distribution curves for the system water - phosphoric acid - solvent were determined for the solvents n-amyl alcohol, tri-n-butyl phosphate, di-isopropyl ether and methyl isobutyl ketone. Both pure phosphoric acid and acid containing known amounts of naturally occurring impurities (FePO4, AlPO4, Ca3(PO4)2 and Mg3(PO4)2) were examined. The hydrodynamic characteristics of the systems were also studied. The experimental results obtained for drop size distribution were compared with those obtainable from Hinze's equation (32), and it was found that they deviated by an amount related to the turbulence. A comprehensive literature survey on the purification of wet-process phosphoric acid by organic solvents was made; the literature on solvent extraction fundamentals, equipment and optimization methods for the envisaged process was also reviewed. A modified form of the Kremser-Brown and Souders equation for calculating the number of contact stages was derived; the modification takes into account the special nature of the phosphoric acid distribution curves in the studied systems. The process flow-sheet was developed and simulated. Powell's direct search optimization method was selected, in conjunction with the linear search algorithm of Davies, Swann and Campey. The objective function was defined as the total annual manufacturing cost, and the program was employed to find the optimum operating conditions for any one of the chosen solvents. The final results demonstrated the following order of feasibility for purifying wet-process acid: di-isopropyl ether, methyl isobutyl ketone, n-amyl alcohol and tri-n-butyl phosphate.
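A hedged sketch of the final optimisation step (hypothetical `annual_cost` model; SciPy's Powell implementation stands in for the thesis's Powell routine combined with the Davies, Swann and Campey line search):

    from scipy.optimize import minimize

    def optimise_operating_conditions(annual_cost, x0, bounds):
        # derivative-free direct search over the operating variables
        res = minimize(annual_cost, x0, method="Powell", bounds=bounds)
        return res.x, res.fun   # optimum conditions, total annual manufacturing cost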