202 results for optimization under uncertainty


Relevance:

80.00%

Abstract:

This paper presents new insights and novel algorithms for strategy selection in sequential decision making with partially ordered preferences; that is, where some strategies may be incomparable with respect to expected utility. We assume that incomparability amongst strategies is caused by indeterminacy/imprecision in probability values. We investigate six criteria for consequentialist strategy selection: Gamma-Maximin, Gamma-Maximax, Gamma-Maximix, Interval Dominance, Maximality and E-admissibility. We focus on the popular decision tree and influence diagram representations. The algorithms rely on linear/multilinear programming; we describe their implementation and report experiments. (C) 2010 Elsevier B.V. All rights reserved.
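
As a reading aid, the criteria can be made concrete on a tiny finite example. The Python sketch below is not the paper's algorithm (the article works over decision trees and influence diagrams via linear/multilinear programming): the credal set is reduced to a hypothetical finite list of distributions, strategies to rows of a utility matrix, and Gamma-Maximix (a Hurwicz-style mixture) is omitted for brevity.

```python
# Minimal sketch of four selection criteria under imprecise probabilities.
import numpy as np

U = np.array([[10.0, 0.0],    # utility of each strategy (rows)
              [ 6.0, 4.0],    # in each state of nature (columns)
              [ 5.0, 5.0]])
credal = [np.array([0.3, 0.7]),   # hypothetical extreme points of the credal set
          np.array([0.6, 0.4])]

# expected utility of every strategy under every distribution in the credal set
EU = np.array([[u @ p for p in credal] for u in U])

gamma_maximin = int(np.argmax(EU.min(axis=1)))  # best worst-case expected utility
gamma_maximax = int(np.argmax(EU.max(axis=1)))  # best best-case expected utility

# Interval Dominance: discard s if some s' has min EU above s's max EU
lo, hi = EU.min(axis=1), EU.max(axis=1)
id_adm = [i for i in range(len(U)) if not any(lo[j] > hi[i] for j in range(len(U)))]

# Maximality: keep s unless some s' beats it under every distribution
maximal = [i for i in range(len(U))
           if not any(np.all(EU[j] > EU[i]) for j in range(len(U)))]

# E-admissibility: keep s if it maximizes EU for at least one distribution
e_adm = sorted({int(np.argmax(EU[:, k])) for k in range(len(credal))})

print(gamma_maximin, gamma_maximax, id_adm, maximal, e_adm)
```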

Relevance:

50.00%

Abstract:

This paper presents results of research related to multicriteria decision making under information uncertainty. The Bellman-Zadeh approach to decision making in a fuzzy environment is utilized for analyzing multicriteria optimization models (<X, M> models) under deterministic information. Its application conforms to the principle of guaranteed result and provides constructive lines in obtaining harmonious solutions on the basis of analyzing associated maxmin problems. This circumstance permits one to generalize the classic approach to considering the uncertainty of quantitative information (based on constructing and analyzing payoff matrices reflecting effects which can be obtained for different combinations of solution alternatives and the so-called states of nature) from single-criterion decision making to multicriteria problems. Considering that the uncertainty of information can produce considerable decision uncertainty regions, the resolving capacity of this generalization does not always permit one to obtain unique solutions. Taking this into account, the proposed general scheme of multicriteria decision making under information uncertainty also includes the construction and analysis of so-called <X, R> models (which contain fuzzy preference relations as criteria of optimality) as a means for the subsequent contraction of the decision uncertainty regions. The results are of a universal character and are illustrated by a simple example. (c) 2007 Elsevier Inc. All rights reserved.
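
A minimal Python sketch of the Bellman-Zadeh maxmin step only (not the paper's full <X, M>/<X, R> scheme, and with hypothetical criteria): each objective is normalized into a membership function, the fuzzy decision is their intersection, and the harmonious solution maximizes the minimum membership, in line with the principle of guaranteed result.

```python
# Maxmin aggregation of normalized objectives (Bellman-Zadeh fuzzy decision).
import numpy as np

X = np.linspace(0.0, 1.0, 201)      # discretized alternatives
f1 = 3.0 * X - X**2                 # hypothetical criterion to maximize
f2 = (X - 0.8)**2                   # hypothetical criterion to minimize

mu1 = (f1 - f1.min()) / (f1.max() - f1.min())   # membership of "f1 is large"
mu2 = (f2.max() - f2) / (f2.max() - f2.min())   # membership of "f2 is small"

d = np.minimum(mu1, mu2)            # fuzzy decision = intersection of the goals
x_star = X[np.argmax(d)]            # maxmin (guaranteed-result) solution
print(x_star, d.max())
```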

Relevance:

40.00%

Abstract:

The structural engineering community in Brazil faces new challenges with the recent occurrence of high-intensity tornados. Satellite surveillance data show that the area covering the south-east of Brazil, Uruguay and part of Argentina is one of the world's most tornado-prone areas, second only to the infamous tornado alley in the central United States. The design of structures subject to tornado winds is a typical example of decision making in the presence of uncertainty. Structural design involves finding a good balance between the competing goals of safety and economy. This paper presents a methodology to find the optimum balance between these goals in the presence of uncertainty. Reliability-based risk optimization is used to find the optimal safety coefficient that minimizes the total expected cost of a steel frame communications tower subject to extreme storm and tornado wind loads. The technique is not new, but it is applied to a practical problem of increasing interest to Brazilian structural engineers. The problem is formulated in the partial safety factor format used in current design codes, with an additional partial factor introduced to serve as the optimization variable. The expected cost of failure (or risk) is defined as the product of a limit state exceedance probability by a limit state exceedance cost. These costs include the costs of repairing, rebuilding, and paying compensation for injury and loss of life. The total expected failure cost is the sum of the individual expected costs over all failure modes. The steel frame communications tower studied here has become very common in Brazil due to increasing mobile phone coverage. The study shows that optimum reliability is strongly dependent on the cost (or consequences) of failure. Since failure consequences depend on the actual tower location, it turns out that different optimum designs should be used in different locations. Failure consequences are also different for the different parties involved in the design, construction and operation of the tower. Hence, it is important that risk is well understood by the parties involved, so that proper contracts can be made. The investigation shows that when non-structural terms dominate design costs (e.g., in residential or office buildings), it is not too costly to over-design; this observation agrees with observed practice for non-optimized structural systems. In this situation, it is much easier to lose money by under-design. When structural material cost is a significant part of design cost (e.g., a concrete dam or bridge), one is likely to lose significant money by over-design. In this situation, a cost-risk-benefit optimization analysis is highly recommended. Finally, the study also shows that under time-varying loads like tornados, the optimum reliability is strongly dependent on the selected design life.
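
The cost trade-off described above can be sketched in a few lines. The Python fragment below is an illustrative stand-in, not the paper's tower model: the load distribution, costs, and resistance law are all hypothetical; it only shows how an optimal safety factor emerges from minimizing construction cost plus expected failure cost.

```python
# Reliability-based risk optimization of a single safety factor (toy numbers).
import numpy as np
from scipy import stats, optimize

S = stats.lognorm(s=0.4, scale=100.0)   # hypothetical extreme wind load effect
c0, c_fail = 1.0, 200.0                 # construction cost scale, failure cost

def total_cost(lmbda):
    R = 150.0 * lmbda                   # design resistance grows with safety factor
    pf = S.sf(R)                        # failure probability P(S > R)
    return c0 * lmbda + c_fail * pf     # material cost + expected failure cost

res = optimize.minimize_scalar(total_cost, bounds=(0.8, 3.0), method="bounded")
print(res.x, total_cost(res.x))
```

Raising c_fail (the consequences of failure) pushes the optimum safety factor up, which is the location-dependence effect the abstract reports.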

Relevance:

40.00%

Abstract:

The present paper proposes a flexible consensus scheme for group decision making, which allows one to obtain a consistent collective opinion from information provided by each expert in terms of multigranular fuzzy estimates. It is based on a linguistic hierarchical model with multigranular sets of linguistic terms, and the choice of the most suitable set is a prerogative of each expert. From the human viewpoint, using such a model is advantageous, since it permits each expert to utilize linguistic terms that reflect more adequately the level of uncertainty intrinsic to his evaluation. From the operational viewpoint, the advantage of using such a model lies in the fact that it allows one to express the linguistic information in a unique domain, without loss of information, during the discussion process. The proposed consensus scheme supposes that the moderator can interfere in the discussion process in different ways: the intervention can be a request to any expert to update his opinion, or it can be the adjustment of the weight of each expert's opinion. An optimal adjustment can be achieved through the execution of an optimization procedure that searches for the weights that maximize a corresponding soft consensus index. In order to demonstrate the usefulness of the presented consensus scheme, a technique for multicriteria analysis, based on fuzzy preference relation modeling, is utilized for solving a hypothetical enterprise strategy planning problem, generated with the use of the Balanced Scorecard methodology. (C) 2009 Elsevier Inc. All rights reserved.
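
The weight-adjustment intervention can be illustrated with a small optimization. The sketch below uses a stand-in dissent measure, not the paper's soft consensus index, and hypothetical opinion vectors assumed to be already unified into a single linguistic domain; it searches the weight simplex for the expert weights that maximize agreement with the weighted collective opinion.

```python
# Choosing expert weights that maximize a (stand-in) consensus measure.
import numpy as np
from scipy.optimize import minimize

O = np.array([[0.7, 0.2, 0.5],   # hypothetical opinions: experts x alternatives
              [0.6, 0.3, 0.4],
              [0.1, 0.9, 0.8]])

def neg_consensus(w):
    collective = w @ O                           # weighted collective opinion
    # weighted total dissent; minimizing it maximizes consensus
    return np.sum(w * np.linalg.norm(O - collective, axis=1))

n = O.shape[0]
res = minimize(neg_consensus, np.full(n, 1.0 / n),
               bounds=[(0.05, 1.0)] * n,         # keep every expert minimally heard
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print(res.x)
```

The lower bound of 0.05 is a design choice: without it the optimizer would trivially silence dissenting experts, which a moderator would normally not accept.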

Relevance:

40.00%

Abstract:

This paper addresses the one-dimensional cutting stock problem when demand is a random variable. The problem is formulated as a two-stage stochastic nonlinear program with recourse. The first-stage decision variables are the numbers of objects to be cut according to each cutting pattern. The second-stage decision variables are the numbers of items held or backordered as a consequence of the first-stage decisions. The objective is to minimize the total expected cost incurred in both stages, due to waste and to holding or backordering penalties. A simplex-based method with column generation is proposed for solving a linear relaxation of the resulting optimization problem. The proposed method is evaluated by using two well-known measures of the effects of uncertainty in stochastic programming: the value of the stochastic solution (VSS) and the expected value of perfect information (EVPI). The optimal two-stage solution is shown to be more effective than the alternative wait-and-see and expected-value approaches, even under small variations in the parameters of the problem.
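
The two measures are easy to state on a toy instance. The single-item Python sketch below (the paper solves the multi-pattern LP relaxation by column generation; all numbers here are hypothetical) computes the here-and-now optimum (RP), the wait-and-see bound (WS), and the expected result of the expected-value solution (EEV), giving EVPI = RP - WS and VSS = EEV - RP for a minimization problem.

```python
# Two-stage recourse with random demand, plus the EVPI and VSS measures.
yield_per_obj, c, h, b = 5, 3.0, 0.5, 2.0        # items per object, costs
scenarios = [(40, 0.5), (100, 0.5)]              # (demand, probability)

def second_stage(x, d):
    items = yield_per_obj * x
    return h * max(items - d, 0) + b * max(d - items, 0)   # holding / backorder

def expected_cost(x):
    return c * x + sum(p * second_stage(x, d) for d, p in scenarios)

X = range(0, 30)
RP = min(expected_cost(x) for x in X)                      # here-and-now optimum
WS = sum(p * min(c * x + second_stage(x, d) for x in X)    # wait-and-see bound
         for d, p in scenarios)
d_mean = sum(p * d for d, p in scenarios)
x_ev = min(X, key=lambda x: c * x + second_stage(x, d_mean))  # expected-value sol.
EEV = expected_cost(x_ev)
print("EVPI =", RP - WS, "VSS =", EEV - RP)
```

A strictly positive VSS, as in this instance, is exactly the abstract's claim that the two-stage solution beats the expected-value approach.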

Relevance:

30.00%

Abstract:

Blends of milk fat and canola oil (MF:CNO) were enzymatically interesterified (EIE) by Rhizopus oryzae lipase immobilized on a polysiloxane-polyvinyl alcohol (SiO2-PVA) composite, in a solvent-free system. A central composite design (CCD) was used to optimize the reaction, considering the effects of different mass fractions of binary MF:CNO blends (50:50, 65:35 and 80:20) and temperatures (45, 55 and 65 degrees C) on the composition and texture properties of the interesterified products, taking the interesterification degree (ID) and the consistency (at 10 degrees C) as response variables. For the ID both the mass fraction of milk fat in the blend and the temperature were found to be significant, while for the consistency only the mass fraction of milk fat was significant. Empirical models for ID and consistency were obtained that allowed establishing the best interesterification conditions: a blend with 65 % milk fat and 35 % canola oil, at a temperature of 45 degrees C. Under these conditions, the ID was 19.77 % and the consistency at 10 degrees C was 56 290 Pa. The potential of this eco-friendly process demonstrated that a product could be obtained with the desirable milk fat flavour and better spreadability under refrigerated conditions.
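
For readers unfamiliar with central composite designs: the empirical models mentioned above are quadratic response surfaces fitted by least squares. A minimal Python sketch with invented coded factors and responses, not the paper's data:

```python
# Fitting a quadratic response surface to CCD-style data (hypothetical values).
import numpy as np

# coded factors: x1 = milk fat fraction, x2 = temperature (scaled to [-1, 1])
x1 = np.array([-1, -1, 1, 1, 0, 0, 0])
x2 = np.array([-1, 1, -1, 1, 0, -1, 1])
y  = np.array([14.2, 11.8, 21.5, 17.9, 19.0, 20.1, 16.4])  # hypothetical ID (%)

# columns: intercept, linear, interaction, and quadratic terms
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # b0, b1, b2, b12, b11, b22
print(coef)
```

The fitted surface is then optimized (here it could simply be evaluated on a grid) to locate the best operating point, mirroring the 65:35 blend at 45 degrees C reported above.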

Relevance:

30.00%

Abstract:

Background: Mites (Acari) have traditionally been treated as monophyletic, albeit composed of two major lineages: Acariformes and Parasitiformes. Yet recent studies based on morphology, molecular data, or combinations thereof have increasingly drawn their monophyly into question. Furthermore, the usually basal (molecular) position of one or both mite lineages among the chelicerates is in conflict with their morphology, and with the widely accepted view that mites are close relatives of Ricinulei. Results: The phylogenetic position of the acariform mites is examined by employing SSU and partial LSU sequences, and morphology, from 91 extant chelicerate terminals (forty Acariformes). In a static homology framework, molecular sequences were aligned using their secondary structure as a guide, whereby regions of ambiguous alignment were discarded, and the pre-aligned sequences were analyzed under parsimony and different mixed models in a Bayesian inference. Parsimony and Bayesian analyses led to trees largely congruent concerning infraordinal, well-supported branches, but with low support for interordinal relationships. An exception is Solifugae + Acariformes (posterior probability = 100%, jackknife = 0.91). In a dynamic homology framework, two analyses were run: a standard POY analysis and an analysis constrained by secondary structure. Both analyses led to largely congruent trees, supporting a (Palpigradi (Solifugae Acariformes)) clade and Ricinulei as sister group of Tetrapulmonata with the topology (Ricinulei (Amblypygi (Uropygi Araneae))). Combined analyses with two different morphological data matrices were run in order to evaluate the impact of constraining the analysis on the recovered topology when employing secondary structure as a guide for homology establishment. The constrained combined analysis yielded, for both morphological matrices, topologies similar to the exclusively molecular analysis, except for the recovery of Pedipalpi instead of the (Uropygi Araneae) clade. The standard (direct optimization) POY analysis, however, led to the recovery of trees differing in the absence of the otherwise well-supported group Solifugae + Acariformes. Conclusions: Previous studies combining ribosomal sequences and morphology often recovered topologies similar to purely morphological analyses of Chelicerata. The apparent stability of certain clades not recovered here, like Haplocnemata and Acari, is regarded as a byproduct of the way molecular homology was previously established using the instrumentalist approach implemented in POY. Constraining the analysis by a priori homology assessment is defended here as a way of maintaining the severity of the test when adding new data to the analysis. Although the strength of the method advocated here is keeping phylogenetic information from regions usually discarded in an exclusively static homology framework, it still has the inconvenience of being uninformative on the effect of alignment ambiguity on resampling methods of clade support estimation. Finally, putative morphological apomorphies of Solifugae + Acariformes are the reduction of the proximal cheliceral podomere, medial abutting of the leg coxae, loss of the sperm nuclear membrane, and the presence of differentiated germinative and secretory regions in the testis delivering their products into a common lumen.

Relevance:

30.00%

Abstract:

MgB2 is considered to be an important conductor for applications. Optimizing flux pinning in these conductors can improve their critical currents. Doping can influence flux pinning efficiency and grain connectivity, and also affect the resistivity, upper critical field and critical temperature. This study was designed to attempt the doping of MgB2 on the Mg sites with metal diborides using high-energy ball milling. MgB2 samples were prepared by milling pre-reacted MgB2 and TaB2 powders using a Spex 8000M mill with WC jars and balls in a nitrogen-filled glove box. The mixing concentration in (Mg(1-x)Ta(x))B2 was up to x = 0.10. Samples were removed from the WC jars after milling times of up to 4000 minutes and formed into pellets using cold isostatic pressing. The pellets were heat treated in a hot isostatic press (HIP) at 1000 degrees C under a pressure of 30 kpsi for 24 hours. The influence that milling time and TaB2 addition had on the microstructure and the resulting superconducting properties of TaB2-added MgB2 is discussed. Improvement of J_c at high magnetic fields and of pinning could be obtained in milled samples with added TaB2. The sample with 5 at.% TaB2 added and milled for 300 minutes showed values of J_c of approximately 7 x 10^5 A/cm^2 and F_p of approximately 14 GN/m^3 at 2 T and 4.2 K. The milled and TaB2-mixed samples showed higher values of mu_0 H_irr than the unmilled, unmixed sample.

Relevance:

30.00%

Abstract:

Here, we study the stable integration of real time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic programming problem whose objective is to compute reachable targets for the MPC layer that lie at the minimum distance to the optimum set points produced by the RTO layer. The lower layer is an infinite-horizon MPC with guaranteed stability, with additional constraints that force the feasibility and convergence of the target calculation layer. We also consider the case in which there is polytopic uncertainty in the steady-state model used in the target calculation. The dynamic part of the MPC model is also considered unknown, but it is assumed to be represented by one member of a discrete set of models. The efficiency of the methods presented here is illustrated with the simulation of a low-order system. (C) 2010 Elsevier Ltd. All rights reserved.
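
The intermediate layer can be pictured as a constrained least-squares projection. The sketch below is a simplified stand-in (hypothetical gain matrix and bounds; the paper's formulation additionally handles polytopic model uncertainty and the MPC feasibility constraints): it projects the RTO set points onto the steady states reachable under input bounds.

```python
# Target calculation as bounded least squares: min ||G u - y_rto||^2, u in bounds.
import numpy as np
from scipy.optimize import lsq_linear

G = np.array([[1.2, -0.4],          # steady-state gain model: y_ss = G @ u_ss
              [0.3,  0.9]])
y_rto = np.array([2.0, 1.5])        # optimum set points from the RTO layer

res = lsq_linear(G, y_rto, bounds=(np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
u_target = res.x
y_target = G @ u_target             # reachable targets passed down to the MPC layer
print(u_target, y_target)
```

With these numbers the unconstrained solution violates the input bounds, so the projected target lands on the boundary; this is exactly the situation the intermediate layer exists to resolve.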

Relevance:

30.00%

Abstract:

This work characterizes the analog performance of SOI n-MuGFETs with HfSiO gate dielectric and TiN metal gate with respect to the influence of high-k post-nitridation, TiN thickness, and device rotation. A thinner TiN metal gate is found favorable for improved analog characteristics, showing an increase in intrinsic voltage gain. The devices where the high-k material was subjected to a nitridation step showed degraded Early voltage (V_EA) values, which resulted in a lower voltage gain. The 45-degree rotated devices have a smaller V_EA than the standard ones when a HfSiO dielectric is used. However, the higher transconductance of these devices, due to the increased mobility in the (1 0 0) sidewall orientation, compensates for this V_EA degradation, keeping the voltage gain nearly equal to that of the standard devices. (C) 2011 Elsevier Ltd. All rights reserved.
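
The compensation mechanism follows from the standard small-signal relations for the intrinsic voltage gain (textbook notation, not taken from the paper):

```latex
% Intrinsic gain, output conductance, and Early voltage:
A_V = \frac{g_m}{g_{ds}}, \qquad
g_{ds} \approx \frac{I_{DS}}{V_{EA}}
\quad\Longrightarrow\quad
A_V \approx \frac{g_m}{I_{DS}}\, V_{EA}
```

So at a fixed bias current a drop in V_EA can be offset by a proportional rise in g_m, as observed for the rotated devices.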

Relevance:

30.00%

Abstract:

The optimization of the treatment process for wastewater from a brewery operating with a combination of an anaerobic reactor and activated sludge was studied in two phases. In the first stage, lasting six months, the characteristics and parameters of the plant operation were analyzed; during this stage a diversion rate of more than 50% to aerobic treatment, the use of two aeration tanks and a high sludge production prevailed. The second stage comprised four months during which the system worked under the proposed operational model, with the aim of improving the treatment: reduction of the diversion rate to 30% and the use of only one aeration tank. At each stage, TSS, VSS and COD were measured at the entrance and exit of the anaerobic reactor and of the aeration tanks. The results were compared with the corresponding design specifications, and the conditions needed to reduce the diversion rate towards the aerobic process were applied, by monitoring the volume and concentration of the influent and applying the strategic changes in reactor parameters needed to increase its efficiency. A diversion reduction from 53 to 34% was achieved, reducing the sludge discharge generated in the aerobic system from 3670 mg TSS/l with two aeration tanks down to 2947 mg TSS/l using one tank, keeping the same VSS:TSS ratio (0.55) and a total removal efficiency of 98% in terms of COD.

Relevance:

30.00%

Abstract:

This work deals with a procedure for model re-identification of a process in closed loop with an already existing commercial MPC. The controller considered here has a two-layer structure, where the upper layer performs a target calculation based on a simplified steady-state optimization of the process. A methodology is proposed here in which a test signal is introduced through a tuning parameter of the target calculation layer. When the outputs are controlled inside zones instead of at fixed set points, the approach allows the continuous operation of the process without an excessive disruption of the operating objectives, as process constraints and product specifications remain satisfied during the identification test. The application of the method is illustrated through the simulation of two processes of the oil refining industry. (c) 2008 Elsevier Ltd. All rights reserved.
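
A minimal sketch of closed-loop re-identification with an external dither (not the paper's method of perturbing a tuning parameter of the target-calculation layer; the plant, controller and signals below are all hypothetical): a small PRBS-like excitation is added to the input while closed-loop data are recorded, and an FIR model is re-estimated by least squares.

```python
# Re-identifying an FIR model from dithered closed-loop data (toy example).
import numpy as np

rng = np.random.default_rng(0)
N, n_fir = 500, 20
dither = 0.2 * np.sign(rng.standard_normal(N))   # excitation kept small

# hypothetical plant in closed loop: simplistic feedback plus the test signal
u = np.zeros(N); y = np.zeros(N)
for k in range(1, N):
    u[k] = -0.5 * y[k - 1] + dither[k]
    y[k] = 0.9 * y[k - 1] + 0.3 * u[k - 1] + 0.01 * rng.standard_normal()

# least-squares FIR re-identification from the recorded data
Phi = np.column_stack([u[n_fir - i - 1 : N - i - 1] for i in range(n_fir)])
theta, *_ = np.linalg.lstsq(Phi, y[n_fir:], rcond=None)
print(theta[:5])                                 # leading impulse-response weights
```

Keeping the excitation small is the analogue of the paper's goal: the identification test should not disrupt operating objectives.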

Relevance:

30.00%

Abstract:

This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make an inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy-sector and CO2-price behavior are similar in both versions (in the recursive version of the model we force the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and fewer explicit technological options, likely have effects on the results. Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy, where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented. (C) 2009 Elsevier B.V. All rights reserved.
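
The efficiency condition imposed on the recursive version is the standard Hotelling-type arbitrage rule for banked allowances; in the notation below (ours, not the paper's), a cost-minimizing allocation of abatement over time requires the allowance price to grow at the interest rate r:

```latex
p_t = p_0\,(1+r)^t
\quad\Longleftrightarrow\quad
\frac{p_{t+1}-p_t}{p_t} = r
```

Otherwise, shifting a unit of abatement from a low-price period to a high-price period would lower total discounted cost.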

Relevance:

30.00%

Abstract:

A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.
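
For orientation, a minimal first-order PHR augmented Lagrangian loop for inequality constraints looks as follows; this toy Python sketch omits the paper's second-order, negative-curvature inner solver and its constraint-qualification machinery, and uses a hypothetical test problem.

```python
# First-order PHR augmented Lagrangian for min f(x) s.t. g(x) <= 0 (toy problem).
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2   # objective
g = lambda x: np.array([x[0] + x[1] - 2.0])       # constraint g(x) <= 0

rho, lam, x = 10.0, np.zeros(1), np.zeros(2)
for _ in range(20):
    # PHR augmented Lagrangian for inequality constraints
    L = lambda z: f(z) + np.sum(np.maximum(0.0, lam + rho * g(z))**2 - lam**2) / (2 * rho)
    x = minimize(L, x, method="BFGS").x           # inner (sub)problem
    lam = np.maximum(0.0, lam + rho * g(x))       # multiplier update
print(x, lam)
```

On this problem the iterates converge to (1.5, 0.5) with multiplier 1; the paper's contribution is replacing the smooth inner solve with a box-constrained negative-curvature method so that second-order stationary points are reached.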

Relevance:

30.00%

Abstract:

When modeling real-world decision-theoretic planning problems in the Markov Decision Process (MDP) framework, it is often impossible to obtain a completely accurate estimate of transition probabilities. For example, natural uncertainty arises in the transition specification due to elicitation of MDP transition models from an expert or estimation from data, or from non-stationary transition distributions arising from insufficient state knowledge. In the interest of obtaining the most robust policy under transition uncertainty, the Markov Decision Process with Imprecise Transition Probabilities (MDP-IP) has been introduced to model such scenarios. Unfortunately, while various solution algorithms exist for MDP-IPs, they often require external calls to optimization routines and thus can be extremely time-consuming in practice. To address this deficiency, we introduce the factored MDP-IP and propose efficient dynamic programming methods to exploit its structure. Noting that the key computational bottleneck in the solution of factored MDP-IPs is the need to repeatedly solve nonlinear constrained optimization problems, we show how to target approximation techniques to drastically reduce the computational overhead of the nonlinear solver while producing bounded, approximately optimal solutions. Our results show up to two orders of magnitude speedup in comparison to traditional "flat" dynamic programming approaches, and up to an order of magnitude speedup over the extension of factored MDP approximate value iteration techniques to MDP-IPs, while producing the lowest error of any approximation algorithm evaluated. (C) 2011 Elsevier B.V. All rights reserved.
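
A minimal flat (non-factored) MDP-IP value iteration conveys the computational bottleneck: every Bellman backup makes an external call to an optimization routine for nature's minimization. The Python sketch below uses interval transition bounds so that the inner step is a small LP (hypothetical numbers; the paper's factored algorithms are designed precisely to cut the cost of these repeated calls).

```python
# Robust (maximin) value iteration for a tiny interval MDP-IP.
import numpy as np
from scipy.optimize import linprog

nS, nA, gamma = 2, 2, 0.9
R = np.array([[1.0, 0.0], [0.0, 2.0]])   # hypothetical reward R[s, a]
# interval credal sets: lower/upper bounds on P(s' | s, a)
P_lo = np.full((nS, nA, nS), 0.2)
P_hi = np.full((nS, nA, nS), 0.8)

V = np.zeros(nS)
for _ in range(200):
    Q = np.empty((nS, nA))
    for s in range(nS):
        for a in range(nA):
            # nature minimizes expected future value within the credal set
            res = linprog(c=V, A_eq=np.ones((1, nS)), b_eq=[1.0],
                          bounds=list(zip(P_lo[s, a], P_hi[s, a])))
            Q[s, a] = R[s, a] + gamma * res.fun
    V = Q.max(axis=1)                    # agent maximizes: maximin policy
print(V)
```

Each sweep here solves nS x nA LPs; in a factored state space that count explodes, which is the deficiency the paper's approximation techniques address.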