992 results for scenario uncertainty


Relevance:

30.00%

Publisher:

Abstract:

In this work, we analyze the effect of incorporating life cycle inventory (LCI) uncertainty into the multi-objective optimization of chemical supply chains (SC), considering their economic and environmental performance simultaneously. To this end, we present a stochastic multi-scenario mixed-integer linear programming (MILP) model coupled with a two-step transformation scenario generation algorithm, whose unique feature is that it provides scenarios in which the LCI random variables are correlated and each has the desired lognormal marginal distribution. The environmental performance is quantified following life cycle assessment (LCA) principles, which are represented in the model formulation through standard algebraic equations. The capabilities of our approach are illustrated through a case study of a petrochemical supply chain. We show that the stochastic solution improves the economic performance of the SC over the deterministic one at every level of environmental impact and, moreover, that correlating the environmental burdens provides more realistic scenarios for the decision-making process.
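The two-step transformation can be sketched for a two-variable case: first draw correlated standard normals via a Cholesky factor (step one), then map each through its lognormal transform (step two), so the marginals are exactly lognormal while the underlying correlation is preserved. This is a minimal illustration of the idea, not the paper's algorithm; the function name and parameters are invented.

```python
import math
import random

def correlated_lognormals(n, rho, mus, sigmas, seed=0):
    """Generate n pairs of correlated lognormal draws.

    Step 1: sample correlated standard normals (z1, z2) using the
            2x2 Cholesky factor of [[1, rho], [rho, 1]].
    Step 2: push each through exp(mu + sigma * z), which yields a
            lognormal marginal with the desired parameters."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        out.append((math.exp(mus[0] + sigmas[0] * z1),
                    math.exp(mus[1] + sigmas[1] * z2)))
    return out
```

With enough samples, the correlation of the log-scenarios recovers the target `rho`, while every draw stays strictly positive, as an LCI burden must.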

Relevance:

30.00%

Publisher:

Abstract:

Participants in contingent valuation studies may be uncertain about a number of aspects of the policy and survey context. The uncertainty management model of fairness judgments states that individuals will evaluate a policy in terms of its fairness when they do not know whether they can trust the relevant managing authority, or when they experience uncertainty due to insufficient knowledge of the general issues surrounding the environmental policy. Similarly, some researchers have suggested that participants who do not know how to answer willingness-to-pay (WTP) questions convey their general attitudes toward the public good rather than report well-defined economic preferences. These contentions were investigated in a sample of 840 residents in four urban catchments across Australia who were interviewed about their WTP for stormwater pollution abatement. Four sources of uncertainty were measured: amount of prior issue-related thought, trustworthiness of the water authority, insufficient scenario information, and WTP response uncertainty. A logistic regression model was estimated in each subsample to test the main effects of the uncertainty sources on WTP as well as their interactions with fairness and proenvironmental attitudes. Results supported the uncertainty management model in only one of the four samples. Similarly, proenvironmental attitudes rarely interacted significantly with uncertainty, and did so in ways that were more complex than hypothesised. It was concluded that uncertain individuals were generally no more likely than other participants to draw on either fairness evaluations or proenvironmental attitudes when deciding about paying for stormwater pollution abatement.
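A minimal sketch of the kind of model estimated here: a logistic regression of a binary WTP response on an uncertainty score, an attitude score, and their interaction, fitted by plain gradient descent. The synthetic data, variable names, and coefficients are invented for illustration and do not reproduce the study's survey measures.

```python
import math
import random

def fit_logit(rows, lr=0.5, steps=400):
    """Gradient-descent logistic regression with an interaction term.
    rows: (uncertainty, attitude, y) triples; returns coefficients
    [intercept, b_uncertainty, b_attitude, b_interaction]."""
    beta = [0.0, 0.0, 0.0, 0.0]
    feats = [(1.0, u, a, u * a) for u, a, _ in rows]
    ys = [y for _, _, y in rows]
    n = len(rows)
    for _ in range(steps):
        grad = [0.0] * 4
        for f, y in zip(feats, ys):
            p = 1.0 / (1.0 + math.exp(-sum(b * x for b, x in zip(beta, f))))
            for j in range(4):
                grad[j] += (y - p) * f[j] / n   # ascent on log-likelihood
        beta = [b + lr * g for b, g in zip(beta, grad)]
    return beta

def nll(rows, beta):
    """Average negative log-likelihood of a coefficient vector."""
    total = 0.0
    for u, a, y in rows:
        z = beta[0] + beta[1] * u + beta[2] * a + beta[3] * u * a
        p = 1.0 / (1.0 + math.exp(-z))
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(rows)

# Synthetic respondents: uncertainty lowers, attitude raises, P(WTP = yes).
rng = random.Random(7)
rows = []
for _ in range(800):
    u, a = rng.uniform(-1, 1), rng.uniform(-1, 1)
    p = 1.0 / (1.0 + math.exp(-(0.3 - 1.0 * u + 1.0 * a + 0.5 * u * a)))
    rows.append((u, a, 1 if rng.random() < p else 0))
beta = fit_logit(rows)
```

Fitting recovers the signs of the generating coefficients; testing the interaction term against zero is the analogue of the moderation tests the study reports.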

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a problem structuring methodology for assessing real option decisions in the face of unpredictability. Based on principles of robustness analysis and scenario planning, we demonstrate how decision aiding can facilitate participation in project setting and achieve effective decision making through the use of real options reasoning. We argue that robustness heuristics developed in earlier studies can serve as practical proxies for real options performance, and hence as indicators of efficient flexible planning. The framework also highlights how to integrate real options solutions into firms' strategic plans and operating actions. The use of the methodology is illustrated in a location decision application.

Relevance:

30.00%

Publisher:

Abstract:

Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context, where observations are collected and reported by a network of sensors and then transformed into interpolated maps for use in decision making. With traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult: variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in routine operation, sensors can sporadically malfunction and report extreme values. If these extreme data destabilise the model, so that the covariance structure of the observed data is incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with manual selection of the damping parameter in the robust likelihood. We show how this can be extended to the treatment of large data sets, together with an automated approach to estimating all parameters. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce computational complexity while minimising any loss of information. To demonstrate the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results with those obtained from traditional kriging methodologies, including comparisons with Box-Cox transformations of the data.
We discuss whether to treat or ignore extreme values, distinguishing between robust methods, which ignore outliers, and transformation methods, which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy them.
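The robustness idea behind the Huber function can be shown in miniature: residuals beyond a cutoff are down-weighted, so a single malfunctioning sensor barely moves the estimate. The sketch below applies this to a simple location estimate via iteratively reweighted averaging; it is an illustration of the Huber weighting only, not the paper's REML covariance estimator.

```python
def huber_mean(data, c=1.345, iters=50):
    """Robust location estimate using Huber weights.
    Residuals within c robust standard deviations (MAD-based scale)
    get full weight; larger residuals are down-weighted in proportion
    to how far they exceed the cutoff."""
    data = sorted(data)
    n = len(data)
    med = data[n // 2] if n % 2 else 0.5 * (data[n // 2 - 1] + data[n // 2])
    devs = sorted(abs(x - med) for x in data)
    mad = devs[n // 2] if n % 2 else 0.5 * (devs[n // 2 - 1] + devs[n // 2])
    scale = 1.4826 * mad                     # consistency factor for Gaussians
    mu = med                                 # robust starting point
    for _ in range(iters):
        w = [min(1.0, c * scale / abs(x - mu)) if x != mu else 1.0
             for x in data]                  # Huber psi-derived weights
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu
```

On readings of about 10 with one spurious report of 500, the ordinary mean is pulled above 90 while the Huber estimate stays near 10, which is exactly why robust likelihoods keep emergency maps usable.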

Relevance:

30.00%

Publisher:

Abstract:

Climate change in the Arctic is predicted to increase plant productivity through decomposition-related enhanced nutrient availability. However, the extent of the increase will depend on whether the increased nutrient availability can be sustained. To address this uncertainty, I assessed the response of plant tissue nutrients, litter decomposition rates, and soil nutrient availability to experimental climate warming manipulations (an extended growing season and soil warming) over a 7-year period. Overall, the most consistent effect was the year-to-year variability in the measured parameters, probably a result of large differences in weather and time of snowmelt. The results of this study emphasize that although plants of arctic environments are specifically adapted to low nutrient availability, they also possess a suite of traits that help to reduce nutrient losses, such as slow growth, low tissue nutrient concentrations, and low tissue turnover, which result in subtle responses to environmental change.

Relevance:

30.00%

Publisher:

Abstract:

A scenario-based two-stage stochastic programming model for gas production network planning under uncertainty is usually a large-scale nonconvex mixed-integer nonlinear programme (MINLP), which can be efficiently solved to global optimality with nonconvex generalized Benders decomposition (NGBD). This paper is concerned with parallelizing NGBD to exploit multiple available computing resources. Three parallelization strategies are proposed: naive scenario parallelization, adaptive scenario parallelization, and adaptive scenario and bounding parallelization. A case study of two industrial natural gas production network planning problems shows that, while NGBD without parallelization is already faster than a state-of-the-art global optimization solver by an order of magnitude, parallelization can improve its efficiency severalfold on computers with multicore processors. The adaptive scenario and bounding parallelization achieves the best overall performance of the three proposed strategies.
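Naive scenario parallelization, the simplest of the three strategies, just dispatches all scenario subproblems to a worker pool at once and collects the results. A sketch of that dispatch pattern, with a toy closed-form "subproblem" standing in for the actual MINLP scenario solves of NGBD:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_scenario(d):
    """Toy stand-in for one scenario subproblem: the closed-form
    minimiser of (x - d)**2 + x, i.e. x = d - 0.5, together with its
    optimal objective value. A real NGBD subproblem would be a solver
    call on the scenario's nonconvex program."""
    x = d - 0.5
    return x, (x - d) ** 2 + x

def naive_scenario_parallelization(demands, workers=4):
    """Dispatch every scenario subproblem at once and gather the
    per-scenario optima, which would feed the bounding step of a
    decomposition scheme such as NGBD."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(solve_scenario, demands))
```

Because the scenario solves are independent, the parallel results match a sequential loop exactly; the adaptive strategies in the paper differ in how work is scheduled, not in what is computed.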

Relevance:

30.00%

Publisher:

Abstract:

This paper is concerned with the strategic optimization of a typical industrial chemical supply chain, which involves a material purchase and transportation network, several manufacturing plants with on-site material and product inventories, a product transportation network, and several regional markets. To address large uncertainties in customer demands at the different regional markets, a novel robust scenario formulation, recently developed by the authors, is tailored and applied to the strategic optimization. Case study results show that the robust scenario formulation works well for this real industrial supply chain system: it outperforms both the deterministic formulation and the classical scenario-based stochastic programming formulation by generating better expected economic performance and solutions that are guaranteed to be feasible for all uncertainty realizations. The robust scenario problem exhibits a decomposable structure that Benders decomposition can exploit for efficient solution, so the application of Benders decomposition to the strategic optimization is also discussed. The case study results show that Benders decomposition can reduce the solution time by almost an order of magnitude when the number of scenarios is large.
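The decomposable structure Benders exploits can be illustrated on a toy two-stage problem (a hypothetical capacity-versus-shortfall model, not the paper's supply chain formulation): the master proposes a first-stage decision, each scenario subproblem returns its recourse cost and a subgradient, and every iteration adds an optimality cut until the lower and upper bounds meet.

```python
def benders_toy(demands, probs, unit_cost=1.0, penalty=2.0, tol=1e-6):
    """Benders loop on: min_x unit_cost*x + sum_s p_s * Q_s(x), x in [0, 10],
    with recourse Q_s(x) = penalty * max(0, d_s - x) (unmet demand cost).
    Each iteration adds the optimality cut theta >= a - b*x built from the
    scenario subgradients lam_s = penalty if d_s > x_hat else 0."""
    cuts = [(0.0, 0.0)]                      # start with theta >= 0
    grid = [i / 100 for i in range(1001)]    # crude 1-D master "solver"
    ub, best_x = float("inf"), None
    for _ in range(50):
        # Master problem: minimise first-stage cost plus the cut envelope.
        lb, x_hat = min((unit_cost * x + max(a - b * x for a, b in cuts), x)
                        for x in grid)
        # Scenario subproblems: evaluate recourse at the proposal.
        recourse = sum(p * penalty * max(0.0, d - x_hat)
                       for d, p in zip(demands, probs))
        if unit_cost * x_hat + recourse < ub:
            ub, best_x = unit_cost * x_hat + recourse, x_hat
        if ub - lb <= tol:
            break
        # Aggregate the scenario subgradients into one optimality cut.
        a = sum(p * penalty * d for d, p in zip(demands, probs) if d > x_hat)
        b = sum(p * penalty for d, p in zip(demands, probs) if d > x_hat)
        cuts.append((a, b))
    return best_x, ub
```

With demands {3, 6, 9} at equal probability, the loop closes the gap in a handful of cuts at capacity 6 and expected cost 8; in the paper's setting each "evaluate recourse" step is itself an optimization per scenario, which is what makes the structure decomposable.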

Relevance:

30.00%

Publisher:

Abstract:

We present a general multistage stochastic mixed 0-1 problem in which uncertainty appears everywhere: in the objective function, the constraint matrix, and the right-hand side. The uncertainty is represented by a scenario tree, which can be symmetric or nonsymmetric. The stochastic model is converted into a mixed 0-1 Deterministic Equivalent Model in compact representation. Owing to the difficulty of the problem, the solution offered by the stochastic model has traditionally been obtained by optimizing the expected value (i.e., the mean) of the objective function over the scenarios, usually along a time horizon. This so-called risk-neutral approach has the drawback of providing a solution that ignores the variance of the objective value across scenarios and, hence, the occurrence of scenarios with an objective value below the expected one. Alternatively, we present several approaches for risk-averse management: a scenario immunization strategy; optimization of the well-known Value-at-Risk (VaR) and several variants of the Conditional Value-at-Risk strategy; optimization of the expected mean minus the weighted probability that a "bad" scenario occurs for the solution provided by the model; optimization of the objective function's expected value subject to stochastic dominance constraints (SDC) for a set of profiles given by pairs of threshold objective values together with either bounds on the probability of not reaching the thresholds or the expected shortfall over them; and optimization of a mixture of the VaR and SDC strategies.
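For scenario-based models, VaR and CVaR can be read directly off the sorted scenario objective values. A minimal sketch, using one common discrete definition among the several in use (VaR as the alpha-quantile scenario loss, CVaR as the average loss in the worst tail from that scenario on):

```python
import math

def var_cvar(losses, alpha=0.95):
    """Scenario-based risk measures: VaR is the alpha-quantile of the
    loss distribution, CVaR the expected loss in the worst (1 - alpha)
    tail (here: the mean of all scenario losses at or beyond VaR)."""
    s = sorted(losses)
    k = math.ceil(alpha * len(s)) - 1        # index of the alpha-quantile
    var = s[k]
    tail = s[k:]
    return var, sum(tail) / len(tail)
```

CVaR is always at least VaR, which is why the CVaR variants in the abstract penalise not just whether a bad scenario occurs but how bad the tail is on average.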

Relevance:

30.00%

Publisher:

Abstract:

The reinforcer devaluation paradigm has been regarded as a canonical paradigm for detecting habit-like behavior in animal and human instrumental learning. Though less studied, avoidance situations set a scenario where habit-like behavior may be of great experimental and clinical interest. On the other hand, proactive intolerance of uncertainty has been shown to facilitate responses in uncertain situations. Thus, avoidance situations in which uncertainty is favoured may serve as a relevant paradigm for examining the role of intolerance of uncertainty as a facilitatory factor for habit-like behavior. In our experiment we used a free-operant discriminative avoidance procedure to implement a devaluation paradigm. Participants learned to avoid an aversive noise, presented either to the right or to the left ear, by pressing two different keys. After a devaluation phase in which the volume of one of the noises was reduced, they went through a test phase identical to the avoidance phase except that the noise was never administered. Sensitivity to reinforcer devaluation was examined by comparing the response rate to the cue associated with the devalued reinforcer against that to the cue associated with the still-aversive reinforcer. The results showed that intolerance of uncertainty was positively associated with insensitivity to reinforcer devaluation. Finally, we discuss the theoretical and clinical implications of the habit-like behavior obtained in our avoidance procedure.

Relevance:

30.00%

Publisher:

Abstract:

A classical approach to two- and multistage optimization problems under uncertainty is scenario analysis. To this end, the uncertainty in some of the problem data is modelled by random vectors with stage-specific finite supports; each realization represents a scenario. Using scenarios, simpler versions (subproblems) of the original problem can be studied. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite its complete scenario decomposition, the efficiency of progressive hedging is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to better track the progress of the algorithm. Numerical experiments on multistage stochastic linear problem instances suggest that most existing techniques either converge prematurely to a suboptimal solution or converge to the optimal solution, but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although we have yet to test this method, we expect it to alleviate some of the numerical and theoretical difficulties of progressive hedging.
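The progressive hedging iteration can be sketched on a toy problem in which each scenario subproblem has a closed-form solution; the penalty `rho` is held fixed here for simplicity, whereas the adaptive strategy discussed above would update it between iterations.

```python
def progressive_hedging(targets, probs, rho=1.0, iters=60):
    """Progressive hedging on a toy problem: scenario s wants to minimise
    f_s(x) = (x - a_s)**2, and nonanticipativity forces one shared x.
    The augmented-Lagrangian subproblem argmin f_s(x) + w_s*x
    + (rho/2)*(x - xbar)**2 has the closed form
    x_s = (2*a_s - w_s + rho*xbar) / (2 + rho)."""
    x = list(targets)                        # scenario-wise first guesses
    xbar = sum(p * xi for p, xi in zip(probs, x))
    w = [rho * (xi - xbar) for xi in x]      # initial multipliers
    for _ in range(iters):
        x = [(2 * a - wi + rho * xbar) / (2 + rho)
             for a, wi in zip(targets, w)]   # scenario subproblems
        xbar = sum(p * xi for p, xi in zip(probs, x))   # aggregation
        w = [wi + rho * (xi - xbar) for wi, xi in zip(w, x)]  # dual update
    return xbar, max(abs(xi - xbar) for xi in x)
```

For this quadratic toy the nonanticipative optimum is the probability-weighted mean of the scenario targets, and the scenario iterates contract onto it geometrically; the sensitivity to `rho` that the thesis studies shows up here as the contraction rate.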

Relevance:

30.00%

Publisher:

Abstract:

This case study seeks to identify the elements of Trinidad and Tobago's foreign policy portfolio that allowed it to promote its interests successfully within the Kyoto Protocol. In doing so, the text analyses Trinidad and Tobago's limitations in terms of vulnerabilities of location, bureaucracy, and resources. Subsequently, a review of this state's foreign policy portfolio illustrates its use of capacity-building and organizational strategies, such as engagement with institutional and non-governmental actors, coalition building, and argumentative strategies, among others. Finally, the article concludes that these actions enabled the promotion of Trinidad and Tobago's foreign policy agenda through the creation of road maps and the management of uncertainty in relation to the Kyoto Protocol. To this end, the paper focuses on examining concepts such as vulnerability and prioritization, contrasting various academic articles on the subject with official Trinidad and Tobago documents.

Relevance:

20.00%

Publisher:

Abstract:

Sales growth and employment growth are the two most widely used growth indicators for new ventures; yet they are not interchangeable measures of new venture growth. Rather, they are related but somewhat independent constructs that respond differently to a variety of criteria. Most of the literature treats this as a methodological technicality. However, sales growth with or without accompanying employment growth has very different implications for managers and policy makers, so a better understanding of what drives these different growth metrics can lead to better decision making. To improve that understanding, we apply transaction cost economics reasoning to predict when sales growth will or will not be accompanied by employment growth. Our results indicate that our predictions are borne out consistently in resource-constrained contexts but not in resource-munificent contexts.