17 results for stochastic dominance constraints
Abstract:
In this work we extend to the multistage case two recent risk averse measures for two-stage stochastic programs based on first- and second-order stochastic dominance constraints induced by mixed-integer linear recourse. Additionally, we consider Time Stochastic Dominance (TSD) along a given horizon. Given the dimensions of medium-sized problems augmented by the new variables and constraints required by those risk measures, it is unrealistic to solve the problem to optimality by plain use of MIP solvers in a reasonable computing time. Instead, some type of decomposition algorithm should be used. We present an extension of our Branch-and-Fix Coordination algorithm, named BFC-TSD, in which special treatment is given to the cross-scenario-group constraints that link variables from different scenario groups. Broad computational experience is reported, comparing the risk neutral approach with the tested risk averse strategies. The performance of the new version of the BFC algorithm versus the plain use of a state-of-the-art MIP solver is also reported.
Abstract:
We present a general multistage stochastic mixed 0-1 problem where the uncertainty appears everywhere: in the objective function, the constraint matrix and the right-hand side. The uncertainty is represented by a scenario tree, which may be symmetric or nonsymmetric. The stochastic model is converted into a mixed 0-1 Deterministic Equivalent Model in compact representation. Due to the difficulty of the problem, the solution offered by the stochastic model has traditionally been obtained by optimizing the expected value (i.e., the mean) of the objective function over the scenarios, usually along a time horizon. This approach (the so-called risk neutral one) has the drawback of providing a solution that ignores the variance of the objective value over the scenarios and, thus, the occurrence of scenarios with an objective value below the expected one. Alternatively, we present several approaches for risk averse management, namely: a scenario immunization strategy; the optimization of the well-known Value-at-Risk (VaR) and several variants of the Conditional Value-at-Risk strategy; the optimization of the expected value minus the weighted probability of a "bad" scenario occurring for the solution provided by the model; the optimization of the objective function expected value subject to stochastic dominance constraints (SDC) for a set of profiles given by pairs of threshold objective values and either bounds on the probability of not reaching the thresholds or bounds on the expected shortfall over them; and the optimization of a mixture of the VaR and SDC strategies.
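As a schematic illustration of the SDC profiles described above (in generic notation, not necessarily the exact model of the paper), consider a maximization problem with scenario objective values z^{\omega}, scenario weights w^{\omega}, and a profile p given by a threshold \phi_{p} with probability bound \beta_{p} and expected-shortfall bound e_{p}. First-order-type constraints can be modeled with 0-1 shortfall indicators \nu_{p}^{\omega} and a big-M constant, and second-order-type constraints with continuous shortfall variables s_{p}^{\omega}:

\[
z^{\omega} \ge \phi_{p} - M\,\nu_{p}^{\omega}, \qquad \sum_{\omega} w^{\omega}\,\nu_{p}^{\omega} \le \beta_{p}, \qquad \nu_{p}^{\omega}\in\{0,1\},
\]
\[
s_{p}^{\omega} \ge \phi_{p} - z^{\omega}, \qquad s_{p}^{\omega} \ge 0, \qquad \sum_{\omega} w^{\omega}\,s_{p}^{\omega} \le e_{p},
\]

so that, for each profile, both the probability of falling below the threshold and the expected shortfall over it are bounded.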
Abstract:
In this paper we introduce four scenario Cluster based Lagrangian Decomposition (CLD) procedures for obtaining strong lower bounds on the (optimal) solution value of two-stage stochastic mixed 0-1 problems. At each iteration of the Lagrangian based procedures, the traditional aim consists of obtaining the solution value of the corresponding Lagrangian dual by solving scenario submodels once the nonanticipativity constraints have been dualized. Instead of considering a splitting variable representation over the set of scenarios, we propose to decompose the model into a set of scenario clusters. We compare the computational performance of four Lagrange multiplier updating procedures, namely the Subgradient Method, the Volume Algorithm, the Progressive Hedging Algorithm and the Dynamic Constrained Cutting Plane scheme, for different numbers of scenario clusters and different dimensions of the original problem. Our computational experience shows that the CLD bound and its computational effort depend on the number of scenario clusters considered. In any case, our results show that the CLD procedures outperform the traditional LD scheme for single scenarios both in the quality of the bounds and in computational effort. All the procedures have been implemented in a C++ experimental code. A broad computational experience is reported on a testbed of randomly generated instances, using the MIP solvers COIN-OR and CPLEX for the auxiliary mixed 0-1 cluster submodels, the latter solver within the open source engine COIN-OR. We also give computational evidence of the model tightening effect that preprocessing techniques, cut generation and appending, and parallel computing tools have in stochastic integer optimization. Finally, we have observed that the plain use of both solvers does not provide the optimal solution of the instances included in the testbed, except for two toy instances, in affordable elapsed time. On the other hand, the proposed procedures provide strong lower bounds on (or even the same value as) the quasi-optimal solution obtained by other means for the original stochastic problem, in considerably shorter elapsed time.
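To make concrete the kind of multiplier updating compared above, here is a minimal, self-contained sketch of a subgradient scheme on a toy problem (it is not the authors' CLD/BFC code; the data, the two-cluster structure and the brute-force subproblem solver are made up purely for illustration). Two scenario clusters hold copies of three 0-1 first-stage variables, the nonanticipativity constraint x1 = x2 is dualized, and the multipliers move along the subgradient x1 - x2 with a diminishing step:

import itertools

# Hypothetical cost coefficients of three 0-1 first-stage variables, one list per cluster
cluster_costs = {
    0: [3.0, -1.0, 2.0],
    1: [1.5, 0.5, -2.5],
}

def solve_cluster(c, lam, sign):
    # Solve min over x in {0,1}^3 of (cost_c + sign*lam)'x by brute-force enumeration.
    best_x, best_val = None, float("inf")
    for x in itertools.product([0, 1], repeat=3):
        val = sum((cluster_costs[c][i] + sign * lam[i]) * x[i] for i in range(3))
        if val < best_val:
            best_x, best_val = list(x), val
    return best_x, best_val

lam = [0.0, 0.0, 0.0]                      # multipliers on the dualized constraint x1 - x2 = 0
lower_bound = float("-inf")
for k in range(1, 51):
    x1, v1 = solve_cluster(0, lam, +1)     # cluster 0 sees +lam in its objective
    x2, v2 = solve_cluster(1, lam, -1)     # cluster 1 sees -lam in its objective
    lower_bound = v1 + v2                  # value of the Lagrangian dual at the current lam
    subgrad = [x1[i] - x2[i] for i in range(3)]
    if all(g == 0 for g in subgrad):
        break                              # nonanticipativity already holds: stop
    lam = [lam[i] + (1.0 / k) * subgrad[i] for i in range(3)]   # diminishing step size

print("lower bound:", lower_bound, "multipliers:", lam)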
Abstract:
We present a scheme to generate cluster submodels with stage ordering from a (symmetric or nonsymmetric) multistage stochastic mixed integer optimization model using a break stage. We consider a stochastic model in compact representation and MPS format with a known scenario tree. The cluster submodels are built by storing first the 0-1 variables, stage by stage, and then the continuous ones, also stage by stage. A C++ experimental code has been implemented for reordering the stochastic model as well as for the cluster decomposition after the relaxation of the non-anticipativity constraints up to the so-called break stage. The computational experience shows better performance of the stage ordering, in terms of elapsed time, on a randomly generated testbed of multistage stochastic mixed integer problems.
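The variable-ordering rule described above can be sketched in a few lines (a hypothetical illustration, not the authors' C++ implementation; the variable records and names below are made up): 0-1 variables come first, stage by stage, followed by the continuous variables, also stage by stage.

# Hypothetical variable records: (name, stage, is_binary)
variables = [
    ("x_2_1", 2, True), ("y_1_1", 1, False), ("x_1_1", 1, True), ("y_2_1", 2, False),
]
# Binaries first (stage by stage), then continuous variables (stage by stage)
ordered = sorted(variables, key=lambda v: (not v[2], v[1]))
print([v[0] for v in ordered])   # ['x_1_1', 'x_2_1', 'y_1_1', 'y_2_1']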
Abstract:
We extend the classic Merton (1969, 1971) problem, which investigates the joint consumption-savings and portfolio-selection problem under capital risk, by assuming sophisticated but time-inconsistent agents. We introduce stochastic hyperbolic preferences as in Harris and Laibson (2013) and find closed-form solutions for Merton's optimal consumption and portfolio selection problem in continuous time. We find that the portfolio rule remains identical to the time-consistent solution with power utility and no borrowing constraints. However, the marginal propensity to consume out of wealth is unambiguously greater than in the time-consistent, exponential case and, importantly, it is also more responsive to changes in risk. These results suggest that hyperbolic discounting with sophisticated agents offers promise for explaining important aspects of asset market data.
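For reference, the time-consistent Merton portfolio rule that the abstract reports as preserved can be written, in standard textbook notation (this formula is not taken from the paper itself), as

\[
\pi^{*} \;=\; \frac{\mu - r}{\gamma\,\sigma^{2}},
\]

where \pi^{*} is the share of wealth invested in the risky asset, \mu its expected return, \sigma its volatility, r the risk-free rate and \gamma the coefficient of relative risk aversion; under the stochastic hyperbolic preferences considered in the paper this rule is unchanged, while the marginal propensity to consume out of wealth is larger.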
Abstract:
Plant community ecologists use the null model approach to infer assembly processes from observed patterns of species co-occurrence. In about a third of published studies, the null hypothesis of random assembly cannot be rejected. When this occurs, plant ecologists interpret the observed random pattern as not environmentally constrained, but probably generated by stochastic processes. The null model approach (using the C-score and the discrepancy index) was used to test for random assembly under two simulation algorithms. Logistic regression, distance-based redundancy analysis, and constrained ordination were used to test for environmental determinism (species segregation along environmental gradients, or turnover, and species aggregation). This article introduces an environmentally determined community of alpine hydrophytes that presents itself as randomly assembled. The pathway through which the random pattern arises in this community is suggested to be as follows: two simultaneous environmental processes, one leading to species aggregation and the other leading to species segregation, concurrently generate the observed pattern, which turns out to be neither aggregated nor segregated, but random. A simulation study supports this suggestion. Although apparently simple, the null model approach seems to assume that a single ecological factor prevails or that, if several factors decisively influence the community, they all exert their influence in the same direction, generating either aggregation or segregation. As these assumptions are unlikely to hold in most cases and assembly processes cannot be inferred from random patterns, we propose that plant ecologists specifically investigate the ecological processes responsible for observed random patterns, instead of trying to infer processes from patterns.
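As a pointer to what the C-score test above actually computes, here is a minimal sketch of the checkerboard index of Stone and Roberts (1990) on a made-up presence/absence matrix (rows are species, columns are sites); in a null model analysis this observed value would be compared against the distribution obtained from randomized matrices:

import itertools
import numpy as np

# Made-up presence/absence matrix: rows are species, columns are sites.
presence = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
])

def c_score(m):
    # Average number of checkerboard units over all species pairs.
    units = []
    for i, j in itertools.combinations(range(m.shape[0]), 2):
        shared = int(np.sum(m[i] & m[j]))           # sites occupied by both species
        ri, rj = int(m[i].sum()), int(m[j].sum())   # row totals
        units.append((ri - shared) * (rj - shared))
    return sum(units) / len(units)

print(c_score(presence))   # observed C-score for the toy matrix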
Abstract:
The aim of this paper is to explain under which circumstances using TACs as an instrument to manage a fishery, along with fishing periods, may be interesting from a regulatory point of view. To do this, the deterministic analysis of Homans and Wilen (1997) and Anderson (2000) is extended to a stochastic scenario where the resource cannot be measured accurately. The resulting endogenous stochastic model is solved numerically to find the optimal control rules for the Iberian sardine stock. Three relevant conclusions can be highlighted from the simulations. First, the higher the uncertainty about the state of the stock, the lower the probability of closing the fishery. Second, the use of TACs as a management instrument in fisheries already regulated with fishing periods leads to: i) an increase in the optimal season length and harvests, especially for medium and high numbers of licences; ii) an improvement of the biological and economic variables when the size of the fleet is large; and iii) the elimination of the extinction risk for the resource. And third, the regulator would rather select the number of licences and not restrict the season length.
Abstract:
The purpose of this article is to characterize dynamic optimal harvesting trajectories that maximize discounted utility assuming an age-structured population model, along the same lines as Tahvonen (2009). The main novelty of our study is that it uses, as the age-structured population model, the standard stochastic cohort framework applied in Virtual Population Analysis for fish stock assessment. This allows us to compare optimal harvesting in a discounted economic context with the standard reference points used by fisheries agencies for long term management plans (e.g. Fmsy). Our main findings are the following. First, the optimal steady state is characterized, and sufficient conditions that guarantee its existence and uniqueness for the general case of n cohorts are shown. It is also proved that the optimal steady state coincides with the traditional target Fmsy when the utility function to be maximized is the yield and the discount rate is zero. Second, an algorithm to calculate the optimal path that easily drives the resource to the steady state is developed. And third, the algorithm is applied to the Northern Stock of hake. The results show that management plans based exclusively on traditional reference targets such as Fmsy may drive the fishery's economic results far from the optimum.
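For context, the cohort framework underlying VPA-type assessments is usually written, in generic notation (the exact specification used in the paper may differ), as

\[
N_{a+1,t+1} \;=\; N_{a,t}\,e^{-(F_{a,t}+M_{a})}, \qquad
C_{a,t} \;=\; \frac{F_{a,t}}{F_{a,t}+M_{a}}\,N_{a,t}\left(1-e^{-(F_{a,t}+M_{a})}\right),
\]

where N_{a,t} is the abundance of age-a fish in year t, F_{a,t} the fishing mortality, M_{a} the natural mortality and C_{a,t} the catch (the Baranov catch equation).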
Abstract:
This paper studies the behavior of the implied volatility function (the smile) when the true distribution of the underlying asset is consistent with the stochastic volatility model proposed by Heston (1993). The main result of the paper is to extend previous results, applicable to the smile as a whole, to alternative degrees of moneyness. We give the conditions under which the implied volatility function changes whenever there is a change in the parameters of Heston's stochastic volatility model for a given degree of moneyness.
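For reference, the Heston (1993) model referred to above is usually stated, in standard notation, as

\[
dS_{t} = \mu S_{t}\,dt + \sqrt{v_{t}}\,S_{t}\,dW_{t}^{S}, \qquad
dv_{t} = \kappa(\theta - v_{t})\,dt + \sigma\sqrt{v_{t}}\,dW_{t}^{v}, \qquad
d\langle W^{S}, W^{v}\rangle_{t} = \rho\,dt,
\]

with mean-reversion speed \kappa, long-run variance \theta, volatility of variance \sigma and correlation \rho; these are the parameters whose changes the paper relates to movements of the smile at a given degree of moneyness.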
Abstract:
In this article, we analyze how to evaluate fishery resource management under “ecological uncertainty”. In this context, an efficient policy consists of applying a different exploitation rule depending on the state of the resource, and we could say that the stock is always in transition, jumping from one steady state to another. First, we propose a method for calibrating the growth path of the resource such that the observed dynamics of the resource and the catches are matched. Second, we apply the proposed calibration procedure to two different fishing grounds: the European Anchovy (Division VIII) and the Southern Stock of Hake. Our results show that the role played by uncertainty is essential for the conclusions. For the European Anchovy fishery (Division VIII) we find, in contrast to Del Valle et al. (2001), that this is not an overexploited fishing ground. However, we show that the Southern Stock of Hake is in a dangerous situation. In both cases our results are in accordance with ICES advice.
Abstract:
This paper analyzes auctions where bidders face financial constraints that may force them to resell part of the property of the good (or subcontract part of a project) in a resale market. First we show that the inefficient speculative equilibria of second-price auctions (Garratt and Tröger, 2006) generalize to situations with partial resale where only the high-value bidder is financially constrained. However, when all players face financial constraints, the inefficient speculative equilibria disappear. Therefore, for auctioning big facilities or contracts where all bidders are financially constrained and there is a resale market, the second-price auction remains a simple and appropriate mechanism to achieve an efficient allocation.
Abstract:
This paper deals with the economics of gasification facilities in general and IGCC power plants in particular. Regarding the prospects of these systems, passing the technological test is one thing; passing the economic test can be quite another. In this respect, traditional valuations assume constant input and/or output prices. Since this is hardly realistic, we allow for uncertainty in prices. We naturally look at the markets where many of the products involved are regularly traded. Futures markets on commodities are particularly useful for valuing uncertain future cash flows. Thus, revenues and variable costs can be assessed by means of sound financial concepts and actual market data. On the other hand, these complex systems provide a number of flexibility options (e.g., to choose among several inputs, outputs, modes of operation, etc.). Typically, flexibility contributes significantly to the overall value of real assets. Indeed, maximization of the asset value requires the optimal exercise of any flexibility option available. Yet the economic value of flexibility is elusive, all the more so under (price) uncertainty. Moreover, the right choice of input fuels and/or output products is a main concern for the facility managers. As a particular application, we deal with the valuation of input flexibility. We follow the Real Options approach. In addition to economic variables, we also address technical and environmental issues such as energy efficiency, utility performance characteristics and emissions (note that carbon constraints are looming). Lastly, a brief introduction to some stochastic processes suitable for valuation purposes is provided.
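Two standard examples of stochastic processes used in commodity-price valuation (offered here only as generic illustrations; the processes actually discussed in the paper may differ) are geometric Brownian motion and a mean-reverting process:

\[
dS_{t} = \alpha S_{t}\,dt + \sigma S_{t}\,dW_{t}
\qquad\text{and}\qquad
dS_{t} = \kappa(\bar{S} - S_{t})\,dt + \sigma\,dW_{t},
\]

where \alpha is the drift, \sigma the volatility, \kappa the speed of mean reversion and \bar{S} the long-run level toward which the price reverts.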