883 results for non-additive utility optimization
Abstract:
In this paper I investigate the conditions under which a convex capacity (a non-additive probability exhibiting uncertainty aversion) can be represented as a squeeze of an additive probability measure associated with an uncertainty aversion function. I then present two alternative formulations of the Choquet integral (and extend these formulations to Choquet expected utility) in a parametric approach that enables straightforward comparative-statics exercises over the uncertainty aversion function.
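As a concrete illustration of the objects this abstract discusses (a minimal sketch, not the paper's own formulation), the discrete Choquet integral with respect to a capacity can be computed by sorting outcomes in decreasing order and weighting each by the capacity increment of its upper-level set; the capacity `nu` below is a hypothetical convex squeeze g(P(A)) with g(t) = t^2 of the uniform measure:

```python
def choquet(values, capacity):
    """Discrete Choquet integral: sort outcomes in decreasing order and
    weight each level by the capacity increment of the upper-level set."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    total, prev = 0.0, 0.0
    upper = set()
    for i in order:
        upper.add(i)
        nu_val = capacity(frozenset(upper))
        total += values[i] * (nu_val - prev)
        prev = nu_val
    return total

# A convex "squeeze" of the uniform measure on 3 states: nu(A) = g(P(A)),
# with distortion g(t) = t**2 (illustrative choice).
P = lambda A: len(A) / 3.0
nu = lambda A: P(A) ** 2

x = [3.0, 1.0, 2.0]
print(choquet(x, nu))   # pessimistic (uncertainty-averse) evaluation
print(choquet(x, P))    # additive case reduces to the plain expectation
```

For the additive measure `P` the integral is just the mean (2.0); the convex squeeze shifts weight toward worse outcomes, giving a strictly lower value.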
Abstract:
"Risk Measures in Financial Mathematics" Value-at-Risk (VaR) is a risk measure whose use is required by banking supervision. The advantage of VaR, as a quantile of the profit-or-loss distribution, lies above all in its easy interpretability. Its drawback is that the left tail of the probability distribution is ignored. Moreover, computing VaR is difficult because quantiles are not additive. The greatest shortcoming of VaR is its lack of subadditivity. For this reason, alternatives such as Expected Shortfall are investigated. This thesis first introduces financial risk measures and records some of their basic properties. We consider various parametric and non-parametric methods for determining VaR, including their advantages and disadvantages. We further study parametric and non-parametric estimators of VaR in discrete time. We present portfolio optimization problems in the Black-Scholes model with bounded VaR and with bounded variance, and explain the advantage of the first approach over the second. We solve utility optimization problems for terminal wealth with bounded VaR and with bounded variance. VaR says nothing about the losses beyond it, whereas Expected Shortfall takes them into account. We therefore use Expected Shortfall in place of the risk measure VaR considered by Emmer, Korn and Klüppelberg (2001) for portfolio optimization in the Black-Scholes model.
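The VaR/Expected Shortfall distinction can be sketched with empirical estimators on a heavy-tailed loss sample (a minimal illustration, not the estimators studied in the thesis):

```python
import numpy as np

def var_es(losses, alpha=0.95):
    """Empirical Value-at-Risk and Expected Shortfall of a loss sample.
    VaR is the alpha-quantile of the losses; ES averages the losses at or
    beyond VaR, so it is sensitive to the tail that VaR ignores."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)
    tail = losses[losses >= var]
    return var, tail.mean()

rng = np.random.default_rng(0)
losses = rng.standard_t(df=3, size=100_000)   # heavy-tailed loss sample
var, es = var_es(losses, 0.99)
print(f"VaR_99 = {var:.3f}, ES_99 = {es:.3f}")
```

ES always exceeds VaR on a continuous sample, which is exactly the tail information VaR discards.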
Abstract:
We address a portfolio optimization problem in a semi-Markov modulated market. We study both terminal expected utility optimization on a finite time horizon and risk-sensitive portfolio optimization on finite and infinite time horizons. We obtain optimal portfolios in the relevant cases. A numerical procedure is also developed to compute the optimal expected terminal utility for the finite horizon problem.
Abstract:
We consider a joint power control and transmission scheduling problem in wireless networks with average power constraints. While the capacity region of a wireless network is convex, a characterization of this region is a hard problem. We formulate a network utility optimization problem involving time-sharing across different "transmission modes," where each mode corresponds to the set of power levels used in the network. The structure of the optimal solution is a time-sharing across a small set of such modes. We use this structure to develop an efficient heuristic approach to finding a suboptimal solution through column generation iterations. This heuristic approach converges quite fast in simulations, and provides a tool for wireless network planning.
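The time-sharing structure described above can be written as a restricted master problem (the notation below is assumed for illustration, not taken from the paper): with transmission modes $m$, power vectors $P_m$, achievable rate vectors $r(P_m)$, network utility $U$, and average power budget $\bar{P}$,

```latex
\max_{\alpha \ge 0} \; U\!\Big(\sum_{m} \alpha_m\, r(P_m)\Big)
\quad \text{s.t.} \quad
\sum_{m} \alpha_m = 1, \qquad
\sum_{m} \alpha_m P_m \le \bar{P}.
```

Column generation then prices out one new mode (column) $r(P_m)$ at a time against the duals of the two constraints, consistent with the small set of active modes in the optimal solution.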
Abstract:
Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.
When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
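For reference, the classical Hoeffding bound that such relaxations are measured against can be evaluated directly and checked against a Monte Carlo tail frequency (a self-contained illustration, not the thesis's experiments):

```python
import math
import random

def hoeffding_bound(t, ranges):
    """Hoeffding's inequality for bounded independent summands:
    P(S - E[S] >= t) <= exp(-2 t^2 / sum_i (b_i - a_i)^2)."""
    return math.exp(-2.0 * t * t / sum((b - a) ** 2 for a, b in ranges))

n, t = 50, 5.0
ranges = [(0.0, 1.0)] * n              # i.i.d. Uniform(0, 1) terms, mean n/2
random.seed(1)
trials = 50_000
hits = sum(sum(random.random() for _ in range(n)) - n / 2 >= t
           for _ in range(trials))
empirical = hits / trials
bound = hoeffding_bound(t, ranges)     # here exp(-1) ~ 0.368
print(empirical, bound)
```

The empirical tail probability sits far below the bound, illustrating the slack that tighter, information-aware OUQ bounds can recover.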
Abstract:
We discuss solvability issues of H-/H2/∞ optimal fault detection problems in the most general setting. A solution approach is presented which successively reduces the initial problem to simpler ones. The last computational step may, in general, involve the solution of a non-standard H-/H2/∞ optimization problem, for which we discuss possible solution approaches. Using an appropriate definition of the H- index, we provide a complete solution of this problem in the case of the H2-norm. Furthermore, we discuss the solvability issues in the case of the H∞-norm. © 2011 IEEE.
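For context, a common definition of the index referred to here is the worst-case fault sensitivity of the residual generator over a frequency range $\Omega$ (notation assumed for illustration, not taken from the paper):

```latex
\|G_{rf}\|_{-} \;=\; \inf_{\omega \in \Omega} \underline{\sigma}\big(G_{rf}(\mathrm{j}\omega)\big),
\qquad
\sup_{Q} \; \frac{\|G_{rf}\|_{-}}{\|G_{rd}\|_{2/\infty}},
```

where $G_{rf}$ and $G_{rd}$ denote the fault-to-residual and disturbance-to-residual transfer matrices, $\underline{\sigma}$ the smallest singular value, and $Q$ parameterizes the detector.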
Abstract:
This article introduces a resource allocation solution capable of handling mixed media applications within the constraints of a 60 GHz wireless network. The challenges of multimedia wireless transmission include high bandwidth requirements, delay intolerance and wireless channel availability. A new Channel Time Allocation Particle Swarm Optimization (CTA-PSO) is proposed to solve the network utility maximization (NUM) resource allocation problem. CTA-PSO optimizes the time allocated to each device in the network in order to maximize the Quality of Service (QoS) experienced by each user. CTA-PSO introduces network-linked swarm size, an increased diversity function and a learning method based on the personal best, Pbest, results of the swarm. These additional developments to the PSO produce improved convergence speed with respect to Adaptive PSO while maintaining the QoS improvement of the NUM. Specifically, CTA-PSO supports applications described by both convex and non-convex utility functions. The multimedia resource allocation solution presented in this article provides a practical solution for real-time wireless networks.
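A generic PSO of the kind CTA-PSO builds on can be sketched on a toy channel-time allocation problem (a minimal textbook variant; the CTA-PSO extensions, constants, and the 60 GHz channel model are not reproduced here, and the utility and penalty below are illustrative inventions):

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, lo=0.0, hi=1.0, seed=0):
    """Minimal particle swarm maximizer: velocity update with inertia plus
    pulls toward each particle's personal best and the global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val > pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmax()].copy()
    return g, pbest_val.max()

# Toy NUM: split one unit of channel time among 3 users, log utilities.
T = 1.0
def utility(t):
    if t.sum() > T:                    # penalize infeasible allocations
        return -1e3 * (t.sum() - T)
    return np.log1p(t).sum()

alloc, best = pso(utility, dim=3)
print(alloc.round(3), best)   # near-equal split is optimal by symmetry
```

By symmetry and concavity the optimum is an equal split, with utility 3·ln(4/3) ≈ 0.863; the swarm should land close to it.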
Abstract:
Despite considerable advances in reducing the production of dioxin-like toxicants in recent years, contamination of the food chain still occasionally occurs, resulting in huge losses to the agri-food sector and risk to human health through exposure. Dioxin-like toxicity is exhibited by a range of stable and bioaccumulative compounds including polychlorinated dibenzo-p-dioxins (PCDDs) and dibenzofurans (PCDFs), produced by certain types of combustion, and man-made coplanar polychlorinated biphenyls (PCBs), as found in electrical transformer oils. While dioxinergic compounds act by a common mode of action, making biomarker-based exposure-detection techniques a potentially useful tool, the influence of co-contaminating toxicants on such approaches needs to be considered. To assess the impact of possible interactions, the biological responses of H4IIE cells to challenge by 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in combination with PCB-52 and benzo-a-pyrene (BaP) were evaluated by a number of methods in this study. Ethoxyresorufin-O-deethylase (EROD) induction in TCDD-exposed cells was suppressed by increasing concentrations of PCB-52, PCB-153, or BaP up to 10 μM. BaP levels below 1 μM suppressed TCDD-stimulated EROD induction, but at higher concentrations, EROD induction was greater than the maximum observed when cells were treated with TCDD alone. A similar biphasic interaction of BaP with TCDD co-exposure was noted in the AlamarBlue assay and, to a lesser extent, with PCB-52. Surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF) profiling of the peptidomic responses of cells exposed to compound combinations was compared. Cells co-exposed to TCDD in the presence of BaP or PCB-52 produced the most differentiated spectra, with a substantial number of non-additive interactions observed.
These findings suggest that interactions between dioxin and other toxicants create novel, additive, and non-additive effects, which may be more indicative of the types of responses seen in exposed animals than those of single exposures to the individual compounds.
Abstract:
The integration of wind power in electricity generation brings new challenges to unit commitment due to the random nature of wind speed. For this particular optimisation problem, wind uncertainty has been handled in practice by means of conservative stochastic scenario-based optimisation models, or through additional operating reserve settings. However, generation companies may have different attitudes towards operating costs, load curtailment, or waste of wind energy when considering the risk caused by wind power variability. Therefore, alternative and possibly more adequate approaches should be explored. This work is divided in two main parts. First, we survey the main formulations presented in the literature for the integration of wind power in the unit commitment problem (UCP) and present an alternative model for wind-thermal unit commitment. We make use of utility theory concepts to develop a multi-criteria stochastic model. The objectives considered are the minimisation of costs, load curtailment and waste of wind energy. These are represented by individual utility functions and aggregated into a single additive utility function. This last function is suitably linearised, leading to a mixed-integer linear programming (MILP) model that can be tackled by general-purpose solvers in order to find the most preferred solution. In the second part we discuss the integration of pumped-storage hydro (PSH) units in the UCP with large wind penetration. These units can provide extra flexibility by using wind energy to pump and store water in the form of potential energy that can be used for generation later, during peak-load periods. PSH units are added to the first model, yielding a MILP model with wind-hydro-thermal coordination. Results showed that the proposed methodology is able to reflect the risk profiles of decision makers for both models. By including PSH units, the results are significantly improved.
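The additive aggregation step can be illustrated as follows (all weights, scale ranges, and criterion values are hypothetical, for illustration only, and the single-criterion utilities are taken as linear rather than the piecewise-linearised functions of the actual model):

```python
def criterion_utility(x, worst, best):
    """Linear single-criterion utility scaled to [0, 1] (1 = best outcome)."""
    return max(0.0, min(1.0, (worst - x) / (worst - best)))

# Hypothetical trade-off weights (sum to 1) and scenario outcomes.
weights = {"cost": 0.5, "curtailment": 0.3, "wind_waste": 0.2}
scenario = {"cost": 1.2e6, "curtailment": 30.0, "wind_waste": 80.0}
scales = {"cost": (2.0e6, 0.5e6),        # (worst, best) per criterion
          "curtailment": (100.0, 0.0),
          "wind_waste": (200.0, 0.0)}

# Single additive utility: weighted sum of the individual utilities.
U = sum(weights[c] * criterion_utility(scenario[c], *scales[c])
        for c in weights)
print(round(U, 4))
```

The solver then maximizes this aggregate over commitment decisions; different weight choices encode different risk attitudes of the decision maker.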
Abstract:
A version of the Canadian Middle Atmosphere Model (CMAM) that is nudged toward reanalysis data up to 1 hPa is used to examine the impacts of parameterized orographic and non-orographic gravity wave drag (OGWD and NGWD) on the zonal-mean circulation of the mesosphere during the extended northern winters of 2006 and 2009 when there were two large stratospheric sudden warmings. The simulations are compared to Aura Microwave Limb Sounder (MLS) observations of mesospheric temperature, carbon monoxide (CO) and derived zonal winds. The control simulation, which uses both OGWD and NGWD, is shown to be in good agreement with MLS. The impacts of OGWD and NGWD are assessed using simulations in which those sources of wave drag are removed. In the absence of OGWD the mesospheric zonal winds in the months preceding the warmings are too strong, causing increased mesospheric NGWD, which drives excessive downwelling, resulting in overly large lower mesospheric values of CO prior to the warming. NGWD is found to be most important following the warmings when the underlying westerlies are too weak to allow much vertical propagation of the orographic gravity waves to the mesosphere. NGWD is primarily responsible for driving the circulation that results in the descent of CO from the thermosphere following the warmings. Zonal mean mesospheric winds and temperatures in all simulations are shown to be strongly constrained by (i.e. slaved to) the stratosphere. Finally, it is demonstrated that the responses to OGWD and NGWD are non-additive due to their dependence and influence on the background winds and temperatures.
Abstract:
In this paper we apply the theory of decision making with expected utility and non-additive priors to the choice of an optimal portfolio. This theory describes the behavior of a rational agent who is averse to pure 'uncertainty' (as well as, possibly, to 'risk'). We study the agent's optimal allocation of wealth between a safe and an uncertain asset. We show that there is a range of prices at which the agent neither buys nor sells short the uncertain asset. In contrast, the standard theory of expected utility predicts that there is exactly one such price. We also provide a definition of an increase in uncertainty aversion and show that it causes the range of prices to increase.
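The no-trade price range can be reproduced in a two-state toy example (the capacity and payoffs below are invented for illustration; the paper's model is more general): with a convex capacity, the Choquet value of a long position falls below the price at which a short position breaks even, so an interval of prices supports holding neither position.

```python
def choquet_value(payoffs, capacity):
    """Choquet expectation over a finite state space (decreasing rearrangement)."""
    states = sorted(payoffs, key=payoffs.get, reverse=True)
    total, prev, upper = 0.0, 0.0, frozenset()
    for s in states:
        upper = upper | {s}
        nu_val = capacity(upper)
        total += payoffs[s] * (nu_val - prev)
        prev = nu_val
    return total

# Illustrative convex capacity on two states: nu(s1) + nu(s2) = 0.8 < 1.
nu = lambda A: 1.0 if A == {"s1", "s2"} else 0.4 * len(A)

asset = {"s1": 10.0, "s2": 2.0}
buy_price = choquet_value(asset, nu)                          # max price to buy
sell_price = -choquet_value({s: -x for s, x in asset.items()}, nu)
print(buy_price, sell_price)   # no trade at any price strictly between them
```

With an additive prior (0.5, 0.5) both values collapse to the single price 6.0, matching the expected-utility prediction of exactly one no-trade price.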
Abstract:
The most widely used updating rule for non-additive probabilities is the Dempster-Shafer rule. Schmeidler and Gilboa have developed a model of decision making under uncertainty based on non-additive probabilities, and in their paper "Updating Ambiguous Beliefs" they justify the Dempster-Shafer rule based on a maximum likelihood procedure. This note shows, in the context of Schmeidler-Gilboa preferences under uncertainty, that the Dempster-Shafer rule is in general not ex-ante optimal. This contrasts with Brown's result that Bayes' rule is ex-ante optimal for standard Savage preferences with additive probabilities.
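The Dempster-Shafer rule can be compared with the "full Bayesian" generalized conditioning rule on a small example (the capacity here is an invented convex distortion of the uniform measure, not taken from the note):

```python
def nu(A):
    """Convex distortion of the uniform measure on {a, b, c}: nu(A) = P(A)**2."""
    return (len(A) / 3.0) ** 2

OMEGA = frozenset("abc")

def ds_update(A, E):
    """Dempster-Shafer conditioning of a capacity on event E."""
    Ec = OMEGA - E
    return (nu((A & E) | Ec) - nu(Ec)) / (1.0 - nu(Ec))

def bayes_update(A, E):
    """'Full Bayesian' generalized conditioning, for comparison."""
    Ec = OMEGA - E
    return nu(A & E) / (nu(A & E) + 1.0 - nu((A & E) | Ec))

A, E = frozenset("a"), frozenset("ab")
print(ds_update(A, E), bayes_update(A, E))   # the two rules disagree
```

On this capacity the two updates give 3/8 and 1/6 respectively, so the choice of rule is substantive, which is exactly why its ex-ante optimality is worth examining.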
Abstract:
Non-technical loss is not a problem with a trivial solution or a merely regional character, and its minimization underpins investment in product quality and in the maintenance of power systems, in the competitive environment introduced after the period of privatization on the national scene. In this paper, we show how to improve the training phase of a neural network-based classifier using a recently proposed meta-heuristic technique called Charged System Search, which is based on the interactions between electrically charged particles. The experiments were carried out in the context of non-technical losses in power distribution systems, on a dataset obtained from a Brazilian electrical power company, and demonstrated the robustness of the proposed technique against several other nature-inspired optimization techniques for training neural networks. Thus, it is possible to improve some applications on Smart Grids. © 2013 IEEE.
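A highly simplified, Coulomb-inspired population search in the spirit of Charged System Search can be sketched on a toy neuron-fitting task (the update rule, constants, and problem are illustrative inventions; the actual CSS of the literature and the paper's classifier are considerably more elaborate):

```python
import numpy as np

def css_minimize(loss, dim, n=20, iters=150, seed=0):
    """Toy charged-population search: candidate solutions carry charges that
    grow with fitness, and each one drifts toward better-charged solutions."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n, dim))
    v = np.zeros_like(x)
    best_x, best_f = None, np.inf
    for _ in range(iters):
        f = np.array([loss(p) for p in x])
        if f.min() < best_f:
            best_f, best_x = f.min(), x[f.argmin()].copy()
        q = (f.max() - f) / (f.max() - f.min() + 1e-12)   # charge in [0, 1]
        force = np.zeros_like(x)
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:               # attracted only by better ones
                    d = x[j] - x[i]
                    force[i] += q[j] * d / (np.linalg.norm(d) + 1e-9)
        v = 0.5 * v + 0.2 * force / n + 0.1 * rng.standard_normal(x.shape)
        x = x + v
    return best_x, best_f

# Toy stand-in for classifier training: fit a logistic neuron to AND.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0, 1.0])

def loss(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w[:2] + w[2])))
    return float(np.mean((p - y) ** 2))

w, err = css_minimize(loss, dim=3)
print(err)   # should beat the 0.25 error of the trivial zero-weight neuron
```

The population update plays the role that gradient descent usually plays in training, which is the substitution the paper evaluates at scale.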
Abstract:
The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data on birth weight (n=141,567), yearling weight (n=58,124), and scrotal circumference (n=20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indices and eigenvalues of the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated with the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values from 1.03 to 70.20 for RM and from 1.03 to 60.70 for R. Collinearity increased with the number of variables in the model and with the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partially due to the natural connection between these covariables as fractions of the biological types in breed composition.
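The VIF diagnostic used above can be computed directly (a generic sketch on synthetic data, not the paper's models or cattle records):

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2 comes
    from regressing column j on the remaining columns (with an intercept)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        yj = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, yj, rcond=None)
        resid = yj - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((yj - yj.mean()) @ (yj - yj.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)              # independent
print([round(v, 2) for v in vif(np.column_stack([x1, x2, x3]))])
```

The two collinear columns receive VIFs far above the usual warning threshold of 10, while the independent column stays near 1.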
Abstract:
Rolling Isolation Systems provide a simple and effective means for protecting components from horizontal floor vibrations. In these systems a platform rolls on four steel balls which, in turn, rest within shallow bowls. The trajectories of the balls are uniquely determined by the horizontal and rotational velocity components of the rolling platform, and thus provide nonholonomic constraints. In general, the bowls are not parabolic, so the potential energy function of this system is not quadratic. This thesis presents the application of Gauss's Principle of Least Constraint to the modeling of rolling isolation platforms. The equations of motion are described in terms of a redundant set of constrained coordinates. Coordinate accelerations are uniquely determined at any point in time via Gauss's Principle by solving a linearly constrained quadratic minimization. In the absence of any modeled damping, the equations of motion conserve energy. This mathematical model is then used to find the bowl profile that minimizes response acceleration subject to a displacement constraint.
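The per-timestep computation can be sketched generically (a toy two-mass example, not the rolling-platform model): Gauss's Principle says the realized accelerations minimize a mass-weighted quadratic deviation from the unconstrained accelerations subject to the linear constraints, which reduces to solving a KKT linear system.

```python
import numpy as np

# Gauss's Principle: among accelerations a consistent with the constraints
# A a = c, the realized one minimizes (a - M^{-1} f)^T M (a - M^{-1} f).
# Stationarity gives the KKT system  [M  A^T; A  0] [a; lam] = [f; c].
def constrained_accel(M, f, A, c):
    n, m = M.shape[0], A.shape[0]
    K = np.block([[M, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([f, c]))
    return sol[:n]

# Toy example: two unit masses pushed by different forces, with their
# accelerations constrained to be equal (a rigid link between them).
M = np.eye(2)
f = np.array([1.0, 3.0])
A = np.array([[1.0, -1.0]])    # a1 - a2 = 0
c = np.array([0.0])
a = constrained_accel(M, f, A, c)
print(a)                       # both accelerate at the average force
```

Here the constrained minimization assigns each mass the average acceleration 2.0; in the thesis the same structure is solved at each instant with the nonholonomic rolling constraints in place of the rigid link.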