370 results for Optimality
Abstract:
This article presents and discusses necessary conditions of optimality for infinite horizon dynamic optimization problems with inequality state constraints and set inclusion constraints at both endpoints of the trajectory. The cost functional depends on the state variable at the final time, and the dynamics are given by a differential inclusion. Moreover, the optimization is carried out over asymptotically convergent state trajectories. The novelty of the proposed optimality conditions for this class of problems is that the boundary condition of the adjoint variable is given as a weak directional inclusion at infinity. This improves on the currently available necessary conditions of optimality for infinite horizon problems. © 2011 IEEE.
Abstract:
This work considers nonsmooth optimal control problems and provides two new sufficient conditions of optimality. The first condition involves the Lagrange multipliers, while the second does not. We show that under the first new condition all processes satisfying the Pontryagin Maximum Principle (called MP-processes) are optimal. Conversely, we prove that optimal control problems in which every MP-process is optimal necessarily obey our first optimality condition. The second condition is more natural, but it is applicable only to normal problems, and the converse holds only for smooth problems. Nevertheless, it is proved that for the class of normal smooth optimal control problems the two conditions are equivalent. Some examples illustrating the features of these sufficiency conditions are presented. © 2012 Springer Science+Business Media New York.
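For orientation, the conditions defining an MP-process take the following form in the standard smooth setting; this is a generic textbook statement with illustrative symbols f, L, U, p, lambda, not the paper's nonsmooth formulation, which replaces the derivatives by suitable subdifferentials:

\[
\begin{aligned}
&H(x,u,p,\lambda) = \langle p, f(x,u)\rangle - \lambda\, L(x,u), \qquad \lambda \ge 0,\ (\lambda, p(\cdot)) \ne 0,\\
&\dot{x}(t) = f\big(x(t),u(t)\big), \qquad \dot{p}(t) = -\nabla_x H\big(x(t),u(t),p(t),\lambda\big),\\
&H\big(x(t),u(t),p(t),\lambda\big) = \max_{v \in U} H\big(x(t),v,p(t),\lambda\big) \quad \text{for a.e. } t,
\end{aligned}
\]

together with transversality conditions at the endpoints; a problem is called normal when the cost multiplier \lambda can be taken positive.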
Abstract:
This paper aims to show that Optimality Theory provides new ways of explaining sound change other than re-ordering in the ranking of constraints. It examines the diachronic aspects of nasal harmony in the Mundurukú family, of the Tupi stock. A comparison between the modern Mundurukú and Kuruaya systems highlights that the original system, Proto-Mundurukú, had properties similar to those currently observed in Kuruaya. In particular, the targets of nasal spreading included voiced stops and sonorants, while voiceless obstruents were transparent. This system evolved into another in Pre-Mundurukú, when new contrasts were introduced into the language, turning obstruents into opaque segments and thus blocking nasalization. The analysis, formalized within Optimality Theory, shows that no re-ranking of the harmony constraints took place; they simply became more restricted, as shown by the relative chronology that gave rise to the modern Mundurukú system. The study also discusses the consequences of this change for the synchronic grammar and how it accounts for the irregularities of the process.
Abstract:
This article deals with a vector optimization problem with cone constraints in a Banach space setting. By making use of a real-valued Lagrangian and the concept of generalized subconvex-like functions, weakly efficient solutions are characterized through saddle point type conditions. The results, jointly with the notion of generalized Hessian (introduced in [Cominetti, R., Correa, R.: A generalized second-order derivative in nonsmooth optimization. SIAM J. Control Optim. 28, 789–809 (1990)]), are applied to obtain second-order necessary and sufficient optimality conditions (without requiring twice differentiability of the objective and constraint functions) for the particular case in which the functionals involved map a general Banach space into finite-dimensional spaces.
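As a point of reference, the scalar saddle-point template that such characterizations generalize reads as follows; the symbols are generic, and the paper's real-valued Lagrangian for the cone-constrained vector problem is built through a scalarization, so this shows only the underlying pattern, not the paper's exact statement:

\[
L(x,\lambda) = f(x) + \langle \lambda, g(x)\rangle, \qquad
L(\bar{x},\lambda) \;\le\; L(\bar{x},\bar{\lambda}) \;\le\; L(x,\bar{\lambda})
\quad \text{for all } x \in X,\ \lambda \in K^{*},
\]

where K is the ordering cone of the constraint g(x) \in -K and K^{*} its dual cone; a saddle point (\bar{x},\bar{\lambda}) of this kind certifies optimality of \bar{x}.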
Abstract:
In this thesis we address a collection of Network Design problems that are strongly motivated by applications from Telecommunications, Logistics and Bioinformatics. In most cases we justify the need to take uncertainty in some of the problem parameters into account, and different Robust optimization models are used to hedge against it. Mixed integer linear programming formulations, along with sophisticated algorithmic frameworks, are designed, implemented and rigorously assessed for the majority of the studied problems. The obtained results yield the following observations: (i) relevant real problems can be effectively represented as (discrete) optimization problems within the framework of network design; (ii) uncertainty can be appropriately incorporated into the decision process if a suitable robust optimization model is considered; (iii) optimal, or nearly optimal, solutions can be obtained for large instances if a tailored algorithm that exploits the structure of the problem is designed; (iv) a systematic and rigorous experimental analysis makes it possible to understand both the characteristics of the obtained (robust) solutions and the behavior of the proposed algorithm.
Abstract:
We consider a wide class of models that includes the highly reliable Markovian systems (HRMS) often used to represent the evolution of multi-component systems in reliability settings. Repair times and component lifetimes are random variables that follow a general distribution, and the repair service adopts a priority repair rule based on system failure risk. Since crude simulation has proved to be inefficient for highly dependable systems, the RESTART method is used for the estimation of steady-state unavailability and other reliability measures. In this method, a number of simulation retrials are performed when the process enters regions of the state space where the chance of occurrence of a rare event (e.g., a system failure) is higher. The main difficulty involved in applying this method is finding a suitable function, called the importance function, to define the regions. In this paper we introduce an importance function which, for unbalanced systems, represents a great improvement over the importance function used in previous papers. We also demonstrate the asymptotic optimality of RESTART estimators in these models. Several examples are presented to show the effectiveness of the new approach, and probabilities up to the order of 10⁻⁴² are accurately estimated with little computational effort.
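To illustrate the splitting idea behind RESTART (not the paper's model or its importance function), here is a minimal fixed-splitting sketch for a toy birth-death walk, where the importance function is simply the current level of the walk; all names, thresholds, and parameters are illustrative:

import random

def step(x):
    # Toy birth-death walk with downward drift: reaching high levels is a rare event.
    return x + (1 if random.random() < 0.4 else -1)

def run_to_threshold(x, target):
    # Simulate until the walk reaches `target` (success) or falls back to 0 (failure).
    while 0 < x < target:
        x = step(x)
    return x >= target, x

def splitting_estimate(levels, n0=1000, retrials=10):
    # `levels` are increasing thresholds of the importance function (here, the level itself).
    # Stage 0 is crude Monte Carlo up to the first threshold; each later stage restarts
    # `retrials` copies from every state that reached the previous threshold.
    states = []
    hits = 0
    for _ in range(n0):
        ok, x = run_to_threshold(1, levels[0])
        if ok:
            hits += 1
            states.append(x)
    prob = hits / n0
    for nxt in levels[1:]:
        successes, trials = [], 0
        for s in states:
            for _ in range(retrials):
                trials += 1
                ok, x = run_to_threshold(s, nxt)
                if ok:
                    successes.append(x)
        if not successes:
            return 0.0
        prob *= len(successes) / trials
        states = successes
    return prob

# Example: probability of reaching level 20 before returning to 0, starting from 1.
print(splitting_estimate(levels=[5, 10, 15, 20]))

Unlike full RESTART, this sketch kills every retrial that falls back to 0 rather than at the threshold where it was spawned, and uses a fixed number of retrials per threshold; it only shows why re-trialing from promising regions makes events of very small probability estimable with modest effort.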
Abstract:
In the present paper, the endogenous theory of time preference is extended to analyze those processes of capital accumulation and changes in environmental quality that are dynamically optimum with respect to the intertemporal preference ordering of the representative individual of the society in question. The analysis is carried out within the conceptual framework of the dynamic analysis of environmental quality, as developed by a number of economists for specific cases of the fisheries and forestry commons. The duality principles on intertemporal preference ordering and capital accumulation are extended to the situation where processes of capital accumulation are subject to the Penrose effect, which exhibits a marginal decrease in the effect of investment in private and social overhead capital upon the rate at which capital is accumulated. The dynamically optimum time-path of economic activities is characterized by the proportionality of two systems of imputed, or efficient, prices, one associated with the given intertemporal ordering and the other associated with the processes of accumulation of private and social overhead capital. In particular, it is shown that the dynamic optimality of processes of capital accumulation involving both private and social overhead capital is characterized by conditions identical with those involving private capital, with the role of social overhead capital exhibited only indirectly.
Abstract:
In our daily lives, we often must predict how well we are going to perform in the future based on an evaluation of our current performance and an assessment of how much we will improve with practice. Such predictions can be used to decide whether to invest our time and energy in learning and, if we opt to invest, what rewards we may gain. This thesis investigated whether people are capable of tracking their own learning (i.e., their current and future motor ability) and exploiting that information to make decisions related to task reward. In experiment one, participants performed a target aiming task under a visuomotor rotation such that they initially missed the target but gradually improved. After briefly practicing the task, they were asked to select rewards for hits and misses applied to subsequent performance in the task, where selecting a higher reward for hits came at the cost of receiving a lower reward for misses. We found that participants made decisions that were shifted in the direction of the optimal choice, and therefore demonstrated knowledge of their future task performance. In experiment two, participants learned a novel target aiming task in which they were rewarded for target hits. Every five trials, they could choose a target size, which varied inversely with reward value. Although participants' decisions deviated from optimal, a model suggested that they took into account both past performance and predicted future performance when making their decisions. Together, these experiments suggest that people are capable of tracking their own learning and using that information to make sensible decisions related to reward maximization.
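The decision problem in these experiments reduces to an expected-value comparison: given a prediction of one's own future hit probability, each reward schedule is worth p_hit * reward_hit + (1 - p_hit) * reward_miss. A hypothetical sketch follows; the schedules and probability below are illustrative and are not the thesis's actual payoffs:

def expected_reward(reward_hit, reward_miss, p_hit):
    # Expected payoff of a reward schedule under a predicted hit probability.
    return p_hit * reward_hit + (1.0 - p_hit) * reward_miss

# Hypothetical schedules: a higher reward for hits costs a lower reward for misses.
schedules = [(10, 8), (25, 5), (50, 0)]

# The participant's predicted probability of hitting the target after further practice.
predicted_p_hit = 0.8

best = max(schedules, key=lambda s: expected_reward(s[0], s[1], predicted_p_hit))
print("best schedule:", best)  # with p_hit = 0.8 the riskiest schedule wins here

A participant who underestimates future improvement (a lower predicted_p_hit) would rationally pick a safer schedule, which is why accurate tracking of one's own learning matters for these choices.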
Abstract:
Translation of 2 articles from the Russian journal.
Abstract:
The subject of this thesis is the n-tuple network (RAMnet). The major advantage of RAMnets is their speed and the simplicity with which they can be implemented in parallel hardware. On the other hand, this method is not a universal approximator and the training procedure does not involve the minimisation of a cost function. Hence RAMnets are potentially sub-optimal. It is important to understand the source of this sub-optimality and to develop the analytical tools that allow us to quantify the generalisation cost of using this model for any given data. We view RAMnets as classifiers and function approximators and try to determine how critical their lack of universality and optimality is. In order to better understand the inherent restrictions of the model, we review RAMnets, showing their relationship to a number of well-established general models such as Associative Memories, Kanerva's Sparse Distributed Memory, Radial Basis Functions, General Regression Networks and Bayesian Classifiers. We then benchmark the binary RAMnet model against 23 other algorithms using real-world data from the StatLog Project. This large-scale experimental study indicates that RAMnets are often capable of delivering results which are competitive with those obtained by more sophisticated, computationally expensive models. The Frequency Weighted version is also benchmarked and shown to perform worse than the binary RAMnet for large values of the tuple size n. We demonstrate that the main issue in Frequency Weighted RAMnets is adequate probability estimation and propose Good-Turing estimates in place of the more commonly used Maximum Likelihood estimates. Having established the viability of the method numerically, we focus on providing an analytical framework that allows us to quantify the generalisation cost of RAMnets for a given dataset. For the classification network we provide a semi-quantitative argument which is based on the notion of tuple distance. It gives a good indication of whether the network will fail for the given data. A rigorous Bayesian framework with Gaussian process prior assumptions is given for the regression n-tuple net. We show how to calculate the generalisation cost of this net and verify the results numerically for one-dimensional noisy interpolation problems. We conclude that the n-tuple method of classification based on memorisation of random features can be a powerful alternative to slower cost-driven models. The speed of the method comes at the expense of its optimality. RAMnets will fail for certain datasets, but the cases when they do so are relatively easy to determine with the analytical tools we provide.
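As a concrete picture of the binary n-tuple idea discussed above, here is a minimal WISARD-style RAMnet sketch (an illustrative reconstruction, not the thesis's code): each class keeps one set of memorised addresses per random n-bit tuple, training simply records the addresses seen, and classification counts how many tuples of a test pattern hit a memorised address. Nothing here minimises a cost function, which is exactly the source of the potential sub-optimality the thesis analyses.

import random
from collections import defaultdict

class BinaryRAMnet:
    def __init__(self, input_bits, n, num_tuples, seed=0):
        rng = random.Random(seed)
        # Each "RAM" looks at a fixed random tuple of n input bit positions.
        self.tuples = [tuple(rng.sample(range(input_bits), n)) for _ in range(num_tuples)]
        # (class label, tuple index) -> set of addresses seen during training.
        self.memory = defaultdict(set)

    def _address(self, x, tup):
        addr = 0
        for i in tup:
            addr = (addr << 1) | x[i]
        return addr

    def train(self, x, label):
        for t, tup in enumerate(self.tuples):
            self.memory[(label, t)].add(self._address(x, tup))

    def score(self, x, label):
        # Number of tuples whose address for x was memorised for this class.
        return sum(self._address(x, tup) in self.memory[(label, t)]
                   for t, tup in enumerate(self.tuples))

    def classify(self, x, labels):
        return max(labels, key=lambda c: self.score(x, c))

# Toy usage with 8-bit patterns.
net = BinaryRAMnet(input_bits=8, n=3, num_tuples=20)
for x in ([1, 1, 1, 1, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 0]):
    net.train(x, "A")
for x in ([0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1, 1, 1]):
    net.train(x, "B")
print(net.classify([1, 1, 0, 1, 0, 0, 0, 0], ["A", "B"]))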
Abstract:
* This work was completed while the author was visiting the University of Limoges. Support from the laboratoire “Analyse non-linéaire et Optimisation” is gratefully acknowledged.
Abstract:
2000 Mathematics Subject Classification: Primary 90C29; Secondary 90C30.
Abstract:
2000 Mathematics Subject Classification: Primary 90C29; Secondary 49K30.
Abstract:
AMS subject classification: 49J52, 90C30.