370 results for Optimality
Abstract:
This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs), using a singular perturbation approach to deal with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space ℝ^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration and is supposed to have a fast (associated with a small parameter ε > 0) and a slow behavior. Following an approach similar to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as ε goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as ε goes to zero. This enables us to establish the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
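For intuition, the averaging step can be sketched as follows; the notation below is ours, not the paper's, and is only meant to show the shape of the aggregation.

```latex
% Schematic averaging step (notation assumed): regimes j in a class M_i
% are aggregated through the quasi-stationary distribution \nu^i of the
% fast chain restricted to that class,
\[
  \bar{f}_i(x,a) \;=\; \sum_{j \in \mathcal{M}_i} \nu^i_j \, f(j,x,a),
  \qquad
  \bar{\phi}_i(x,a) \;=\; \sum_{j \in \mathcal{M}_i} \nu^i_j \, \phi(j,x,a),
\]
% so the limit control problem sees a single regime i per class, with
% averaged running cost \bar{f}_i and averaged flow \bar{\phi}_i.
```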
Abstract:
In a decentralized setting, the game-theoretical prediction is that only strong blockings are allowed to rupture the structure of a matching. This paper argues that, under indifferences, weak blockings should also be considered when these blockings come from the grand coalition. This solution concept requires stability plus Pareto optimality. A characterization of the set of Pareto-stable matchings for the roommate and marriage models is provided in terms of individually rational matchings whose blocking pairs, if any, are formed with unmatched agents. These matchings always exist and give an economic intuition on how blocking can be done by non-trading agents, so that transactions need not be undone as agents reach the set of stable matchings. Some properties of the Pareto-stable matchings shared by the marriage and roommate models are also obtained.
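The characterization lends itself to a direct computational test. Below is a minimal sketch; all names and encodings are ours, assuming a strong blocking pair is one whose members both strictly prefer each other, with `pref[i][i]` encoding the utility of staying single.

```python
# Hypothetical check of the characterization above for a roommate market:
# a matching qualifies if it is individually rational and every strong
# blocking pair contains at least one unmatched agent.

def blocking_pairs(pref, match):
    """Yield pairs that strictly prefer each other to their current situation."""
    agents = list(pref)
    for a in range(len(agents)):
        for b in range(a + 1, len(agents)):
            i, j = agents[a], agents[b]
            if match.get(i) == j:
                continue  # already matched to each other
            pi, pj = match.get(i, i), match.get(j, j)  # partner, or self if single
            if pref[i][j] > pref[i][pi] and pref[j][i] > pref[j][pj]:
                yield (i, j)

def satisfies_characterization(pref, match):
    # individual rationality: nobody strictly prefers being single
    if any(pref[i][i] > pref[i][m] for i, m in match.items()):
        return False
    # blocking pairs, if any, must be formed with unmatched agents
    return all(i not in match or j not in match
               for i, j in blocking_pairs(pref, match))
```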
Abstract:
An analysis of the relationships of the major arthropod groups was undertaken using mitochondrial genome data to examine the hypotheses that Hexapoda is polyphyletic and that Collembola is more closely related to branchiopod crustaceans than to insects. We sought to examine the sensitivity of this relationship to outgroup choice, data treatment, gene choice and the optimality criteria used in the phylogenetic analysis of mitochondrial genome data. Additionally, we sequenced the mitochondrial genome of an archaeognathan, Nesomachilis australica, to improve taxon selection in the apterygote insects, a group poorly represented in previous mitochondrial phylogenies. The sister group of the Collembola was rarely resolved in our analyses with a significant level of support. The use of different outgroups (myriapods, nematodes, or annelids + mollusks) resulted in many different placements of Collembola. The way in which the dataset was coded for analysis (DNA, DNA with the exclusion of third codon positions, and amino acids) also had marked effects on tree topology. We found that nodal support was spread evenly throughout the 13 mitochondrial genes, and the exclusion of genes resulted in significantly less resolution in the inferred trees. Optimality criteria had a much lesser effect on topology than the preceding factors; parsimony and Bayesian trees for a given dataset and treatment were quite similar. We therefore conclude that the relationships of the extant arthropod groups as inferred from mitochondrial genomes are highly vulnerable to outgroup choice, data treatment and gene choice, and that no consistent alternative hypothesis of Collembola's relationships is supported. Pending the resolution of these identified problems with the application of mitogenomic data to basal arthropod relationships, it is difficult to justify the rejection of hexapod monophyly, which is well supported on morphological grounds. © The Willi Hennig Society 2004.
Abstract:
Biogeography deals with the combined analysis of the spatial and temporal components of the evolutionary process. To this end, biogeographical analysis should consider two extra steps: a reciprocal illumination step and a consilience step. Even if the traditional challenges of biogeography were successfully handled, the hypothesis obtained is not necessarily meaningful in biogeographical terms: it needs continuous testing in the light of external hypotheses. For this reason, a concept analogous to Hennig's reciprocal illumination is valuable, as well as a sort of biogeographical consilience in Whewell's sense. First, through the search for different classes of evidence, information useful for improving the hypothesis can be accessed via reciprocal illumination. Then, a more general hypothesis would arise through a consilience process, when the hypothesis explains phenomena not contemplated during its construction, such as the distribution of other taxa or the existence (or absence) of fossils. This procedure aims to evaluate the robustness of biogeographical hypotheses as scientific theories. Such theories are reliable descriptions of how life changes its form in both space and time, bringing historical biogeography close to Croizat's statement of evolution as a three-dimensional phenomenon.
Abstract:
We explore the task of optimal quantum channel identification and, in particular, the estimation of a general one-parameter quantum process. We derive new characterizations of optimality and apply the results to several examples, including the qubit depolarizing channel and the harmonic oscillator damping channel. We also discuss the geometry of the problem and illustrate the usefulness of entanglement in process estimation.
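For concreteness, the qubit depolarizing channel named above is the standard one-parameter family (textbook form, not a reconstruction of the paper's results):

```latex
\[
  \mathcal{E}_p(\rho) \;=\; (1-p)\,\rho \;+\; p\,\frac{I}{2},
  \qquad p \in [0,1],
\]
% channel identification then reduces to estimating the single parameter
% p, possibly with the input probe entangled to an ancilla system.
```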
Abstract:
An operational space map is an efficient tool for comparing a large number of operational strategies to find an optimal choice of setpoints based on a multicriterion. Typically, such a multicriterion includes a weighted sum of the cost of operation and effluent quality. Due to the relatively high cost of aeration, such a definition of optimality results in a relatively high fraction of the effluent total nitrogen being in the form of ammonium. Such a strategy may, however, introduce a risk into operation, because a low degree of ammonium removal leads to a low amount of nitrifiers. This in turn leads to a reduced ability to reject event disturbances, such as large variations in the ammonium load, drops in temperature, the presence of toxic/inhibitory compounds in the influent, etc. Hedging is a risk minimisation tool that aims to "reduce one's risk of loss on a bet or speculation by compensating transactions on the other side" (The Concise Oxford Dictionary, 1995). In wastewater treatment plant operation, hedging can be applied by choosing a higher level of ammonium removal to increase the amount of nitrifiers. This is a sensible way to introduce disturbance rejection ability into the multicriterion. In practice, this is done by deciding upon an internal effluent ammonium criterion. In some countries, such as Germany, a separate criterion already applies to the level of ammonium in the effluent. However, in most countries the effluent criterion applies to total nitrogen only. In these cases, an internal effluent ammonium criterion should be selected in order to secure proper disturbance rejection ability.
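A minimal sketch of how such a hedged multicriterion could be encoded; all names, weights and the internal ammonium limit are illustrative placeholders, not values from the text.

```python
# Illustrative encoding of the hedged multicriterion (placeholder values).

def multicriterion(cost, effluent_quality, w_cost=0.5, w_eq=0.5):
    """Weighted sum of operating cost and effluent quality (lower is better)."""
    return w_cost * cost + w_eq * effluent_quality

def admissible(setpoint, nh4_internal_limit=2.0):
    """Hedging: reject setpoints whose effluent ammonium exceeds the internal
    criterion, even when total nitrogen alone would meet the legal limit."""
    return setpoint["effluent_nh4"] <= nh4_internal_limit

def best_setpoint(candidates):
    """Scan the operational space map over the admissible setpoints only."""
    feasible = [s for s in candidates if admissible(s)]
    return min(feasible, key=lambda s: multicriterion(s["cost"], s["eq"]))
```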
Abstract:
Mathematical Programs with Complementarity Constraints (MPCC) find application in many fields. As the complementarity constraints fail the standard Linear Independence Constraint Qualification (LICQ) and the Mangasarian-Fromovitz constraint qualification (MFCQ) at any feasible point, nonlinear programming theory may not be directly applied to MPCC. However, an MPCC can be reformulated as an NLP problem and solved by nonlinear programming techniques. One of them, the Inexact Restoration (IR) approach, performs two independent phases in each iteration: the feasibility phase and the optimality phase. This work presents two versions of an IR algorithm to solve MPCC. In the feasibility phase, two strategies were implemented, depending on the features of the constraints. One gives more importance to the complementarity constraints, while the other considers the priority of the equality and inequality constraints, neglecting the complementarity ones. The optimality phase uses the same approach in both algorithm versions. The algorithms were implemented in MATLAB and the test problems are from the MacMPEC collection.
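As a runnable illustration of the NLP-reformulation idea, though using a simple Scholtes-type relaxation rather than the paper's Inexact Restoration method, consider the toy MPCC min (x-1)² + (y-2)² subject to x, y ≥ 0 and the complementarity constraint x·y = 0:

```python
# Toy MPCC solved through an NLP reformulation: the complementarity
# constraint x*y = 0 is relaxed to x*y <= t and t is driven to zero.
# (Scholtes-type relaxation, not the paper's IR approach; shown only to
# make the setting concrete.)
import numpy as np
from scipy.optimize import minimize

def solve_relaxed(t, x0):
    cons = [{"type": "ineq", "fun": lambda v: t - v[0] * v[1]}]  # x*y <= t
    res = minimize(lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2,
                   x0, bounds=[(0, None), (0, None)], constraints=cons)
    return res.x

x = np.array([0.5, 0.5])
for t in [1.0, 1e-2, 1e-4, 1e-6]:
    x = solve_relaxed(t, x)   # warm-start each solve at the previous solution
print(x)  # approaches (0, 2), where the complementarity x*y = 0 holds
```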
Abstract:
Multi-objective particle swarm optimization (MOPSO) is a search algorithm based on social behavior. Most existing multi-objective particle swarm optimization schemes are based on Pareto optimality and aim to obtain a representative non-dominated Pareto front for a given problem. Several approaches have been proposed to study the convergence and performance of the algorithm, particularly by assessing the final results. In the present paper, a different approach is proposed: Shannon entropy is used to analyze the MOPSO dynamics along the algorithm's execution. The results indicate that Shannon entropy can be used as an indicator of diversity and convergence for MOPSO problems.
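A minimal sketch of such an entropy indicator (grid size and names are our choices, not the paper's): bin the swarm's objective-space positions into a grid and take the Shannon entropy of the occupancy distribution.

```python
# Shannon entropy of the swarm's spread over objective space, usable as
# a per-iteration diversity/convergence indicator (illustrative sketch).
import numpy as np

def swarm_entropy(objectives, bins=10):
    """Entropy (in nats) of a two-objective swarm's grid occupancy."""
    counts, _, _ = np.histogram2d(objectives[:, 0], objectives[:, 1], bins=bins)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                        # skip empty cells
    return float(-(p * np.log(p)).sum())

# Tracked along the run, a falling entropy suggests the swarm is
# concentrating (convergence); a sustained plateau suggests diversity.
```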
Abstract:
This paper presents a complete quadratic programming formulation of the standard thermal unit commitment problem in power generation planning, together with a novel iterative optimisation algorithm for its solution. The algorithm, based on a mixed-integer formulation of the problem, considers piecewise linear approximations of the quadratic fuel cost function that are dynamically updated in an iterative way, converging to the optimum; this avoids resorting to quadratic programming, making the solution process much quicker. Extensive computational tests on a broad set of benchmark instances showed the algorithm to be flexible and capable of easily incorporating different problem constraints. Indeed, it is able to tackle ramp constraints, which, although very important in practice, were rarely considered in previous publications. Most importantly, optimal solutions were obtained for several well-known benchmark instances, including instances of practical relevance, that were not previously known to have been solved to optimality. Computational experiments and their results showed that the proposed method is both simple and extremely effective.
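One standard way to realise such dynamically updated piecewise linear models, sketched here as our reconstruction rather than the paper's exact rule, is to add tangent cuts to the convex quadratic cost at each iterate:

```latex
% For a convex quadratic fuel cost c(p) = a p^2 + b p + c, the tangent
% at the current iterate \hat{p} gives a valid under-estimating cut
\[
  z \;\ge\; c(\hat{p}) \;+\; \bigl(2a\hat{p} + b\bigr)\,(p - \hat{p}),
\]
% and accumulating one such cut per iteration tightens the mixed-integer
% linear model until its optimum matches the quadratic optimum.
```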
Abstract:
The main objective of this work was to investigate the application of experimental design techniques to the identification of Michaelis-Menten kinetic parameters. More specifically, this study attempts to elucidate the relative advantages and disadvantages of employing complex experimental design techniques, in relation to equidistant sampling, when applied to different reactor operation modes. All studies were supported by simulation data of a generic enzymatic process that obeys the Michaelis-Menten kinetic equation. Different aspects were investigated, such as the influence of the reactor operation mode (batch, fed-batch with pulse-wise feeding and fed-batch with continuous feeding) and of the experimental design optimality criterion on the effectiveness of kinetic parameter identification. The following experimental design optimality criteria were investigated: 1) minimization of the sum of the diagonal elements of the inverse of the Fisher information matrix (FIM) (A-criterion), 2) maximization of the determinant of the FIM (D-criterion), 3) maximization of the smallest eigenvalue of the FIM (E-criterion) and 4) minimization of the quotient between the largest and the smallest eigenvalue (modified E-criterion). The comparison and assessment of the different methodologies was made on the basis of the Cramér-Rao lower bound (CRLB) error with respect to the parameters vmax and Km of the Michaelis-Menten kinetic equation. Concerning the reactor operation mode, it was concluded that fed-batch (pulses) operation is better than batch operation for parameter identification: when the former is adopted, the vmax CRLB error is lowered by 18.6% and the Km CRLB error by 26.4% compared to batch operation. Regarding the optimality criteria, the best method was the A-criterion, with an average vmax CRLB of 6.34% and 5.27% for batch and fed-batch (pulses) operation, respectively, and a Km CRLB of 25.1% and 18.1%, respectively. As a general conclusion, experimental design is justified if the starting parameter CRLB errors are below 19.5% (vmax) and 45% (Km) for batch processes, and below 42% and 50%, respectively, for fed-batch (pulses) processes; otherwise, equidistant sampling is the more rational decision. This conclusion clearly supports that, for fed-batch operation, the use of experimental design is likely to largely improve the identification of Michaelis-Menten kinetic parameters.
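The four criteria above translate directly into code; the sketch below is a plain transcription for a given FIM (variable names are ours).

```python
# The four FIM-based optimality criteria listed above (names are ours).
import numpy as np

def design_criteria(fim):
    eig = np.linalg.eigvalsh(fim)             # FIM is symmetric
    return {
        "A": np.trace(np.linalg.inv(fim)),    # minimise: trace of FIM^-1
        "D": np.linalg.det(fim),              # maximise: determinant of FIM
        "E": eig.min(),                       # maximise: smallest eigenvalue
        "mod-E": eig.max() / eig.min(),       # minimise: condition number
    }

# e.g. for a two-parameter (vmax, Km) problem, candidate sampling plans
# can be ranked by the criterion of choice, such as the A-criterion.
```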
Abstract:
This paper proposes a new strategy to integrate shared resources and precedence constraints among real-time tasks, assuming that no precise information on critical sections and computation times is available. The concept of bandwidth inheritance is combined with a capacity sharing and stealing mechanism to efficiently exchange bandwidth among tasks, minimising the degree of deviation from the ideal system's behaviour caused by inter-application blocking. The proposed Capacity Exchange Protocol (CXP) is simpler than other solutions proposed for sharing resources in open real-time systems, since it does not attempt to return the inherited capacity to blocked servers in exactly the same amount. This loss of optimality is worth the reduced complexity, as the protocol's behaviour nevertheless tends to be fair and outperforms previous solutions in highly dynamic scenarios, as demonstrated by extensive simulations. A formal analysis of CXP is presented, and the conditions under which it is possible to guarantee hard real-time tasks are discussed.
Abstract:
This paper proposes a new strategy to integrate shared resources and precedence constraints among real-time tasks, assuming that no precise information on critical sections and computation times is available. The concept of bandwidth inheritance is combined with a greedy capacity sharing and stealing policy to efficiently exchange bandwidth among tasks, minimising the degree of deviation from the ideal system's behaviour caused by inter-application blocking. The proposed capacity exchange protocol (CXP) focuses on exchanging extra capacities as early, and not necessarily as fairly, as possible. This loss of optimality is worth the reduced complexity, as the protocol's behaviour nevertheless tends to be fair in the long run and outperforms other solutions in highly dynamic scenarios, as demonstrated by extensive simulations.
Abstract:
We consider an optimal control problem with a deterministic finite horizon and state variable dynamics given by a Markov-switching jump–diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman's optimality principle (the dynamic programming principle) and obtain the corresponding Hamilton–Jacobi–Bellman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite horizon consumption–investment problem for a jump–diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous-time finite-state Markov process. We provide a detailed study of the optimal strategies for this problem, for the economically relevant families of power and logarithmic utilities.
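Schematically, the resulting equation has the following form (our notation, not the paper's): for each state i of the Markov chain, the value function V(t, x, i) satisfies

```latex
\[
  \partial_t V(t,x,i)
  + \sup_{u}\Big\{ \mathcal{L}^{u,i} V(t,x,i) + f(t,x,i,u) \Big\}
  + \sum_{j \neq i} q_{ij}\,\bigl[ V(t,x,j) - V(t,x,i) \bigr] \;=\; 0,
\]
% where \mathcal{L}^{u,i} collects the drift and diffusion terms plus an
% integral term against the Levy measure, which is what makes the
% equation a partial integro-differential equation.
```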
Abstract:
Nowadays, many real-time operating systems discretize time using a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations are integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to achieve a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (Proceedings of the 24th IEEE international real-time systems symposium (RTSS 2003), 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient "boundary fair" algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and we show through experiments conducted on a multicore machine that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), benefiting from much lower scheduling overheads. Furthermore, these experimental results show that BF² is barely dependent on the length of the system time unit, while PD², the only other existing solution for the scheduling of sporadic tasks in discrete-time systems, sees its number of preemptions, migrations and the time spent making scheduling decisions increase linearly as the time resolution of the system is improved.
Abstract:
Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the Master's degree in Informatics Engineering.