985 results for Optimal unit commitment
Abstract:
The integration of large amounts of wind energy into power systems raises important operational issues, such as balancing power demand and generation. Pumped storage hydro (PSH) units are seen as one solution to this issue, avoiding the need for wind power curtailment. However, the behavior of a PSH unit may differ considerably when it operates in a liberalized market with some degree of market power. In this regard, this paper develops and presents a new approach for the optimal daily scheduling of a PSH unit in the day-ahead electricity market, in which market power is modeled by a residual inverse demand function with variable elasticity. The results show that increasing degrees of market power of the PSH unit correspond to decreasing levels of storage; therefore, the capacity to integrate wind power is considerably reduced under these circumstances.
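As a toy illustration of the kind of price-maker scheduling problem this abstract describes, the sketch below maximizes a PSH unit's day-ahead profit when the market price responds to its own net output. All parameters (prices, efficiencies, reservoir limits) and the linear residual demand are illustrative assumptions, not the paper's model, which uses a residual inverse demand function with variable elasticity.

```python
# Toy price-maker PSH day-ahead scheduling sketch.  All numbers (prices,
# efficiencies, limits) and the linear residual demand are illustrative
# assumptions, not the paper's variable-elasticity model.
import numpy as np
from scipy.optimize import minimize

T = 24
base_price = 40 + 15 * np.sin(np.linspace(0, 2 * np.pi, T))  # EUR/MWh (assumed)
alpha = 0.05                     # price response to the unit's own net output
eta_pump, eta_gen = 0.85, 0.90   # pumping / generating efficiencies (assumed)
p_max, e_max = 100.0, 600.0      # MW limit and reservoir size in MWh (assumed)

def neg_profit(x):
    # x[t] > 0: generate during hour t; x[t] < 0: pump
    price = base_price - alpha * x       # price falls when the unit sells more
    return -np.sum(price * x)

def reservoir(x):
    # stored energy after each hour, starting half full (assumption)
    flow = np.where(x < 0, -x * eta_pump, -x / eta_gen)
    return e_max / 2 + np.cumsum(flow)

cons = [{"type": "ineq", "fun": reservoir},                      # storage >= 0
        {"type": "ineq", "fun": lambda x: e_max - reservoir(x)}] # storage <= e_max
res = minimize(neg_profit, np.zeros(T), bounds=[(-p_max, p_max)] * T,
               constraints=cons)
print(res.x.round(1))   # hourly pump (-) / generate (+) schedule in MW
```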
Abstract:
Nowadays, many real-time operating systems discretize time using a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations are integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to complete a minimum amount of work before they can be preempted. Assuming such a discrete-time model, Zhu et al. (Proceedings of the 24th IEEE International Real-Time Systems Symposium (RTSS 2003), 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient "boundary fair" algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and we show through experiments conducted on a multicore machine that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), thanks to much lower scheduling overhead. Furthermore, these experimental results show that BF² is barely affected by the length of the system time unit, while PD², the only other existing solution for the scheduling of sporadic tasks in discrete-time systems, sees its numbers of preemptions and migrations and the time spent taking scheduling decisions increase linearly as the time resolution of the system improves.
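The sketch below illustrates the boundary-fair principle mentioned above: scheduling decisions are taken only at boundaries, and each task receives an integer number of time units tracking its fluid share. It is a simplified illustration under assumed utilizations and boundaries, not the actual BF or BF² algorithm.

```python
# Illustration of the "boundary fair" principle: decisions are taken only at
# boundaries, and each task's integer allocation tracks its fluid share u_i*t.
# This is a simplified sketch, not the actual BF/BF2 algorithm (which also
# distributes optional units and handles sporadic job releases).
from math import floor

def boundary_allocations(utilizations, boundaries):
    """utilizations: per-task utilization u_i; boundaries: increasing integer
    time points (job releases / deadlines) where decisions are taken."""
    lag = [0.0] * len(utilizations)   # fractional work carried over per task
    prev, schedule = 0, []
    for b in boundaries:
        interval = b - prev
        alloc = []
        for i, u in enumerate(utilizations):
            owed = lag[i] + u * interval   # fluid share over this interval
            units = floor(owed)            # mandatory integer time units
            lag[i] = owed - units          # remainder carried to next boundary
            alloc.append(units)
        schedule.append((prev, b, alloc))
        prev = b
    return schedule

# Three tasks on two processors, boundaries every four time units (assumed).
print(boundary_allocations([0.5, 0.75, 0.75], [4, 8, 12]))
```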
Abstract:
Shifting from chemical to biotechnological processes is one of the cornerstones of 21st century industry. The production of a great range of chemicals via biotechnological means is a key challenge on the way toward a bio-based economy. However, this shift is occurring at a pace slower than initially expected. The development of efficient cell factories that allow for competitive production yields is of paramount importance for this leap to happen. Constraint-based models of metabolism, together with in silico strain design algorithms, promise to reveal insights into the best genetic design strategies, a step further toward achieving that goal. In this work, a thorough analysis of the main in silico constraint-based strain design strategies and algorithms is presented, their application in real-world case studies is analyzed, and a path for the future is discussed.
Abstract:
This paper examines the optimal design of climate change policies in a context where governments want to encourage the private sector to undertake significant immediate investment in developing cleaner technologies, but the carbon taxes and other environmental policies that could in principle stimulate such investment will be imposed over a very long future. The conventional claim by environmental economists is that environmental policies alone are sufficient to induce firms to undertake optimal investment. However, this argument requires governments to be able to commit to these future taxes, and it is far from clear that governments have this degree of commitment. We assume instead that governments cannot commit, and so both they and the private sector have to contemplate the possibility of there being governments in power in the future that give different (relative) weights to the environment. We show that this lack of commitment has a significant asymmetric effect. Compared to the situation where governments can commit, it increases the incentive of the current government to have the investment undertaken, but reduces the incentive of the private sector to invest. Consequently, governments may need to use additional policy instruments, such as R&D subsidies, to stimulate the required investment.
Abstract:
This paper investigates the conduct of monetary and fiscal policy in the post-ERM period in the UK. Using a simple DSGE New Keynesian model of non-cooperative monetary and fiscal policy interactions under fiscal intra-period leadership, we demonstrate that past policy in the UK is better explained by optimal policy under discretion than under commitment. We estimate the policy objectives of both policy makers. We demonstrate that fiscal policy plays an important role in identifying the monetary policy regime.
Abstract:
We estimate a New Keynesian DSGE model for the Euro area under alternative descriptions of monetary policy (discretion, commitment or a simple rule) after allowing for Markov switching in policy maker preferences and shock volatilities. This reveals that there have been several changes in Euro area policy making, with a strengthening of the anti-inflation stance in the early years of the ERM, which was then lost around the time of German reunification and only recovered following the turmoil in the ERM in 1992. The ECB does not appear to have been as conservative as aggregate Euro-area policy was under Bundesbank leadership, and its response to the financial crisis has been muted. The estimates also suggest that the most appropriate description of policy is that of discretion, with no evidence of commitment in the Euro-area. As a result, although both ‘good luck’ and ‘good policy’ played a role in the moderation of inflation and output volatility in the Euro-area, the welfare gains would have been substantially higher had policy makers been able to commit. We consider a range of delegation schemes as devices to improve upon the discretionary outcome, and conclude that price level targeting would have achieved welfare levels close to those attained under commitment, even after accounting for the existence of the Zero Lower Bound on nominal interest rates.
Abstract:
A multiple-partners assignment game with heterogeneous sales and multiunit demands consists of a set of sellers that own a given number of indivisible units of (potentially many different) goods and a set of buyers who value those units and want to buy at most an exogenously fixed number of units. We define a competitive equilibrium for this generalized assignment game and prove its existence using only linear programming. In particular, we show how to compute equilibrium price vectors from the solutions of the dual linear program associated with the primal linear program defined to find optimal assignments. Using only linear programming tools, we also show (i) that the set of competitive equilibria (pairs of price vectors and assignments) has a Cartesian product structure: each equilibrium price vector is part of a competitive equilibrium with all optimal assignments, and vice versa; (ii) that the set of (restricted) equilibrium price vectors has a natural lattice structure; and (iii) how this structure is translated into the set of agents' utilities that are attainable at equilibrium.
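As a hedged illustration of how equilibrium prices can be read off the dual of the optimal-assignment linear program, the sketch below solves a tiny instance (made-up valuations, one unit per seller, unit demands) and interprets the shadow prices of the seller-capacity constraints as prices. The paper's general multi-unit formulation is richer than this toy case.

```python
# Minimal sketch: solve the optimal-assignment LP and read equilibrium prices
# from the dual (shadow prices) of the seller-capacity constraints.
# Valuations and capacities below are made up for illustration.
import numpy as np
from scipy.optimize import linprog

# 2 buyers x 2 sellers, one unit per seller, each buyer wants at most 1 unit.
value = np.array([[9.0, 4.0],    # buyer 0's valuation of seller 0 / seller 1's unit
                  [7.0, 6.0]])   # buyer 1's valuations
n_buyers, n_sellers = value.shape

c = -value.ravel()               # linprog minimizes, so negate total value
A_ub, b_ub = [], []
for b in range(n_buyers):        # each buyer buys at most 1 unit
    row = np.zeros(n_buyers * n_sellers)
    row[b * n_sellers:(b + 1) * n_sellers] = 1
    A_ub.append(row); b_ub.append(1)
for s in range(n_sellers):       # each seller sells at most 1 unit
    row = np.zeros(n_buyers * n_sellers)
    row[s::n_sellers] = 1
    A_ub.append(row); b_ub.append(1)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=(0, 1), method="highs")
x = res.x.reshape(n_buyers, n_sellers)
# Shadow prices of the seller-capacity constraints act as equilibrium prices.
prices = -np.array(res.ineqlin.marginals[n_buyers:])
print("assignment:\n", x.round(2), "\nprices:", prices)
```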
Abstract:
This paper derives a model of markets with system goods and two technological standards. An established standard incurs lower unit production costs but causes a negative externality. The paper derives the conditions for policy intervention and compares the effect of direct and indirect cost-reducing subsidies in two markets with system goods in the presence of externalities. If consumers are committed to the technology by purchasing one of the components, direct subsidies are preferable. For a medium-low cost difference between technological standards and a low externality cost, it is optimal to provide a direct subsidy only to the first technology adopter. The higher the externality cost, the more technology adopters should be provided with direct subsidies. This effect is robust in all extensions. In the absence of consumers' commitment to a technological standard, indirect and direct subsidies are both desirable. In this case, the subsidy to the first adopter is lower than the subsidy to the second adopter. Moreover, for a low cost difference between technological standards and a low externality cost, the first firm chooses a superior standard without policy intervention. Finally, perfect compatibility between components based on different technological standards enhances the advantage of indirect subsidies for a medium-high externality cost and cost difference between technological standards. Journal of Economic Literature Classification Numbers: C72, D21, D40, H23, L13, L22, L51, O25, O33, O38. Keywords: technological standards; complementary products; externalities; cost-reducing subsidies; compatibility.
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex on the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability." Forthcoming in Advances in Applied Probability Vol. 33 No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL) (which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996)): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
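For reference, the classical Smith (1956) rule that this paper generalizes can be sketched in a few lines: with linear holding costs, classes are served in decreasing order of cost rate over expected processing time. The convex-cost, extended-class indices introduced in the paper are not reproduced here; the data below are made up.

```python
# Sketch of the classical Smith (1956) rule: with linear holding costs,
# serving jobs in decreasing order of cost rate / processing time minimizes
# total holding cost.  The paper's convex-cost dynamic index generalizes this.
def smith_order(jobs):
    """jobs: list of (name, holding_cost_rate, expected_processing_time)."""
    return sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)

jobs = [("A", 4.0, 2.0),   # index 2.0
        ("B", 9.0, 3.0),   # index 3.0
        ("C", 1.0, 1.0)]   # index 1.0
print([name for name, *_ in smith_order(jobs)])  # ['B', 'A', 'C']
```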
Abstract:
To recover a version of Barro's (1979) 'random walk' tax smoothing outcome, we modify Lucas and Stokey's (1983) economy to permit only risk-free debt. This imparts near unit root like behavior to government debt, independently of the government expenditure process, a realistic outcome in the spirit of Barro's. We show how the risk-free-debt-only economy confronts the Ramsey planner with additional constraints on equilibrium allocations that take the form of a sequence of measurability conditions. We solve the Ramsey problem by formulating it in terms of a Lagrangian, and applying a Parameterized Expectations Algorithm to the associated first-order conditions. The first-order conditions and numerical impulse response functions partially affirm Barro's random walk outcome. Though the behaviors of tax rates, government surpluses, and government debts differ, allocations are very close for computed Ramsey policies across incomplete and complete markets economies.
Abstract:
Critical illness is characterised by nutritional and metabolic disorders, resulting in increased muscle catabolism, fat-free mass loss, and hyperglycaemia. The objective of nutritional support is to limit fat-free mass loss, which has negative consequences on clinical outcome and recovery. Early enteral nutrition is recommended by current guidelines as the first-choice feeding route in ICU patients. However, enteral nutrition alone is frequently associated with insufficient coverage of energy requirements, and the resulting energy deficit is correlated with worsened clinical outcome. Controlled trials have demonstrated that, in case of failure of or contraindications to full enteral nutrition, administering parenteral nutrition on top of insufficient enteral nutrition within the first four days after admission could improve clinical outcome and may attenuate fat-free mass loss. Parenteral nutrition should be administered cautiously, using all-in-one solutions, controlling glycaemia, and avoiding overnutrition. Conversely, the systematic use of parenteral nutrition in ICU patients without a clear indication is not recommended during the first 48 hours. Specific methods, such as thigh ultrasound imaging, 3rd lumbar vertebra-targeted computerised tomography and bioelectrical impedance analysis, may be helpful in the future to monitor fat-free mass during the ICU stay. Clinical studies are warranted to demonstrate whether optimal nutritional management during the ICU stay promotes muscle mass and function, improves recovery after critical illness, and reduces overall costs.
Abstract:
The method of stochastic dynamic programming is widely used in behavioral ecology, but it has some shortcomings arising from its use of temporal limits. The authors present an alternative approach based on the methods of renewal theory. The suggested method uses cumulative energy reserves per unit of time as its criterion, which leads to stationary cycles in the state space. This approach allows optimal feeding to be studied by analytic methods.
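A minimal sketch of the rate criterion described above, under made-up candidate feeding cycles: each stationary cycle is scored by its expected net energy gain per unit of time, and the cycle with the highest rate is selected.

```python
# Toy version of the criterion described above: among candidate stationary
# feeding cycles (all numbers made up), pick the one maximizing expected net
# energy gain per unit of time, as in a renewal-reward argument.
cycles = {
    "short trips": {"gain": 5.0, "duration": 3.0},
    "long trips":  {"gain": 12.0, "duration": 8.0},
    "mixed":       {"gain": 8.0, "duration": 5.0},
}
rates = {name: c["gain"] / c["duration"] for name, c in cycles.items()}
best = max(rates, key=rates.get)
print({n: round(r, 2) for n, r in rates.items()}, "->", best)
```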
Abstract:
Quantum states can be used to encode the information contained in a direction, i.e., in a unit vector. We present the best encoding procedure when the quantum state is made up of N spins (qubits). We find that the quality of this optimal procedure, which we quantify in terms of the fidelity, depends solely on the dimension of the encoding space. We also investigate the use of spatial rotations on a quantum state, which provide a natural and less demanding encoding. In this case we prove that the fidelity is directly related to the largest zeros of the Legendre and Jacobi polynomials. We also discuss our results in terms of the information gain.
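The abstract relates the fidelity of rotation-based encodings to the largest zeros of Legendre and Jacobi polynomials. The sketch below only computes the largest zero of the Legendre polynomial P_N for a few N (via Gauss-Legendre quadrature nodes); the exact fidelity formula is derived in the paper and is not reproduced here.

```python
# Largest zeros of the Legendre polynomials P_N for a few N, computed as the
# Gauss-Legendre quadrature nodes; the abstract states that the fidelity of
# rotation-based encodings is directly related to these zeros (the exact
# relation is derived in the paper and not reproduced here).
import numpy as np

for n in (2, 4, 8, 16):
    nodes, _ = np.polynomial.legendre.leggauss(n)   # nodes are the zeros of P_n
    print(n, round(nodes.max(), 6))                 # largest zero tends to 1
```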
Abstract:
OBJECTIVES: We sought to develop an automated methodology for the continuous updating of optimal cerebral perfusion pressure (CPPopt) for patients after severe traumatic head injury, using continuous monitoring of cerebrovascular pressure reactivity. We then validated the CPPopt algorithm by determining the association between outcome and the deviation of actual CPP from CPPopt. DESIGN: Retrospective analysis of prospectively collected data. SETTING: Neurosciences critical care unit of a university hospital. PATIENTS: A total of 327 traumatic head-injury patients admitted between 2003 and 2009 with continuous monitoring of arterial blood pressure and intracranial pressure. MEASUREMENTS AND MAIN RESULTS: Arterial blood pressure, intracranial pressure, and CPP were continuously recorded, and the pressure reactivity index was calculated online. Outcome was assessed at 6 months. An automated curve fitting method was applied to determine the CPP at which the pressure reactivity index reaches its minimum value (CPPopt). A time trend of CPPopt was created using a moving 4-hr window, updated every minute. Identification of CPPopt was, on average, feasible during 55% of the whole recording period. Patient outcome correlated with the continuously updated difference between median CPP and CPPopt (chi-square=45, p<.001; outcome dichotomized into fatal and nonfatal). Mortality was associated with relative "hypoperfusion" (CPP<CPPopt), severe disability with "hyperperfusion" (CPP>CPPopt), and favorable outcome with smaller deviations of CPP from the individualized CPPopt. While deviations from global target CPP values of 60 mm Hg and 70 mm Hg were also related to outcome, these relationships were less robust. CONCLUSIONS: Real-time CPPopt could be identified during the recording time for the majority of patients. Patients with a median CPP close to CPPopt were more likely to have a favorable outcome than those in whom median CPP was widely different from CPPopt. Deviations from individualized CPPopt were more predictive of outcome than deviations from a common target CPP. CPP management to optimize cerebrovascular pressure reactivity should be the subject of a future clinical trial in severe traumatic head-injury patients.
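A minimal sketch of the CPPopt idea described in this abstract, under assumed bin edges, window length and synthetic data: within a time window, PRx is averaged in CPP bins, a U-shaped quadratic is fitted, and the CPP at its minimum is reported as CPPopt. This is an illustration, not the authors' exact curve-fitting implementation.

```python
# Minimal CPPopt sketch: average PRx in CPP bins over a window, fit a U-shaped
# quadratic, and return the CPP at its minimum.  Bin edges, window length and
# synthetic data are assumptions for illustration only.
import numpy as np

def cppopt(cpp, prx, bin_edges=np.arange(50, 101, 5)):
    centers, means = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (cpp >= lo) & (cpp < hi)
        if mask.sum() >= 5:                      # require a few samples per bin
            centers.append((lo + hi) / 2)
            means.append(prx[mask].mean())
    if len(centers) < 3:
        return None                              # not enough data to fit a curve
    a, b, c = np.polyfit(centers, means, 2)      # PRx ~ a*CPP^2 + b*CPP + c
    if a <= 0:
        return None                              # fitted curve is not U-shaped
    return -b / (2 * a)                          # CPP at the PRx minimum

# Synthetic 4-hour window sampled once per minute (assumed), true optimum ~75 mmHg.
rng = np.random.default_rng(0)
cpp = rng.uniform(55, 95, 240)
prx = 0.0004 * (cpp - 75) ** 2 - 0.2 + rng.normal(0, 0.05, 240)
print(round(cppopt(cpp, prx), 1))
```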