975 results for Hydrothermal scheduling problems


Relevance:

20.00%

Publisher:

Abstract:

We discuss solvability issues of ℍ−/ℍ2/∞ optimal fault detection problems in the most general setting. A solution approach is presented which successively reduces the initial problem to simpler ones. The last computational step may in general involve the solution of a non-standard ℍ−/ℍ2/∞ optimization problem, for which we discuss possible solution approaches. Using an appropriate definition of the ℍ− index, we provide a complete solution of this problem in the case of the ℍ2-norm. Furthermore, we discuss the solvability issues in the case of the ℍ∞-norm. © 2011 IEEE.
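As background for the index mentioned above (a standard definition from the fault-detection literature, offered here as a sketch rather than the authors' exact formulation), the ℍ− index of a fault-to-residual transfer matrix $G_{rf}$ over a frequency range $\Omega$ is its worst-case smallest gain,

    \[ \|G_{rf}\|_{-} \;=\; \inf_{\omega \in \Omega} \underline{\sigma}\big(G_{rf}(j\omega)\big), \]

where $\underline{\sigma}(\cdot)$ denotes the smallest singular value; a typical detection design then maximises this index subject to an ℍ2 or ℍ∞ bound on the disturbance-to-residual channel.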

Relevance:

20.00%

Publisher:

Abstract:

POMDP algorithms have made significant progress in recent years by allowing practitioners to find good solutions to increasingly large problems. Most approaches (including point-based and policy iteration techniques) operate by refining a lower bound of the optimal value function. Several approaches (e.g., HSVI2, SARSOP, grid-based approaches and online forward search) also refine an upper bound. However, approximating the optimal value function by an upper bound is computationally expensive, and therefore tightness is often sacrificed to improve efficiency (e.g., the sawtooth approximation). In this paper, we describe a new approach to efficiently compute tighter bounds by i) conducting a prioritized breadth-first search over the reachable beliefs, ii) propagating upper bound improvements with an augmented POMDP, and iii) using exact linear programming (instead of the sawtooth approximation) for upper bound interpolation. As a result, we can represent the bounds more compactly and significantly reduce the gap between upper and lower bounds on several benchmark problems. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
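As an illustration of the two interpolation schemes contrasted above, the following minimal Python sketch (illustrative only; the function names, the toy numbers and the use of SciPy's linprog are assumptions, not the authors' implementation) computes a point-based upper bound at a belief b with the sawtooth approximation and with exact LP interpolation; the LP value is never larger.

    # Hedged sketch: sawtooth vs. exact LP interpolation of a POMDP upper bound.
    # Assumed inputs: corner values V_corner[i] = UB(e_i) and stored belief/value
    # pairs (B[k], v[k]) that are consistent with the corner interpolation.
    import numpy as np
    from scipy.optimize import linprog

    def sawtooth_ub(b, V_corner, B, v):
        """Cheap upper bound at b: mix one stored point with the corners."""
        base = b @ V_corner
        best = base
        for b_k, v_k in zip(B, v):
            mask = b_k > 0
            ratio = np.min(b[mask] / b_k[mask])
            best = min(best, base + ratio * (v_k - b_k @ V_corner))
        return best

    def lp_ub(b, V_corner, B, v):
        """Tightest convex combination of all stored points that equals b."""
        pts = np.vstack([np.eye(len(b)), B])
        vals = np.concatenate([V_corner, v])
        A_eq = np.vstack([pts.T, np.ones(len(vals))])
        b_eq = np.concatenate([b, [1.0]])
        return linprog(vals, A_eq=A_eq, b_eq=b_eq,
                       bounds=(0, None), method="highs").fun

    V_corner = np.array([10.0, 8.0, 6.0])                 # toy 3-state example
    B = np.array([[0.5, 0.5, 0.0], [0.2, 0.3, 0.5]])
    v = np.array([7.0, 5.5])
    b = np.array([0.3, 0.4, 0.3])
    print(sawtooth_ub(b, V_corner, B, v), lp_ub(b, V_corner, B, v))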

Relevance:

20.00%

Publisher:

Abstract:

This research aims to develop a capabilities-based conceptual framework for studying the stage-specific innovation problems associated with the dynamic growth process of university spin-outs (hereafter referred to as USOs) in China. Based on the existing literature, pilot cases and five critical cases, this study explores the interconnections between entrepreneurial innovation problems and the configuration of innovative capabilities (those that acquire, mobilise and re-configure key resources) across four growth phases of a firm's lifecycle. The paper aims to contribute to the literature in a holistic manner by providing a theoretical discussion of USOs' development, adding evidence from a rapidly growing emerging economy. To date, studies that investigate the development of USOs in China while recognising their heterogeneity in terms of capabilities remain sparse. Addressing this research gap will be of great interest to entrepreneurs, policy makers and venture investors. © Copyright 2010 Inderscience Enterprises Ltd.

Relevance:

20.00%

Publisher:

Abstract:

Two near-ultraviolet (UV) sensors based on solution-grown zinc oxide (ZnO) nanowires (NWs), sensitive only to photo-excitation at or below 400 nm wavelength, have been fabricated and characterized. Both devices keep all processing steps, including nanowire growth, below 100 °C for compatibility with a wide variety of substrates. The first device type uses a single optical lithography step to allow simultaneous in situ horizontal NW growth from solution and creation of symmetric ohmic contacts to the nanowires. The second device type uses a two-mask optical lithography process to create asymmetric ohmic and Schottky contacts. For the symmetric ohmic contacts, at a voltage bias of 1 V across the device, we observed a 29-fold increase in current over the dark current when the NWs were photo-excited by a 400 nm light-emitting diode (LED) at 0.15 mW cm⁻², with a relaxation time constant (τ) ranging from 50 to 555 s. For the asymmetric ohmic and Schottky contacts under 400 nm excitation, τ is measured between 0.5 and 1.4 s over varying time intervals, which is roughly two orders of magnitude faster than the devices using symmetric ohmic contacts.
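For context on the reported time constants (a generic analysis sketch with synthetic numbers, not the authors' data or code), τ is commonly obtained by fitting a single-exponential decay to the photocurrent transient after the LED is switched off:

    # Hedged sketch: extract a relaxation time constant tau from a photocurrent
    # transient by single-exponential fitting. All values below are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, i_dark, i_photo, tau):
        return i_dark + i_photo * np.exp(-t / tau)

    t = np.linspace(0, 10, 200)                                    # seconds
    noise = np.random.default_rng(0).normal(0, 0.3e-9, t.size)
    i_meas = decay(t, 1e-9, 29e-9, 1.2) + noise

    (_, _, tau_fit), _ = curve_fit(decay, t, i_meas, p0=(1e-9, 30e-9, 1.0))
    print(f"fitted tau = {tau_fit:.2f} s")                         # ~1.2 s here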

Relevance:

20.00%

Publisher:

Abstract:

Papermaking is considered an energy-intensive industry, partly because the machinery and procedures were designed at a time when energy was both cheap and plentiful. A typical paper machine manufactures a variety of different products (grades) which impose variable per-unit raw material and energy costs on the mill. It is known that during a grade change operation the products are not market-worthy. Therefore, two different production regimes, i.e. steady state and grade transition, can be recognised in papermaking practice. Among the costs associated with paper manufacture, the energy cost is 'more variable', owing to the (usually) day-to-day variation of energy prices. Moreover, the production of a grade is often constrained by customer delivery time requirements. Given the above constraints and production modes, the product scheduling technique proposed in this paper aims at optimising the sequence of orders on a single machine so that the cost of production (mainly determined by energy) is minimised. Simulation results obtained from a commercial board machine in the UK confirm the effectiveness of the proposed method. © 2011 IFAC.
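To make the scheduling idea concrete, here is a toy single-machine sequencing sketch in Python (the order data, the fixed grade-transition penalty and the brute-force search are all assumptions for illustration, not the method or data of the paper): orders are sequenced so that production hours land in cheap price slots, grade changes waste an off-spec transition period, and sequences that miss a delivery time are rejected.

    # Hedged sketch: toy energy-aware order sequencing on a single paper machine.
    from itertools import permutations

    prices = [30, 30, 80, 80, 40, 40, 40, 30]     # assumed price per hour slot
    orders = {                                    # name: (hours, MWh/h, due hour, grade)
        "A": (2, 3.0, 8, "light"),
        "B": (2, 3.5, 6, "heavy"),
        "C": (1, 2.5, 8, "light"),
    }
    TRANSITION = 1                                # hours of off-spec product per grade change

    def cost(seq):
        t, total, grade = 0, 0.0, None
        for name in seq:
            hours, mwh, due, g = orders[name]
            if grade is not None and g != grade:
                t += TRANSITION
            if t + hours > due or t + hours > len(prices):
                return None                       # misses delivery or the horizon
            total += sum(prices[t + h] * mwh for h in range(hours))
            t, grade = t + hours, g
        return total

    feasible = [s for s in permutations(orders) if cost(s) is not None]
    best = min(feasible, key=cost)
    print(best, cost(best))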

Relevance:

20.00%

Publisher:

Abstract:

This paper presents explicit solutions for a class of decentralized LQG problems in which players communicate their states with delays. A method for decomposing the Bellman equation into a hierarchy of independent subproblems is introduced. Using this decomposition, all of the gains for the optimal controller are computed from the solution of a single algebraic Riccati equation. © 2012 AACC (American Automatic Control Council).
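For readers unfamiliar with the building block referred to above (a hedged, centralized sketch with toy matrices; it does not reproduce the paper's decentralized decomposition), the gain obtained from a single discrete algebraic Riccati equation looks as follows in Python:

    # Hedged sketch: LQR/LQG state-feedback gain from one discrete algebraic
    # Riccati equation, using toy system matrices.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.0],
                  [0.1]])
    Q = np.eye(2)                      # state weighting
    R = np.array([[0.1]])              # input weighting

    P = solve_discrete_are(A, B, Q, R)                       # Riccati solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)        # u = -K x
    print(K)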

Relevance:

20.00%

Publisher:

Abstract:

Networked control systems (NCSs) have attracted much attention in the past decade due to their many advantages and growing number of applications. Unlike classic control systems, resources in NCSs, such as network bandwidth and communication energy, are often limited, which degrades closed-loop system performance and may even cause the system to become unstable. Seeking a desired trade-off between the closed-loop system performance and the limited resources is thus an active area of research. In this paper, we analyze the trade-off between the sensor-to-controller communication rate and the closed-loop system performance indexed by the conventional LQG control cost. We present and compare several sensor data schedules, and demonstrate that two event-based sensor data schedules provide a better trade-off than an optimal offline schedule. Simulation examples are provided to illustrate the theories developed in the paper. © 2012 AACC (American Automatic Control Council).
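As a toy illustration of the trade-off discussed above (a scalar sketch with assumed parameters and a deliberately naive estimator update, not the schedules or analysis of the paper), an event-based rule that transmits only when the measurement deviates sufficiently from the estimator's prediction trades communication rate against estimation error:

    # Hedged sketch: communication rate vs. estimation error for a simple
    # send-on-deviation event trigger on a scalar linear system.
    import numpy as np

    rng = np.random.default_rng(0)
    a, q, r, T = 0.95, 0.1, 0.1, 5000

    for delta in (0.0, 0.5, 1.0, 2.0):            # delta = 0 transmits every step
        x, xhat, sent, sq_err = 0.0, 0.0, 0, 0.0
        for _ in range(T):
            x = a * x + rng.normal(0, np.sqrt(q))         # process
            y = x + rng.normal(0, np.sqrt(r))             # measurement
            xhat = a * xhat                               # estimator prediction
            if abs(y - xhat) > delta:                     # event trigger at sensor
                xhat, sent = y, sent + 1                  # naive measurement update
            sq_err += (x - xhat) ** 2
        print(f"delta={delta:3.1f}  rate={sent / T:5.2f}  mse={sq_err / T:6.3f}")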

Relevance:

20.00%

Publisher:

Abstract:

Free software and open source projects are often perceived to be of high quality. It has been suggested that the high level of quality found in some free software projects is related to the open development model which promotes peer review. While the quality of some free software projects is comparable to, if not better than, that of closed source software, not all free software projects are successful and of high quality. Even mature and successful projects face quality problems; some of these are related to the unique characteristics of free software and open source as a distributed development model led primarily by volunteers. In exploratory interviews performed with free software and open source developers, several common quality practices as well as actual quality problems have been identified. The results of these interviews are presented in this paper in order to take stock of the current status of quality in free software projects and to act as a starting point for the implementation of quality process improvement strategies.

Relevance:

20.00%

Publisher:

Abstract:

Underground space is commonly exploited both to maximise the utility of costly land in urban development and to reduce the vertical load acting on the ground. Deep excavations are carried out to construct various types of underground infrastructure such as deep basements, subways and service tunnels. Although the soil response to excavation is known in principle, designers lack practical calculation methods for predicting both short- and long-term ground movements. As the understanding of how soil behaves around an excavation in both the short and long term is insufficient and usually empirical, the judgements used in design are also empirical and serious accidents are common. To gain a better understanding of the mechanisms involved in soil excavation, a new apparatus for the centrifuge model testing of deep excavations in soft clay has been developed. This apparatus simulates the field construction sequence of a multi-propped retaining wall during centrifuge flight. A comparison is given between the new technique and the previously used method of draining heavy fluid to simulate excavation in a centrifuge model. The new system has the benefit of giving the correct initial ground conditions before excavation and the proper earth pressure distribution on the retaining structures during excavation, whereas heavy fluid only gives an earth pressure coefficient of unity and is unable to capture any changes in the earth pressure coefficient of soil inside the zone of excavation, for example owing to wall movements. Settlements of the ground surface, changes in pore water pressure, variations in earth pressure, prop forces and bending moments in the retaining wall are all monitored during excavation. Furthermore, digital images taken of a cross-section during the test are analysed using particle image velocimetry to illustrate ground deformation and soil–structure interaction mechanisms. The significance of these observations is discussed.
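The coefficient-of-unity remark above follows from the definition of the earth pressure coefficient (a standard relation, not specific to this paper): with K defined as the ratio of horizontal to vertical stress, a heavy fluid exerts the same pressure in all directions, so

    \[ K \;=\; \frac{\sigma_h}{\sigma_v} \;=\; 1 \quad \text{for a fluid,} \]

whereas in soil K changes as the wall moves towards active or passive conditions, which is why the fluid-draining technique cannot capture those changes.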

Relevance:

20.00%

Publisher:

Abstract:

Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in latent variables, unlike maximum a posteriori (MAP) methods, yet generally requiring less computational time than Markov chain Monte Carlo (MCMC) methods. In particular, the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free-energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
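For reference, the free energy optimised by vEM is the standard evidence lower bound (stated here in its usual form, which matches the quantities discussed above):

    \[ \mathcal{F}(q,\theta) \;=\; \big\langle \log p(y, x \mid \theta) \big\rangle_{q(x)} + \mathrm{H}[q(x)] \;=\; \log p(y \mid \theta) \;-\; \mathrm{KL}\big[\, q(x) \,\big\|\, p(x \mid y, \theta) \,\big], \]

so the E-step maximises F over q within a restricted family (e.g. a factorised, mean-field q) and the M-step maximises F over θ. Because the KL term heavily penalises q placing mass where the posterior has little, the optimal restricted q tends to be more compact than the true posterior, which is the compactness property referred to above.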