124 results for subset sum problems
Abstract:
We discuss solvability issues of ℍ−/ℍ2/∞ optimal fault detection problems in the most general setting. A solution approach is presented which successively reduces the initial problem to simpler ones. The last computational step may, in general, involve the solution of a non-standard ℍ−/ℍ2/∞ optimization problem, for which we discuss possible solution approaches. Using an appropriate definition of the ℍ− index, we provide a complete solution of this problem in the case of the ℍ2-norm. Furthermore, we discuss the solvability issues in the case of the ℍ∞-norm. © 2011 IEEE.
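For orientation, one commonly used definition of the ℍ− index and the associated ℍ−/ℍ∞ design objective is sketched below; the symbols R, G_f and G_d (residual generator, fault channel, disturbance channel) are illustrative and need not match the paper's exact formulation.

```latex
% A common definition of the H-minus index over a frequency band Omega,
% together with the resulting H-/H-infinity design objective; R denotes the
% residual generator (post-filter), G_f and G_d the fault and disturbance
% channels (illustrative notation, not necessarily the paper's).
\[
  \|G\|_{-}^{\Omega} \;=\; \inf_{\omega \in \Omega} \underline{\sigma}\bigl(G(j\omega)\bigr),
  \qquad
  \max_{R}\; J(R) \;=\; \frac{\|R\,G_f\|_{-}^{\Omega}}{\|R\,G_d\|_{\infty}} .
\]
```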
Abstract:
We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
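As a sketch of how all interaction orders can be evaluated efficiently, the snippet below combines one-dimensional squared-exponential base kernels through the Newton-Girard recursion for elementary symmetric polynomials; the function name and hyperparameters are hypothetical, and this is not the authors' reference implementation.

```python
import numpy as np

def additive_kernel(x, y, lengthscales, order_variances):
    """Illustrative additive-kernel sketch: one 1-D squared-exponential base
    kernel per input dimension, combined into all interaction orders via the
    Newton-Girard recursion, so the exponentially many interaction terms are
    summed at O(D^2) cost.  (Hypothetical helper, for illustration only.)"""
    D = len(x)
    z = np.exp(-0.5 * ((x - y) / lengthscales) ** 2)          # z_i = k_i(x_i, y_i)
    p = np.array([np.sum(z ** k) for k in range(1, D + 1)])   # power sums p_k
    e = np.zeros(D + 1)                                       # elementary symmetric polynomials
    e[0] = 1.0
    for n in range(1, D + 1):
        e[n] = sum((-1) ** (k - 1) * e[n - k] * p[k - 1] for k in range(1, n + 1)) / n
    # weight each interaction order d by its own variance hyperparameter
    return float(np.dot(order_variances, e[1:]))

# toy usage on a 3-dimensional input pair
x, y = np.array([0.1, 0.4, 0.9]), np.array([0.2, 0.5, 0.3])
k = additive_kernel(x, y, lengthscales=np.ones(3),
                    order_variances=np.array([1.0, 0.5, 0.25]))
```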
Abstract:
This research aims to develop a capabilities-based conceptual framework in order to study the stage-specific innovation problems associated with the dynamic growth process of university spin-outs (hereafter referred to as USOs) in China. Based on the existing literature, pilot cases and five critical cases, this study attempts to explore the interconnections between the entrepreneurial innovation problems and the configuration of innovative capabilities (that acquire, mobilise and re-configure the key resources) throughout the lifecycle of a firm in four growth phases. This paper aims to contribute to the literature in a holistic manner by providing a theoretical discussion of USOs' development and by adding evidence from a rapidly growing emerging economy. To date, studies that investigate the development of USOs in China while recognising the heterogeneity of USOs in terms of capabilities remain sparse. Addressing this research gap will be of great interest to entrepreneurs, policy makers and venture investors. © Copyright 2010 Inderscience Enterprises Ltd.
Abstract:
There are many methods for decomposing signals into a sum of amplitude- and frequency-modulated sinusoids. In this paper we take a new estimation-based approach. Identifying the problem as ill-posed, we show how to regularize the solution by imposing soft constraints on the amplitude and phase variables of the sinusoids. Estimation proceeds using a version of Kalman smoothing. We evaluate the method on synthetic and natural, clean and noisy signals, showing that it outperforms previous decompositions, but at a higher computational cost. © 2012 IEEE.
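One plausible state-space formulation of such a model is sketched below, with slowly drifting amplitudes and near-constant-frequency phases playing the role of the soft constraints; the exact parameterisation used in the paper may differ.

```latex
% Illustrative state-space model for K amplitude/frequency-modulated sinusoids:
% random-walk priors on amplitude and phase act as soft constraints, and
% inference then reduces to (extended) Kalman smoothing.
\begin{align*}
  y_t &= \sum_{k=1}^{K} a_{k,t}\cos\varphi_{k,t} + \varepsilon_t,
      & \varepsilon_t &\sim \mathcal{N}(0,\sigma_y^2),\\
  a_{k,t} &= \lambda_a\, a_{k,t-1} + \eta_{k,t},
      & \eta_{k,t} &\sim \mathcal{N}(0,\sigma_a^2),\\
  \varphi_{k,t} &= \varphi_{k,t-1} + \bar{\omega}_k + \nu_{k,t},
      & \nu_{k,t} &\sim \mathcal{N}(0,\sigma_\varphi^2).
\end{align*}
```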
Abstract:
When searching for characteristic subpatterns in potentially noisy graph data, it appears self-evident that having multiple observations would be better than having just one. However, it turns out that the inconsistencies introduced when different graph instances have different edge sets pose a serious challenge. In this work we address this challenge for the problem of finding maximum weighted cliques. We introduce the concept of the most persistent soft-clique. This is a subset of vertices that 1) is almost fully or at least densely connected, 2) occurs in all or almost all graph instances, and 3) has the maximum weight. We present a measure of clique-ness that essentially counts the number of edges missing to make a subset of vertices into a clique. With this measure, we show that the problem of finding the most persistent soft-clique can be cast either as a) a max-min two-person game optimization problem, or b) a min-min soft-margin optimization problem. Both formulations lead to the same solution when using a partial Lagrangian method to solve the optimization problems. Through experiments on synthetic data and on real social network data we show that the proposed method is able to reliably find soft cliques in graph data, even if the data are distorted by random noise or unreliable observations. Copyright 2012 by the author(s)/owner(s).
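A minimal sketch of the "count the missing edges" idea is given below, assuming adjacency matrices for the observed graph instances; it illustrates the measure only and does not reproduce the paper's max-min game or soft-margin formulations.

```python
import itertools
import numpy as np

def cliqueness(adjacency_list, subset):
    """Illustrative soft-clique score: for a candidate vertex subset, count how
    many edges are missing (relative to a complete subgraph) in each observed
    graph instance, and report the worst case over instances as a crude
    persistence measure.  (Sketch only, not the paper's formulation.)"""
    pairs = list(itertools.combinations(list(subset), 2))
    missing_per_instance = []
    for A in adjacency_list:                     # one adjacency matrix per graph instance
        missing = sum(1 for i, j in pairs if A[i, j] == 0)
        missing_per_instance.append(missing)
    return max(missing_per_instance)             # persistence = robustness across instances

# toy usage: two noisy observations of the same 4-vertex graph
A1 = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]])
A2 = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]])
print(cliqueness([A1, A2], subset=[1, 2, 3]))    # 0 -> {1, 2, 3} is a clique in both
```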
Abstract:
This paper presents explicit solutions for a class of decentralized LQG problems in which players communicate their states with delays. A method for decomposing the Bellman equation into a hierarchy of independent subproblems is introduced. Using this decomposition, all of the gains for the optimal controller are computed from the solution of a single algebraic Riccati equation. © 2012 AACC (American Automatic Control Council).
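The computational core referred to above is a single algebraic Riccati equation; the toy sketch below solves a centralized discrete-time ARE with SciPy and recovers the corresponding feedback gain, without the delayed-sharing information structure of the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy centralized LQR example: the gains follow from one algebraic Riccati
# equation.  The matrices are illustrative placeholders.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

P = solve_discrete_are(A, B, Q, R)                     # stabilizing ARE solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)      # optimal feedback gain, u = -K x
print(K)
```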
Abstract:
Free software and open source projects are often perceived to be of high quality. It has been suggested that the high level of quality found in some free software projects is related to the open development model which promotes peer review. While the quality of some free software projects is comparable to, if not better than, that of closed source software, not all free software projects are successful and of high quality. Even mature and successful projects face quality problems; some of these are related to the unique characteristics of free software and open source as a distributed development model led primarily by volunteers. In exploratory interviews performed with free software and open source developers, several common quality practices as well as actual quality problems have been identified. The results of these interviews are presented in this paper in order to take stock of the current status of quality in free software projects and to act as a starting point for the implementation of quality process improvement strategies.
Abstract:
Underground space is commonly exploited both to maximise the utility of costly land in urban development and to reduce the vertical load acting on the ground. Deep excavations are carried out to construct various types of underground infrastructure such as deep basements, subways and service tunnels. Although the soil response to excavation is known in principle, designers lack practical calculation methods for predicting both short- and long-term ground movements. As the understanding of how soil behaves around an excavation in both the short and long term is insufficient and usually empirical, the judgements used in design are also empirical and serious accidents are common. To gain a better understanding of the mechanisms involved in soil excavation, a new apparatus for the centrifuge model testing of deep excavations in soft clay has been developed. This apparatus simulates the field construction sequence of a multi-propped retaining wall during centrifuge flight. A comparison is given between the new technique and the previously used method of draining heavy fluid to simulate excavation in a centrifuge model. The new system has the benefit of giving the correct initial ground conditions before excavation and the proper earth pressure distribution on the retaining structures during excavation, whereas heavy fluid only gives an earth pressure coefficient of unity and is unable to capture any changes in the earth pressure coefficient of soil inside the zone of excavation, for example owing to wall movements. Settlements of the ground surface, changes in pore water pressure, variations in earth pressure, prop forces and bending moments in the retaining wall are all monitored during excavation. Furthermore, digital images taken of a cross-section during the test are analysed using particle image velocimetry to illustrate ground deformation and soil–structure interaction mechanisms. The significance of these observations is discussed.
Abstract:
Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in latent variables, unlike maximum a posteriori (MAP) methods, and yet generally requiring less computational time than Markov chain Monte Carlo (MCMC) methods. In particular, the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free-energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
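For reference, the free energy optimised by vEM and variational Bayes can be written in the standard form below (our notation); the KL term is the source of the compactness property discussed in the abstract.

```latex
% Standard variational free-energy: the E-step maximises F over q within a
% restricted family, the M-step maximises F over the parameters theta.
\[
  \mathcal{F}(q,\theta)
  = \int q(\mathbf{z}) \log \frac{p(\mathbf{y},\mathbf{z}\mid\theta)}{q(\mathbf{z})}\, d\mathbf{z}
  = \log p(\mathbf{y}\mid\theta)
    - \mathrm{KL}\bigl(q(\mathbf{z})\,\|\,p(\mathbf{z}\mid\mathbf{y},\theta)\bigr).
\]
```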
Abstract:
We propose an algorithm for solving optimization problems defined on a subset of the cone of symmetric positive semidefinite matrices. This algorithm relies on the factorization X = YYᵀ, where the number of columns of Y fixes an upper bound on the rank of the positive semidefinite matrix X. It is thus very effective for solving problems that have a low-rank solution. The factorization X = YYᵀ leads to a reformulation of the original problem as an optimization on a particular quotient manifold. The present paper discusses the geometry of that manifold and derives a second-order optimization method with guaranteed quadratic convergence. It furthermore provides some conditions on the rank of the factorization to ensure equivalence with the original problem. In contrast to existing methods, the proposed algorithm converges monotonically to the sought solution. Its numerical efficiency is evaluated on two applications: the maximal cut of a graph and the problem of sparse principal component analysis. © 2010 Society for Industrial and Applied Mathematics.
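A first-order sketch of the X = YYᵀ idea on the max-cut SDP relaxation is given below, using plain projected gradient ascent with unit-norm rows of Y; the paper itself develops a second-order method on the quotient manifold, which this sketch does not reproduce.

```python
import numpy as np

def maxcut_lowrank(L, rank=3, steps=500, lr=0.1, seed=0):
    """First-order sketch of the X = Y Y^T factorization for the max-cut SDP
    relaxation: maximise <L, Y Y^T>/4 subject to unit-norm rows of Y (so that
    diag(Y Y^T) = 1).  Plain projected gradient ascent, for illustration only."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    Y = rng.standard_normal((n, rank))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)      # feasible start: diag(YY^T) = 1
    for _ in range(steps):
        grad = L @ Y / 2.0                             # gradient of trace(L Y Y^T)/4 w.r.t. Y
        Y += lr * grad                                 # ascent step
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # project rows back onto the unit sphere
    return Y @ Y.T                                     # low-rank feasible point X = Y Y^T

# toy usage on the Laplacian of a 4-cycle
W = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W
X = maxcut_lowrank(L)
```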
Abstract:
Coupled hydrology and water quality models are an important tool today, used in the understanding and management of surface water and watershed areas. Such problems are generally subject to substantial uncertainty in parameters, process understanding, and data. Component models, drawing on different data, concepts, and structures, are affected differently by each of these uncertain elements. This paper proposes a framework wherein the response of component models to their respective uncertain elements can be quantified and assessed, using a hydrological model and water quality model as two exemplars. The resulting assessments can be used to identify model coupling strategies that permit more appropriate use and calibration of individual models, and a better overall coupled model response. One key finding was that an approximate balance of water quality and hydrological model responses can be obtained using both the QUAL2E and Mike11 water quality models. The balance point, however, does not support a particularly narrow surface response (or stringent calibration criteria) with respect to the water quality calibration data, at least in the case examined here. Additionally, it is clear from the results presented that the structural source of uncertainty is at least as significant as parameter-based uncertainties in areal models. © 2012 John Wiley & Sons, Ltd.