68 results for Linear optimization approach
in CentAUR: Central Archive University of Reading - UK
Abstract:
In this paper a new system identification algorithm is introduced for Hammerstein systems based on observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a non-uniform rational B-spline (NURB) neural network. The proposed system identification algorithm for this NURB-network-based Hammerstein system consists of two successive stages. First, the shaping parameters in the NURB network are estimated using a particle swarm optimization (PSO) procedure. Then the remaining parameters are estimated by the singular value decomposition (SVD) method. Numerical examples, including a model-based controller, are utilized to demonstrate the efficacy of the proposed approach. The controller consists of computing the inverse of the nonlinear static function approximated by the NURB network, followed by a linear pole-assignment controller.
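The two-stage structure can be sketched on a toy system. The sketch below substitutes a simple polynomial basis for the NURB network and omits the PSO stage entirely; it only illustrates the second stage, where overparameterised least squares yields products of linear-block and nonlinearity parameters, which an SVD rank-one factorisation then separates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hammerstein system: static nonlinearity f(u) = u + 0.5*u^2
# (a stand-in for the NURB network) feeding an FIR linear block
# y(t) = 1.0*f(u(t-1)) + 0.4*f(u(t-2)).
N = 500
u = rng.uniform(-1.0, 1.0, N)
f = u + 0.5 * u**2
y = np.zeros(N)
y[2:] = 1.0 * f[1:-1] + 0.4 * f[:-2]

# Stage A (overparameterised least squares): estimate all products
# a_i * c_j of linear weights a and nonlinearity coefficients c.
Phi = np.column_stack([u[1:-1], u[1:-1]**2, u[:-2], u[:-2]**2])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)

# Stage B (SVD): the product matrix is rank one, so its leading
# singular pair recovers a and c up to a common scale factor.
Theta = theta.reshape(2, 2)
U, s, Vt = np.linalg.svd(Theta)
a_vec, c_vec = U[:, 0], s[0] * Vt[0]
scale = c_vec[0]                 # fix the scale so that c[0] = 1
a_hat, c_hat = a_vec * scale, c_vec / scale
```

On this noiseless example the factorisation recovers the linear weights [1.0, 0.4] and nonlinearity coefficients [1.0, 0.5] exactly, up to the usual scale ambiguity between the two blocks.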
Abstract:
Using a geometric approach, a composite control—the sum of a slow control and a fast control—is derived for a general class of non-linear singularly perturbed systems. A new and simpler method of composite control design is proposed whereby the fast control is completely designed at the outset. The slow control is then free to be chosen such that the slow integral manifold of the original system approximates a desired design manifold to within any specified order of ε accuracy.
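For reference, the standard singularly perturbed form underlying a composite design can be sketched in generic notation (this is a minimal sketch, not the paper's exact class of systems):

```latex
\dot{x} = f(x, z, u), \qquad
\varepsilon\,\dot{z} = g(x, z, u), \qquad
u = u_s(x) + u_f(x, z),
```

where the fast control $u_f$ is designed first and vanishes on the slow integral manifold $z = h(x, \varepsilon)$, leaving the slow control $u_s$ free to steer that manifold toward the desired design manifold to the stated order of $\varepsilon$.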
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. This damages some of the key properties of space-time codes and can lead to substantial performance degradation. In this paper, we study the design of linear dispersion codes (LDCs) for such asynchronous cooperative communication networks. First, the concept of conventional LDCs is extended to a delay-tolerant version and new design criteria are discussed. Then we propose a new design method that yields delay-tolerant LDCs reaching the optimal Jensen's upper bound on ergodic capacity as well as the minimum average pairwise error probability. The proposed design employs a stochastic gradient algorithm to approach a local optimum. Moreover, it is improved by simulated-annealing-type optimization to increase the likelihood of finding the global optimum. The proposed method allows for a flexible number of nodes, receive antennas and modulated symbols, and a flexible codeword length. Simulation results confirm the performance of the newly proposed delay-tolerant LDCs.
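The simulated-annealing refinement can be sketched generically. The toy one-dimensional objective below merely stands in for the actual code-design criterion (average pairwise error probability over the dispersion matrices); the temperature schedule and proposal width are illustrative:

```python
import math
import random

random.seed(1)

def cost(x):
    # Toy multimodal objective standing in for the code-design
    # criterion; its global minimum lies near x = 2.18.
    return (x - 2.0)**2 + math.sin(5.0 * x)

# Simulated annealing: accept uphill moves with probability
# exp(-delta/T), which shrinks as T cools, so the search can
# escape local optima early and settle into a good basin later.
x = 0.0
best, best_cost = x, cost(x)
T = 1.0
for _ in range(5000):
    cand = x + random.gauss(0.0, 0.5)
    delta = cost(cand) - cost(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
        if cost(x) < best_cost:
            best, best_cost = x, cost(x)
    T *= 0.999
```

Tracking the best-so-far state, as above, is the usual safeguard when the final accepted state may sit slightly off the optimum.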
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, subject to the constraint that it recognise no true positives. To this end, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that uses a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is the method we recommend for use in yes-no Bloom filters.
In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
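A minimal sketch of the data structure itself, with bit arrays stored as Python integers and hash positions derived from SHA-256. The ILP/ADP selection of which false positives to store is not implemented here; `exclude` simply takes them as given:

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # k hash positions per item, derived from salted SHA-256.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

class YesNoBloomFilter:
    """Yes-filter stores the set; no-filter stores chosen false positives."""
    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)
        self.no = BloomFilter(m_no, k)

    def add(self, item):
        self.yes.add(item)

    def exclude(self, false_positive):
        # In the paper these items are chosen by the ILP/ADP
        # optimisation; here the caller supplies them directly.
        self.no.add(false_positive)

    def __contains__(self, item):
        return item in self.yes and item not in self.no
```

Any item placed in the no-filter is then rejected on lookup, which is exactly the mechanism the optimisation exploits (it would only ever place yes-filter false positives there, never true positives).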
Abstract:
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, all of which are linear. With data available in an online fashion, the performance of all candidate sub-models is monitored on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
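The constrained closed-form combination step can be sketched as follows. This is generic notation, with the sliding window and RLS updates of the paper omitted; the weights solve a small KKT system:

```python
import numpy as np

def combine_submodels(preds, y):
    """Closed-form combination weights minimising the squared error
    over a data window, subject to the sum-to-one constraint.

    preds : (T, M) array, column j = predictions of sub-model j
    y     : (T,)  target values over the same window
    """
    T, M = preds.shape
    A = preds.T @ preds          # Gram matrix of the predictions
    b = preds.T @ y
    # KKT system for  min ||y - P w||^2  s.t.  1^T w = 1
    K = np.zeros((M + 1, M + 1))
    K[:M, :M] = 2 * A
    K[:M, M] = 1                 # gradient of the constraint
    K[M, :M] = 1                 # the constraint row itself
    rhs = np.concatenate([2 * b, [1.0]])
    return np.linalg.solve(K, rhs)[:M]
```

For example, two sub-models biased by +0.5 and -0.5 around the truth are optimally combined with weights [0.5, 0.5], which cancels both biases exactly.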
Abstract:
The decadal predictability of three-dimensional Atlantic Ocean anomalies is examined in a coupled global climate model (HadCM3) using a Linear Inverse Modelling (LIM) approach. It is found that the evolution of temperature and salinity in the Atlantic, and the strength of the meridional overturning circulation (MOC), can be effectively described by a linear dynamical system forced by white noise. The forecasts produced using this linear model are more skillful than other reference forecasts for several decades. Furthermore, significant non-normal amplification is found under several different norms. The regions from which this growth occurs are found to be fairly shallow and located in the far North Atlantic. Initially, anomalies in the Nordic Seas impact the MOC, and the anomalies then grow to fill the entire Atlantic basin, especially at depth, over one to three decades. It is found that the structure of the optimal initial condition for amplification is sensitive to the norm employed, but the initial growth seems to be dominated by MOC-related basin scale changes, irrespective of the choice of norm. The consistent identification of the far North Atlantic as the most sensitive region for small perturbations suggests that additional observations in this region would be optimal for constraining decadal climate predictions.
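The LIM estimation step can be sketched on synthetic data: a two-variable linear stochastic system stands in for the HadCM3 state vector, and the propagator is recovered from the lag and contemporaneous covariances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a linear system forced by white noise,
#   x(t+1) = B x(t) + eps(t),
# then recover the propagator via the LIM estimate G = C_tau C_0^{-1}.
B = np.array([[0.9, 0.2],
              [0.0, 0.7]])
T = 200_000
eps = rng.normal(size=(T, 2))
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = B @ x[t] + eps[t]

C0 = x[:-1].T @ x[:-1] / (T - 1)     # lag-0 covariance
Ctau = x[1:].T @ x[:-1] / (T - 1)    # lag-1 covariance
G = Ctau @ np.linalg.inv(C0)         # estimated LIM propagator
```

The forecast at lead tau is then G raised to the power tau applied to the initial state, and the leading singular vector of G gives the optimal initial condition for one-step amplification under the L2 norm (the norm sensitivity noted in the abstract corresponds to changing that inner product).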
Abstract:
Moist singular vectors (MSVs) have been applied successfully to predicting mid-latitude storms growing in association with the latent heat of condensation, and tropical cyclone sensitivity has also been assessed. Here this approach is extended to more general tropical weather systems: MSVs are evaluated for understanding and predicting African easterly waves, given the importance of moist processes in their development. First results, without initial moisture perturbations, suggest MSVs may be used advantageously. The perturbations bear structural and energy profiles similar to those of previous idealised non-linear studies and observations. Strong sensitivities prevail in the metrics and trajectories chosen, and the benefits of initial moisture perturbations should be appraised. Copyright © 2009 Royal Meteorological Society
Abstract:
We solve an initial-boundary problem for the Klein-Gordon equation on the half line using the Riemann-Hilbert approach to solving linear boundary value problems advocated by Fokas. The approach we present can be also used to solve more complicated boundary value problems for this equation, such as problems posed on time-dependent domains. Furthermore, it can be extended to treat integrable nonlinearisations of the Klein-Gordon equation. In this respect, we briefly discuss how our results could motivate a novel treatment of the sine-Gordon equation.
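For concreteness, the half-line initial-boundary value problem in question has the standard form (a sketch with the mass normalised to one; the paper's precise data classes are not reproduced here):

```latex
u_{tt} - u_{xx} + u = 0, \qquad 0 < x < \infty,\ t > 0,
```

with initial data $u(x,0) = u_0(x)$, $u_t(x,0) = u_1(x)$ and boundary data $u(0,t) = g_0(t)$; the Fokas method reformulates this as a matrix Riemann-Hilbert problem in the spectral plane.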
Abstract:
This paper presents a new method for the inclusion of nonlinear demand and supply relationships within a linear programming model. An existing method for this purpose is described first and its shortcomings are pointed out, before showing how the new approach overcomes those difficulties, provides a more accurate and 'smooth' (rather than kinked) approximation of the nonlinear functions, and deals with equilibrium under perfect competition instead of handling only the monopolistic situation. The workings of the proposed method are illustrated by extending a previously available sectoral model of UK agriculture.
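The core device, representing a nonlinear benefit function by linear segments so it can sit inside an LP, can be sketched with a quick error check. The quadratic function and the grids below are illustrative, not the paper's model:

```python
import numpy as np

# Concave consumer-benefit function (the integral of a linear inverse
# demand curve); concavity keeps the piecewise-linear version LP-safe.
def benefit(q):
    return 100.0 * q - 0.5 * q**2

q_fine = np.linspace(0.0, 100.0, 1001)

def max_pl_error(n_segments):
    # Separable-programming approximation: linear interpolation
    # between grid points (one LP variable per segment).
    grid = np.linspace(0.0, 100.0, n_segments + 1)
    pl = np.interp(q_fine, grid, benefit(grid))
    return np.max(np.abs(pl - benefit(q_fine)))

# For f'' = -1 the worst-case gap on a segment of length h is h^2/8,
# so doubling the number of segments cuts the error by four.
```

With 10 segments over [0, 100] the worst-case gap is 12.5; with 20 segments it drops to 3.125, which is the sense in which a finer grid gives a smoother, less kinked approximation.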
Abstract:
This study presents a simple new approach for combining empirical forecasts with raw (i.e., not bias-corrected) coupled-model ensemble forecasts in order to make more skillful interval forecasts of ENSO. A Bayesian normal model has been used to combine empirical and raw coupled-model December SST Niño-3.4 index forecasts started at the end of the preceding July (5-month lead time). The empirical forecasts were obtained by linear regression between December and the preceding July Niño-3.4 index values over the period 1950–2001. Coupled-model ensemble forecasts for the period 1987–99 were provided by ECMWF as part of the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER) project. Empirical and raw coupled-model ensemble forecasts alone have similar mean absolute error skill scores, relative to climatological forecasts, of around 50% over the period 1987–99. The combined forecast gives an increased skill score of 74% and provides a well-calibrated and reliable estimate of forecast uncertainty.
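For two normal distributions, the Bayesian combination reduces to precision weighting; a minimal sketch with invented numbers (not the DEMETER values):

```python
def combine_normal(mu_emp, var_emp, mu_mod, var_mod):
    """Combine an empirical regression forecast (treated as the prior)
    with a coupled-model ensemble forecast (treated as the likelihood):
    for normals, precisions add and means are precision-weighted."""
    prec = 1.0 / var_emp + 1.0 / var_mod
    mean = (mu_emp / var_emp + mu_mod / var_mod) / prec
    return mean, 1.0 / prec

# Illustrative December Nino-3.4 forecasts (degrees C):
# empirical 26.0 with variance 0.36, model ensemble 27.0 with 0.64.
combined_mean, combined_var = combine_normal(26.0, 0.36, 27.0, 0.64)
```

The combined mean lands between the two forecasts, closer to the more precise one, and the combined variance is smaller than either input variance, which is why the interval forecast tightens.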
Abstract:
OBJECTIVES: This contribution provides a unifying concept for meta-analysis that integrates the handling of unobserved heterogeneity, study covariates, publication bias and study quality. It is important to consider these issues simultaneously to avoid the occurrence of artifacts, and a method for doing so is suggested here. METHODS: The approach is based upon the meta-likelihood in combination with a general linear nonparametric mixed model, which lays the ground for all inferential conclusions suggested here. RESULTS: The concept is illustrated by means of a meta-analysis investigating the relationship between hormone replacement therapy and breast cancer. The phenomenon of interest has been investigated in many studies over a considerable time, and differing results were reported. In 1992 a meta-analysis by Sillero-Arenas et al. concluded a small but significant overall effect of 1.06 on the relative risk scale. Using the meta-likelihood approach, it is demonstrated here that this meta-analysis is affected by considerable unobserved heterogeneity. Furthermore, it is shown that new methods are available to model this heterogeneity successfully, and it is argued that available study covariates should be included to explain this heterogeneity in the meta-analysis at hand. CONCLUSIONS: The topic of HRT and breast cancer has again very recently become an issue of public debate, when results of a large trial investigating the health effects of hormone replacement therapy were published indicating an increased risk of breast cancer (risk ratio of 1.26). Using an adequate regression model in the previously published meta-analysis, an adjusted effect estimate of 1.14 can be given, which is considerably higher than the one published in the meta-analysis of Sillero-Arenas et al. In summary, it is hoped that the method suggested here contributes further to good meta-analytic practice in public health and clinical disciplines.
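As a concrete, if much simpler, illustration of adjusting a pooled relative risk for unobserved heterogeneity, here is the standard moment-based random-effects estimator. The paper's nonparametric mixed-model meta-likelihood is more general, and the study data below are made up:

```python
import numpy as np

def dersimonian_laird(theta, se):
    """Moment-based random-effects pooling of log relative risks:
    estimate between-study variance tau^2 from the heterogeneity
    statistic Q, then re-pool with inflated study variances."""
    w = 1.0 / se**2
    theta_fe = np.sum(w * theta) / np.sum(w)      # fixed-effect pool
    Q = np.sum(w * (theta - theta_fe)**2)         # heterogeneity stat
    k = len(theta)
    denom = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / denom)        # between-study variance
    w_re = 1.0 / (se**2 + tau2)                   # random-effects weights
    return np.sum(w_re * theta) / np.sum(w_re), tau2
```

With heterogeneous study effects, tau^2 comes out positive and the random-effects pool down-weights precise outliers relative to a naive fixed-effect pool, which is the qualitative behaviour behind the adjusted estimate discussed above.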
Abstract:
Bayesian decision procedures have already been proposed for and implemented in Phase I dose-escalation studies in healthy volunteers. The procedures have been based on pharmacokinetic responses reflecting the concentration of the drug in blood plasma and are conducted to learn about the dose-response relationship while avoiding excessive concentrations. However, in many dose-escalation studies, pharmacodynamic endpoints such as heart rate or blood pressure are observed, and it is these that should be used to control dose-escalation. These endpoints introduce additional complexity into the modeling of the problem relative to pharmacokinetic responses. Firstly, there are responses available following placebo administrations. Secondly, the pharmacodynamic responses are related directly to measurable plasma concentrations, which in turn are related to dose. Motivated by experience of data from a real study conducted in a conventional manner, this paper presents and evaluates a Bayesian procedure devised for the simultaneous monitoring of pharmacodynamic and pharmacokinetic responses. Account is also taken of the incidence of adverse events. Following logarithmic transformations, a linear model is used to relate dose to the pharmacokinetic endpoint and a quadratic model to relate the latter to the pharmacodynamic endpoint. A logistic model is used to relate the pharmacokinetic endpoint to the risk of an adverse event.
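The three-stage model chain described above can be sketched as follows; the model forms (log-linear PK, quadratic PD, logistic adverse-event risk) are from the abstract, but every parameter value below is invented for illustration, since the paper estimates them from trial data:

```python
import math

def log_concentration(log_dose):
    # PK stage: linear model on the log scale, relating log dose
    # to the log plasma concentration.
    return -1.0 + 0.9 * log_dose

def log_pd_response(log_conc):
    # PD stage: quadratic model relating log concentration to the
    # log-transformed pharmacodynamic endpoint (e.g. heart rate).
    return 2.0 + 0.5 * log_conc - 0.1 * log_conc**2

def p_adverse_event(log_conc):
    # Safety stage: logistic model for the risk of an adverse event,
    # increasing in log concentration.
    return 1.0 / (1.0 + math.exp(4.0 - 1.5 * log_conc))

# Chain the stages for a candidate dose:
dose = 10.0
lc = log_concentration(math.log(dose))
pd_value = log_pd_response(lc)
risk = p_adverse_event(lc)
```

In the Bayesian procedure, these three sub-models (plus the placebo responses) are what dose-escalation decisions are evaluated against, dose by dose.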