956 results for Local linear
Abstract:
The validity of approximating radiative heating rates in the middle atmosphere by a local linear relaxation to a reference temperature state (i.e., "Newtonian cooling") is investigated. Using radiative heating rate and temperature output from a chemistry–climate model with realistic spatiotemporal variability and realistic chemical and radiative parameterizations, it is found that a linear regression model can capture more than 80% of the variance in longwave heating rates throughout most of the stratosphere and mesosphere, provided that the damping rate is allowed to vary with height, latitude, and season. The linear model describes departures from the climatological mean, not from radiative equilibrium. Photochemical damping rates in the upper stratosphere are similarly diagnosed. Three important exceptions, however, are found. The approximation of linearity breaks down near the edges of the polar vortices in both hemispheres. This nonlinearity can be well captured by including a quadratic term. The use of a scale-independent damping rate is not well justified in the lower tropical stratosphere because of the presence of a broad spectrum of vertical scales. The local assumption fails entirely during the breakup of the Antarctic vortex, where large fluctuations in temperature near the top of the vortex influence longwave heating rates within the quiescent region below. These results are relevant for mechanistic modeling studies of the middle atmosphere, particularly those investigating the final Antarctic warming.
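For orientation, the relaxation described above can be sketched schematically as follows; the notation is generic (not taken from the paper), with damping coefficients allowed to vary with height, latitude and season:

\[
Q'(z,\phi,t)\;\approx\;-\,\alpha(z,\phi,\mathrm{season})\,T'(z,\phi,t)\;-\;\beta(z,\phi,\mathrm{season})\,T'(z,\phi,t)^{2},
\]

where Q' and T' denote departures of the longwave heating rate and temperature from their climatological means (not from radiative equilibrium), and the quadratic term is the kind of correction that becomes important near the polar vortex edges.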
Abstract:
To recognize faces in video, face appearances have been widely modeled as piece-wise local linear models which linearly approximate the smooth yet non-linear low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions which are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With the PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately. Instead of assuming a single global within-class covariance, the heteroscedastic PLDA learns different within-class covariances specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets have shown the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
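As background, the PLDA referred to above builds on a Gaussian latent-variable model that can be sketched, in generic notation rather than the paper's, as

\[
x_{ij} \;=\; \mu + F\,h_i + \varepsilon_{ij}, \qquad \varepsilon_{ij}\sim\mathcal{N}\!\left(0,\,\Sigma_w\right),
\]

where h_i is a latent identity variable shared by all samples of class i and F spans the between-class variability; in the heteroscedastic variant described in the abstract, each local model k is assumed to have its own within-class covariance \Sigma_w^{(k)} in place of a single global \Sigma_w.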
Abstract:
We consider the local order estimation of nonlinear autoregressive systems with exogenous inputs (NARX), which may have different local dimensions at different points. By minimizing the kernel-based local information criterion introduced in this paper, strongly consistent estimates of the local orders of the NARX system at the points of interest are obtained. A modification of the criterion and a simple procedure for searching for its minimum are also discussed. The theoretical results derived here are tested by simulation examples.
Abstract:
The use of Power System Stabilizers (PSS) to damp small-magnitude, low-frequency electromechanical oscillations is increasingly important in the operation of modern electric power systems. Conventional stabilizers, with fixed structure and parameters, have been used for this purpose for some decades, but there are operating regions of the system in which these linear stabilizers are not as effective, especially when compared with stabilizers designed using modern control techniques. A Neural PSS, trained from a set of local linear controllers, is used to investigate in which operating regions of the power system the performance of the fixed-parameter stabilizer deteriorates. The better performance of the Neural PSS in these operating regions, compared with the conventional PSS, is demonstrated through nonlinear digital simulations of a single synchronous machine connected to an infinite bus and of a four-generator system.
Abstract:
In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data to choose the best model. The existing literature is very wide, sometimes controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach to model time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). They are four-parameter distributions, which allow the first four moments to be modelled nearly independently: in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We use local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the DGP underlying the process and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness, implicitly assuming the existence of the third moment. The GLDs also suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, the ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
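As an illustration of the local linear regression step mentioned above, here is a minimal sketch (not the thesis code) of a Gaussian-kernel local linear estimator of the conditional mean; the bandwidth, the toy data and the function names are assumptions made for the example:

```python
import numpy as np

def local_linear_mean(x_train, y_train, x_eval, bandwidth):
    """Gaussian-kernel local linear estimate of E[y | x] at each point in x_eval."""
    estimates = np.empty_like(x_eval, dtype=float)
    for i, x0 in enumerate(x_eval):
        # Kernel weights centred at the evaluation point
        w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)
        # Design matrix for a local first-order (linear) fit around x0
        X = np.column_stack([np.ones_like(x_train), x_train - x0])
        W = np.diag(w)
        # Weighted least squares: beta = (X'WX)^{-1} X'Wy
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
        estimates[i] = beta[0]  # intercept = fitted conditional mean at x0
    return estimates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, 400)
    y = np.sin(x) + 0.3 * rng.standard_normal(400)   # toy data for the example
    grid = np.linspace(-3, 3, 50)
    m_hat = local_linear_mean(x, y, grid, bandwidth=0.4)
    # The conditional variance can be estimated analogously by applying the
    # same estimator to the squared residuals.
```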
Abstract:
Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters representing different characteristics of an object, due to differing undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to resemble the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on the Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
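One standard Frobenius-norm distance between linear subspaces, viewed as points on a Grassmann manifold, is the projection metric

\[
d(\mathcal{X},\mathcal{Y}) \;=\; \bigl\lVert X X^{\mathsf T} - Y Y^{\mathsf T} \bigr\rVert_{F},
\]

where X and Y are orthonormal bases of the two local linear subspaces. This is offered only as a generic example of the kind of distance the abstract refers to; the paper's reconstruction-error-based definition may differ in detail.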
Abstract:
Low-surface-brightness galaxies are gas-rich and yet have a low star formation rate; this is a well-known puzzle. The spiral features in these galaxies are weak and difficult to trace, although this aspect has not been studied much. These galaxies are known to be dominated by the dark matter halo from the innermost regions. Here, we perform a stability analysis for the galactic disc of UGC 7321, a low-surface-brightness, superthin galaxy, for which the various observational input parameters are available. We show that the disc is stable against local, linear axisymmetric and non-axisymmetric perturbations. The Toomre Q parameter values are found to be large (>> 1), mainly due to the low disc surface density and the high rotation velocity resulting from the dominant dark matter halo, which could explain the observed low star formation rate. For the stars-alone case, the disc shows finite swing amplification, but the addition of the dark matter halo suppresses that amplification almost completely. Even the inclusion of the low-dispersion gas, which constitutes a high disc mass fraction, does not help in causing swing amplification. This can explain why these galaxies do not show strong spiral features. Thus, the dynamical effect of a halo that is dominant from the inner regions can naturally explain why star formation and spiral features are largely suppressed in low-surface-brightness galaxies, making them different from high-surface-brightness galaxies.
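For reference, the Toomre criterion invoked above can be written in its textbook single-component forms (not the paper's specific multi-component treatment) as

\[
Q_{\star} \;=\; \frac{\kappa\,\sigma_R}{3.36\,G\,\Sigma_{\star}}, \qquad
Q_{\mathrm{gas}} \;=\; \frac{\kappa\,c_s}{\pi\,G\,\Sigma_{\mathrm{gas}}},
\]

with local axisymmetric stability for Q > 1; a low disc surface density \Sigma and a high epicyclic frequency \kappa, the latter set largely by the dominant dark matter halo, both push Q well above unity.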
Abstract:
In standard Gaussian Process regression, input locations are assumed to be noise-free. We present a simple yet effective GP model for training on input points corrupted by i.i.d. Gaussian noise. To make computations tractable we use a local linear expansion about each input point. This allows the input noise to be recast as output noise proportional to the squared gradient of the GP posterior mean. The input noise variances are inferred from the data as extra hyperparameters. They are trained alongside other hyperparameters by the usual method of maximisation of the marginal likelihood. Training uses an iterative scheme which alternates between optimising the hyperparameters and calculating the posterior gradient. Analytic predictive moments can then be found for Gaussian distributed test points. We compare our model to others over a range of different regression problems and show that it improves over current methods.
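The local linear expansion referred to above follows the standard first-order argument, which can be sketched in generic notation as

\[
y \;=\; f(x+\epsilon_x)+\epsilon_y \;\approx\; f(x) + \epsilon_x^{\mathsf T}\,\partial_x f(x) + \epsilon_y,
\]

so that, with \epsilon_x \sim \mathcal{N}(0,\Sigma_x) and \epsilon_y \sim \mathcal{N}(0,\sigma_y^2), the effective output-noise variance becomes

\[
\sigma_y^{2} \;+\; \partial_x \bar f(x)^{\mathsf T}\,\Sigma_x\,\partial_x \bar f(x),
\]

i.e. proportional to the squared gradient of the GP posterior mean \bar f, as stated in the abstract.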
Abstract:
To address the shortcomings of neighbour selection in locally linear embedding (LLE) under the Euclidean distance, we start from its definition and introduce a tangent-space distance to improve the neighbour-selection method, so that the local-linearity requirement of LLE is better satisfied. The paper uses residual variance to test performance; simulations on the S-curve and Swiss-roll data sets show that the tangent-space-distance-based method better represents the quality of the input/output mapping of the data.
Abstract:
In this paper, a Radial Basis Function neural network based AVR is proposed. A control strategy which generates local linear models from a global neural model on-line is used to derive controller feedback gains based on the Generalised Minimum Variance technique. Testing is carried out on a micromachine system which enables evaluation of practical implementation of the scheme. Constraints imposed by gathering training data, computational load, and memory requirements for the training algorithm are addressed.
Abstract:
The aim of this thesis is to narrow the gap between two different control techniques: continuous control and discrete event control (DES). This gap can be reduced by the study of hybrid systems, and by interpreting the majority of large-scale systems as hybrid systems. In particular, when looking deeply into a process, it is often possible to identify interaction between discrete and continuous signals. Hybrid systems are systems that have both continuous and discrete signals. Continuous signals are generally assumed to be continuous and differentiable in time, whereas discrete signals are neither continuous nor differentiable in time due to their abrupt changes. Continuous signals often represent measurements of natural physical magnitudes such as temperature, pressure, etc. Discrete signals are normally artificial signals operated by human artefacts, such as current, voltage, light, etc. Typical processes modelled as hybrid systems are production systems, chemical processes, or continuous production in which time and continuous measures interact with the transport and stock inventory system. Complex systems such as manufacturing lines are hybrid in a global sense; they can be decomposed into several subsystems and their links. Another motivation for the study of hybrid systems is the set of tools developed in other research domains. These tools benefit from the use of temporal logic for the analysis of several properties of hybrid system models, and use it to design systems and controllers which satisfy physical or imposed restrictions. This thesis focuses on particular types of systems with discrete and continuous signals in interaction, in which hard non-linearities such as hysteresis, jumps in the state, limit cycles, etc. can be modelled, together with their possible non-deterministic future behaviour, expressed by an interpretable model description. The hybrid systems treated in this work are systems with several discrete states, always fewer than thirty (beyond which the problem can become NP-hard), and continuous dynamics evolving according to an expression with constant vectors or matrices K_i ∈ R^n acting on the state vector X. In several states the continuous evolution can have K_i = 0. In this formulation, the mathematics can express a time-invariant linear system. By using this expression locally, the combination of several local linear models makes it possible to represent non-linear systems, and through the interaction with the discrete events of the system the model can compose non-linear hybrid systems. In particular, multistage processes with fast continuous dynamics are well represented by the proposed methodology. State vectors with more than two components, such as third-order models or higher, are well approximated by the proposed approximation. Flexible belt transmissions, chemical reactions with an initial start-up phase, and mobile robots with significant friction are examples of physical systems which profit from the accuracy of the proposed methodology. The motivation of this thesis is to obtain a solution that can control and drive hybrid systems from the origin or starting point to the goal. How to obtain this solution, and which is the best solution in terms of a cost function subject to the physical restrictions and control actions, is analysed. Hybrid systems that have several possible states, different ways to drive the system to the goal and different continuous control signals are the problems that motivate this research.
The requirements of the system on which we work are: a model that can represent the behaviour of the non-linear system, and that enables the prediction of its possible future behaviour, in order to apply a supervisor which decides the optimal and safe action to drive the system toward the goal. Specific problems that can be addressed by the use of this kind of hybrid model are: the unity of order; controlling the system along a reachable path; controlling the system along a safe path; optimising the cost function; and modularity of control. The proposed model solves the specified problems in the switching-models problem, the initial-condition calculus and the unity of the order of the models. Continuous and discrete phenomena are represented in linear hybrid models, defined with an eight-tuple of parameters to model different types of hybrid phenomena. Applying a transformation over the state vector, for an LTI system we obtain from a two-dimensional state-space model a single parameter, alpha, which still retains the dynamical information. Combining this parameter with the system output, a complete description of the system is obtained in the form of a graph in polar representation. The Takagi-Sugeno type III fuzzy model includes a linear time-invariant (LTI) model for each local model; the fuzzification of the different LTI local models gives as a result a non-linear time-invariant model. In our case the output and the alpha measure govern the membership functions. Hybrid systems control is a huge task: the process needs to be guided from the starting point to the desired end point, passing through different specific states and points along the trajectory. The system can be structured in different levels of abstraction, and the control of hybrid systems in three layers, from planning the process to producing the actions: the planning, process and control layers. In this case the algorithms will be applied to robotics, a domain where improvements are well accepted; it is expected to find simple repetitive processes for which the extra effort in complexity can be compensated by some cost reductions. It may also be interesting to apply some control optimisation to processes such as fuel injection, DC-DC converters, etc. In order to apply the RW theory of discrete event systems to a hybrid system, we must abstract the continuous signals and project the events generated by these signals, to obtain new sets of observable and controllable events. Ramadge & Wonham's theory, along with the TCT software, gives a controllable sublanguage of the legal language generated for a Discrete Event System (DES). Continuous abstraction transforms predicates over continuous variables into controllable or uncontrollable events, and modifies the sets of uncontrollable, controllable, observable and unobservable events. Continuous signals introduce virtual events into the system when they cross the bound limits. If such an event is deterministic, it can be projected. It is necessary to determine the controllability of the event in order to assign it to the corresponding set of controllable, uncontrollable, observable or unobservable events. Finding optimal trajectories in order to minimise some cost function is the goal of the modelling procedure. A mathematical model of the system allows the user to apply mathematical techniques to this expression: to minimise a specific cost function, to obtain optimal controllers and to approximate a specific trajectory.
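As a rough illustration of the Takagi-Sugeno-style blending of local LTI models described above, the following sketch (with hypothetical membership functions, matrices and a placeholder scheduling variable, not the thesis implementation) shows how local linear dynamics can be combined into a single non-linear update:

```python
import numpy as np

def gaussian_membership(z, centre, width):
    """Unnormalised Gaussian membership value for the scheduling variable z."""
    return np.exp(-0.5 * ((z - centre) / width) ** 2)

def ts_step(x, u, local_models, centres, width, dt=0.01):
    """One integration step of a Takagi-Sugeno blend of local LTI models.

    x : state vector, u : input, local_models : list of (A_i, B_i) pairs.
    The scheduling variable here is simply the first state component,
    a placeholder for the output/alpha measure used in the thesis.
    """
    z = x[0]
    weights = np.array([gaussian_membership(z, c, width) for c in centres])
    weights = weights / weights.sum()          # normalised memberships
    # Weighted combination of the local linear vector fields
    dxdt = sum(w * (A @ x + B @ u) for w, (A, B) in zip(weights, local_models))
    return x + dt * dxdt                       # explicit Euler update

if __name__ == "__main__":
    A1, B1 = np.array([[0.0, 1.0], [-1.0, -0.5]]), np.array([[0.0], [1.0]])
    A2, B2 = np.array([[0.0, 1.0], [-4.0, -0.2]]), np.array([[0.0], [1.0]])
    x = np.array([1.0, 0.0])
    for _ in range(100):
        x = ts_step(x, np.array([0.0]), [(A1, B1), (A2, B2)],
                    centres=[-1.0, 1.0], width=1.0)
```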
The combination of dynamic programming with Bellman's principle of optimality gives us the procedure to solve the minimum-time trajectory problem for hybrid systems. The problem is harder when there is interaction between adjacent states. In hybrid systems the problem is to determine the partial set points to be applied to the local models. An optimal controller can be implemented in each local model in order to ensure the minimisation of the local costs. The solution of this problem must give us the trajectory the system is to follow, a trajectory marked by a set of set points that force the system to pass through them. Several ways are possible to drive the system from the starting point Xi to the end point Xf. Different ways are interesting in different senses: dynamics, minimum number of states, approximation at set points, etc. These ways need to be safe, viable and reachable (RchW), and only one of them is to be applied, normally the best one, which minimises the proposed cost function. A Reachable Way, that is, a way that is both controllable and safe, will be evaluated in order to determine which one minimises the cost function. The contribution of this work is a complete framework for working with the majority of hybrid systems: the procedures to model, control and supervise are defined and explained, and their use is demonstrated. Also explained is the procedure to model the systems to be analysed for automatic verification. Great improvements were obtained by using this methodology in comparison with other piecewise linear approximations, and it is demonstrated that in particular cases this methodology can provide the best approximation. The most important contribution of this work is the alpha approximation for non-linear systems with fast dynamics; while this kind of process is not typical, in such cases the alpha approximation is the best linear approximation to use, and it gives a compact representation.
Abstract:
Using annual observations on industrial production over the last three centuries, and on GDP over a 100-year period, we seek an historical perspective on the forecastability of these UK output measures. The series are dominated by strong upward trends, so we consider various specifications of the trend, including the local linear trend structural time-series model, which allows the level and slope of the trend to vary. Our results are not unduly sensitive to how the trend in the series is modelled: the average sizes of the forecast errors of all models, and the wide span of the prediction intervals, attest to a great deal of uncertainty in the economic environment. It appears that, from an historical perspective, the postwar period has been relatively more forecastable.
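For reference, the local linear trend structural time-series model mentioned above has the standard form

\[
y_t = \mu_t + \varepsilon_t, \qquad
\mu_{t+1} = \mu_t + \beta_t + \eta_t, \qquad
\beta_{t+1} = \beta_t + \zeta_t,
\]

where \varepsilon_t, \eta_t and \zeta_t are mutually uncorrelated disturbances; non-zero variances for \eta_t and \zeta_t allow the level \mu_t and the slope \beta_t of the trend to evolve over time.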
Abstract:
This paper presents a new multi-model identification technique in ANFIS for nonlinear systems. In this technique, the structure used is the Takagi-Sugeno fuzzy structure, in which the consequents are local linear models that represent the system at different operating points and the antecedents are membership functions whose adjustment is carried out by the learning phase of the neuro-fuzzy ANFIS technique. The models that represent the system at different operating points can be found with linearization techniques such as the Least Squares method, which is robust to noise and simple to apply. The fuzzy system is responsible for indicating the proportion of each model that should be used, via the membership functions. The membership functions can be adjusted by ANFIS with the use of neural network algorithms, such as error back-propagation, so that the models found for each region are correctly interpolated and define the contribution of each model for the possible inputs to the system. In multi-model approaches, the definition of the contribution of the models is known as the metric and, since this paper is based on ANFIS, it is here called the ANFIS metric. In this way, the ANFIS metric is used to interpolate the various models, composing the system to be identified. Differing from traditional ANFIS, the proposed technique necessarily represents the system in several well-defined regions by unaltered models whose weighted activation is given by the membership functions. The selection of regions for the application of the Least Squares method is carried out manually, from a graphical analysis of the system behaviour or from the physical characteristics of the plant. This selection serves as a basis for the linear-model-definition technique and for generating the initial configuration of the membership functions. The experiments are conducted on a teaching tank with multiple sections, designed and built to show the characteristics of the technique. The results from this tank illustrate the performance achieved by the technique in the identification task using ANFIS configurations, comparing the developed technique with various simple-metric models and with the NNARX technique, also adapted for identification.
Abstract:
A numerical study of the mass conservation of MAC-type methods for viscoelastic free-surface flows is presented. We use an implicit formulation which allows for larger time steps, and therefore the time-marching schemes used to advect the free-surface marker particles have to be accurate in order to preserve the good mass conservation properties of this methodology. We then present an improvement obtained by using a Runge-Kutta scheme coupled with a local linear extrapolation at the free surface. A thorough study of the viscoelastic impacting drop problem, for both Oldroyd-B and XPP fluid models, is presented, investigating the influence of the time step, grid spacing and other model parameters on the overall mass conservation of the method. Furthermore, an unsteady fountain flow is also simulated to illustrate the low mass conservation error obtained.
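The time-marching step described above can be illustrated schematically; the sketch below is a generic second-order Runge-Kutta (midpoint) update of marker particles with an analytic placeholder velocity field, not the authors' implementation, in which the grid-based velocity with local linear extrapolation at the free surface would replace the placeholder:

```python
import numpy as np

def velocity(pos, t):
    """Placeholder analytic velocity field (an assumption for this example):
    a simple rotating field standing in for the interpolated MAC grid velocity."""
    x, y = pos
    return np.array([-y, x])

def advect_markers_rk2(markers, t, dt):
    """Second-order (midpoint) Runge-Kutta update of free-surface marker particles.

    markers : (N, 2) array of particle positions.
    In the actual method the velocity at surface markers would be obtained by
    a local linear extrapolation of the grid velocity across the free surface;
    here `velocity` is an analytic stand-in.
    """
    new_markers = np.empty_like(markers)
    for i, p in enumerate(markers):
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)   # midpoint evaluation
        new_markers[i] = p + dt * k2
    return new_markers

if __name__ == "__main__":
    markers = np.array([[1.0, 0.0], [0.0, 1.0]])
    for step in range(100):
        markers = advect_markers_rk2(markers, t=step * 0.01, dt=0.01)
```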