924 results for non-stationary loads


Relevance: 80.00%

Abstract:

Nonlinear, non-stationary signals are commonly found in a variety of disciplines such as biology, medicine, geology and financial modeling. The complexity (e.g. nonlinearity and non-stationarity) of such signals and their low signal-to-noise ratios often make it a challenging task to use them in critical applications. In this paper we propose a new neural-network-based technique to address those problems. We show that a feed-forward, multi-layered neural network can conveniently capture the states of a nonlinear system in its connection weight-space, after a process of supervised training. The performance of the proposed method is investigated via computer simulations.
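A minimal sketch of the idea (not the authors' code): a one-hidden-layer feed-forward network trained by gradient descent to predict the next sample of a noisy, non-stationary signal from a sliding window of past samples. The window length, layer size and test signal are illustrative assumptions.

```python
# Sketch: feed-forward net learns a nonlinear, non-stationary signal's
# dynamics in its weights via supervised next-sample prediction.
import numpy as np

rng = np.random.default_rng(0)

# Non-stationary test signal: a sinusoid with drifting frequency plus noise.
t = np.arange(2000)
signal = np.sin(2 * np.pi * (0.01 + 5e-6 * t) * t) + 0.1 * rng.standard_normal(t.size)

window = 10                                   # past samples used as input
X = np.array([signal[i:i + window] for i in range(len(signal) - window)])
y = signal[window:]                           # target: the next sample

# One hidden layer with tanh units; the weights hold the learned "state".
W1 = 0.1 * rng.standard_normal((window, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal((16, 1));      b2 = np.zeros(1)

lr = 0.01
for epoch in range(200):
    h = np.tanh(X @ W1 + b1)                  # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - y                            # MSE gradient via backprop
    g2 = h.T @ err[:, None] / len(y)
    gh = (err[:, None] * W2.T) * (1 - h ** 2)
    g1 = X.T @ gh / len(y)
    W2 -= lr * g2; b2 -= lr * err.mean()
    W1 -= lr * g1; b1 -= lr * gh.mean(axis=0)

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
print("final MSE:", np.mean((pred - y) ** 2))
```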

Relevance: 80.00%

Abstract:

Online model order complexity estimation remains one of the key problems in neural network research. The problem is further exacerbated in situations where the underlying system generator is non-stationary. In this paper, we introduce a novelty criterion for resource allocating networks (RANs) which is capable of being applied to both stationary and slowly varying non-stationary problems. The deficiencies of existing novelty criteria are discussed and the relative performances are demonstrated on two real-world problems: electricity load forecasting and exchange rate prediction.
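For context, a sketch of the classic RAN novelty test (Platt's original criterion, which the paper improves upon, not the paper's own): a new radial basis unit is allocated only when the prediction error is large and the input lies far from every existing centre. The thresholds eps and delta are illustrative.

```python
# Classic resource-allocating-network growth rule (sketch).
import numpy as np

def ran_step(x, y, centres, weights, widths, eps=0.05, delta=0.5):
    """One online step; returns the possibly grown (centres, weights, widths)."""
    if len(centres) == 0:
        dist, y_hat = np.inf, 0.0
    else:
        d = np.linalg.norm(np.asarray(centres) - x, axis=1)
        phi = np.exp(-(d / np.asarray(widths)) ** 2)    # Gaussian activations
        y_hat = float(np.dot(weights, phi))
        dist = d.min()
    if abs(y - y_hat) > eps and dist > delta:           # novelty: grow the net
        centres.append(np.array(x, dtype=float))
        weights.append(y - y_hat)                        # new unit absorbs the error
        widths.append(max(dist, delta))
    return centres, weights, widths

# usage: centres, weights, widths = ran_step(x_t, y_t, centres, weights, widths)
```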

Relevance: 80.00%

Abstract:

It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
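To make the variance-estimation idea concrete, here is a hedged single-component reduction of the Mixture Density Network objective: the network emits a mean and a log-variance per input and is trained by minimising the Gaussian negative log-likelihood. The helper names are mine, not from the paper.

```python
# Maximum-likelihood training signal for an input-dependent Gaussian.
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of y under N(mu, exp(log_var)), averaged."""
    return 0.5 * np.mean(log_var + (y - mu) ** 2 / np.exp(log_var))

def gaussian_nll_grads(y, mu, log_var):
    """Gradients w.r.t. the network outputs, to backpropagate through an MLP."""
    inv_var = np.exp(-log_var)
    d_mu = (mu - y) * inv_var / y.size            # pulls the mean toward y
    d_log_var = 0.5 * (1 - (y - mu) ** 2 * inv_var) / y.size
    return d_mu, d_log_var
```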

Relevance: 80.00%

Abstract:

This paper considers the problem of extracting the relationships between two time series in a non-linear, non-stationary environment with Hidden Markov Models (HMMs). We describe an algorithm which is capable of identifying associations between variables. The method is applied to both synthetic and real data. We show that HMMs are capable of modelling the oil drilling process and that they outperform existing methods.

Relevance: 80.00%

Abstract:

Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models (HMMs) to identify the lag (or delay) between different variables for such data. We first present a method using maximum likelihood estimation and propose a simple algorithm which is capable of identifying associations between variables. We also adopt an information-theoretic approach and develop a novel procedure for training HMMs to maximise the mutual information between delayed time series. Both methods are successfully applied to real data. We model the oil drilling process with HMMs and estimate a crucial parameter, namely the lag for return.
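A skeleton of the lag-identification step, with a simple linear-Gaussian log-likelihood standing in for the paper's HMM likelihood: score each candidate lag and keep the maximiser. The function names and the scoring model are illustrative assumptions.

```python
# Maximum-likelihood lag scan (linear-Gaussian stand-in for the HMM score).
import numpy as np

def lag_score(x, y, lag):
    """Average log-likelihood of y[t] given x[t - lag] under a fitted linear model."""
    xs, ys = x[:len(x) - lag], y[lag:]
    a, b = np.polyfit(xs, ys, 1)                  # fit y ~ a*x + b by least squares
    var = np.mean((ys - (a * xs + b)) ** 2) + 1e-12
    return -0.5 * (np.log(2 * np.pi * var) + 1)   # Gaussian MLE log-likelihood

def estimate_lag(x, y, max_lag=50):
    return max(range(1, max_lag + 1), key=lambda L: lag_score(x, y, L))
```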

Relevance: 80.00%

Abstract:

Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models to identify the lag (or delay) between different variables for such data. Adopting an information-theoretic approach, we develop a procedure for training HMMs to maximise the mutual information (MMI) between delayed time series. The method is used to model the oil drilling process. We show that cross-correlation gives no information and that the MMI approach outperforms maximum likelihood.
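The objective itself can be illustrated directly: a histogram estimate of the mutual information between the delayed series, scanned over candidate lags. The paper maximises this quantity through the HMM training rather than by a direct scan; the bin count and lag range here are illustrative.

```python
# Histogram-based mutual information between delayed series (sketch).
import numpy as np

def mutual_info(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                          # joint probabilities
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)      # marginals
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def best_lag_by_mi(x, y, max_lag=50):
    return max(range(1, max_lag + 1),
               key=lambda L: mutual_info(x[:len(x) - L], y[L:]))
```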

Relevance: 80.00%

Abstract:

This Letter addresses image segmentation via a generative model approach. A Bayesian network (BN) in the space of dyadic wavelet transform coefficients is introduced to model texture images. The model is similar to a hidden Markov model (HMM), but with non-stationary transition conditional probability distributions. It is composed of discrete hidden variables and observable Gaussian outputs for wavelet coefficients. In particular, the Gabor wavelet transform is considered. The introduced model is compared with the simplest joint Gaussian probabilistic model for Gabor wavelet coefficients for several textures from the Brodatz album [1]. The comparison is based on cross-validation and includes probabilistic model ensembles instead of single models. In addition, the robustness of the models to additive Gaussian noise is investigated. We further study the feasibility of the introduced generative model for image segmentation in the novelty detection framework [2]. Two examples are considered: (i) sea surface pollution detection from intensity images and (ii) segmentation of still images with varying illumination across the scene.
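A hedged sketch of the feature-extraction stage only (the generative model sits on top of these coefficients): a small real Gabor filter bank applied to an image. The kernel parameters are illustrative, and scipy is assumed for the convolution.

```python
# Gabor filter-bank responses as texture features (sketch).
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=15, wavelength=4.0, theta=0.0, sigma=3.0):
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xx * np.cos(theta) + yy * np.sin(theta)    # rotate coordinates
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)   # oriented sinusoid
    return envelope * carrier

def gabor_responses(image, n_orientations=4):
    """Responses at evenly spaced orientations; one map per orientation."""
    return [fftconvolve(image, gabor_kernel(theta=k * np.pi / n_orientations),
                        mode='same') for k in range(n_orientations)]
```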

Relevance: 80.00%

Abstract:

This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven-variable system that comprises, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non-stationary, stationary and error-correction forms and are estimated using OLS. The imposition of Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find that there is some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non-stationary specification outperformed those of the stationary and error-correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error-correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is very weak.
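A minimal sketch of the OLS building block common to the compared specifications: least-squares estimation of a VAR(p) and a one-step-ahead forecast. The lag order and array layout are illustrative, not the paper's setup.

```python
# OLS estimation of a VAR(p): Y[t] = c + A_1 Y[t-1] + ... + A_p Y[t-p] + e[t].
import numpy as np

def fit_var(Y, p=1):
    """Y: (T, k) array. Returns intercept c (k,) and lag matrices A (p, k, k)."""
    T, k = Y.shape
    X = np.hstack([np.ones((T - p, 1))] +
                  [Y[p - i - 1:T - i - 1] for i in range(p)])   # lagged regressors
    B, *_ = np.linalg.lstsq(X, Y[p:], rcond=None)               # (1 + p*k, k)
    c = B[0]
    A = B[1:].T.reshape(k, p, k).transpose(1, 0, 2)             # A[i] is lag i+1
    return c, A

def forecast_one(Y, c, A):
    """One-step-ahead forecast from the last p observations of Y."""
    p = A.shape[0]
    return c + sum(A[i] @ Y[-i - 1] for i in range(p))
```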

Relevance: 80.00%

Abstract:

This thesis is a study of three techniques to improve the performance of some standard forecasting models, applied to energy demand and prices. We focus on forecasting demand and price one day ahead. First, the wavelet transform was used as a pre-processing procedure with two approaches: multicomponent forecasts and direct forecasts. We have empirically compared these approaches and found that the former consistently outperformed the latter. Second, adaptive models were introduced to continuously update model parameters in the testing period by combining filters with standard forecasting methods. Among these adaptive models, the adaptive LR-GARCH model was proposed for the first time in this thesis. Third, with regard to the noise distributions of the dependent variables in the forecasting models, we used either Gaussian or Student-t distributions. This thesis proposes a novel algorithm to infer the parameters of Student-t noise models. The method is an extension of earlier work for models that are linear in their parameters to the non-linear multilayer perceptron. The proposed method therefore broadens the range of models that can use a Student-t noise distribution. Because these techniques cannot stand alone, they must be combined with prediction models to improve performance. We combined them with some standard forecasting models: multilayer perceptron, radial basis functions, linear regression, and linear regression with GARCH. These techniques and forecasting models were applied to two datasets from the UK energy markets: daily electricity demand (which is stationary) and gas forward prices (non-stationary). The results showed that these techniques provided good improvements in prediction performance.
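A sketch of the multicomponent-forecast approach, assuming the PyWavelets package (pywt): decompose the series, forecast each reconstructed wavelet component separately, and sum the component forecasts. The per-component AR(1) model here is a naive stand-in for the MLP/RBF models used in the thesis.

```python
# Wavelet multicomponent forecasting (sketch).
import numpy as np
import pywt

def multicomponent_forecast(series, wavelet='db4', level=3):
    coeffs = pywt.wavedec(series, wavelet, level=level)
    components = []
    for i in range(len(coeffs)):
        # Reconstruct component i alone by zeroing every other coefficient band.
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, wavelet)[:len(series)])
    forecast = 0.0
    for comp in components:                      # AR(1) forecast per component
        phi = np.dot(comp[:-1], comp[1:]) / (np.dot(comp[:-1], comp[:-1]) + 1e-12)
        forecast += phi * comp[-1]
    return forecast                              # sum of component forecasts
```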

Relevance: 80.00%

Abstract:

One of the major problems associated with communication via a loudspeaking telephone (LST) is that, using analogue processing, duplex transmission is limited to low-loss lines and produces a low acoustic output. An architecture for an instrument has been developed and tested which uses digital signal processing to provide duplex transmission between a LST and a telephone handset over most of the B.T. network. Digital adaptive filters are used in the duplex LST to cancel coupling between the loudspeaker and microphone, and across the transmit to receive paths of the 2-to-4-wire converter. Normal movement of a person in the acoustic path causes a loss of stability by increasing the level of coupling from the loudspeaker to the microphone, since there is a lag associated with the adaptive filters learning about a non-stationary path. Loop stability and the level of sidetone heard by the handset user are controlled by a microprocessor, which continually monitors the system and regulates the gain. The result is a system which offers the best compromise available based on a set of measured parameters. A theory has been developed which gives the loop stability requirements based on the error between the parameters of the filter and those of the unknown path. The programme to develop a low-cost adaptive filter for the LST produced a unique architecture which has a number of features not available in any similar system. These include automatic compensation for the rate of adaptation over a 36 dB range of output level, 4 rates of adaptation (with a maximum of 465 dB/s), plus the ability to cascade up to 4 filters without loss of performance. A theory has also been developed to determine the adaptation which can be achieved using finite-precision arithmetic. This enabled the development of an architecture which distributed the normalisation required to achieve the optimum rate of adaptation over the useful input range. Comparison of theory and measurement for the adaptive filter shows very close agreement. A single experimental LST was built and tested on connections to handset telephones over the BT network. The LST demonstrated that duplex transmission was feasible using signal processing and produced a more comfortable means of communication between people than methods employing deep voice-switching to regulate the local-loop gain. However, with the current level of processing power, it is not a panacea, and attention must be directed toward the physical acoustic isolation between loudspeaker and microphone.
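Not the thesis hardware, but a sketch of the core algorithm it builds on: a normalised LMS adaptive filter estimating the loudspeaker-to-microphone echo path and subtracting the predicted echo. The filter length and step size are illustrative.

```python
# Normalised LMS echo cancellation (sketch).
import numpy as np

def nlms_echo_canceller(far_end, mic, n_taps=64, mu=0.5, eps=1e-8):
    """Returns the echo-cancelled microphone signal."""
    w = np.zeros(n_taps)                    # adaptive estimate of the echo path
    x = np.zeros(n_taps)                    # delay line of far-end samples
    out = np.zeros_like(mic)
    for n in range(len(mic)):
        x = np.roll(x, 1); x[0] = far_end[n]
        echo_hat = w @ x                    # predicted echo at the microphone
        e = mic[n] - echo_hat               # residual = near-end speech + error
        w += mu * e * x / (x @ x + eps)     # normalised LMS update
        out[n] = e
    return out
```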

Relevance: 80.00%

Abstract:

This paper presents some forecasting techniques for energy demand and price prediction, one day ahead. These techniques combine wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches of combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results showed that the forecasting accuracy is significantly improved by using the WT and adaptive models. The best models on the electricity demand/gas price forecast are the adaptive MLP/GARCH with the multicomponent forecast; their MSEs are 0.02314 and 0.15384 respectively.
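A minimal sketch of the adaptive-model idea for the linear regression case: treat the weights as the hidden state of a random-walk model and update them with a Kalman filter as each test observation arrives. The noise variances q and r are illustrative assumptions.

```python
# Kalman-filter update of regression weights (one online step).
import numpy as np

def kalman_weight_update(w, P, x, y, q=1e-4, r=1.0):
    """Update weights w with covariance P, given new input x and target y."""
    P = P + q * np.eye(len(w))              # random-walk drift of the parameters
    S = x @ P @ x + r                       # innovation variance
    K = P @ x / S                           # Kalman gain
    w = w + K * (y - w @ x)                 # correct by the prediction error
    P = P - np.outer(K, x) @ P              # posterior covariance
    return w, P
```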

Relevance: 80.00%

Abstract:

Amongst all the objectives in the study of time series, uncovering the dynamic law of its generation is probably the most important. When the underlying dynamics are not available, time series modelling consists of developing a model which best explains a sequence of observations. In this thesis, we consider hidden space models for analysing and describing time series. We first provide an introduction to the principal concepts of hidden state models and draw an analogy between hidden Markov models and state space models. Central ideas such as hidden state inference and parameter estimation are reviewed in detail. A key part of multivariate time series analysis is identifying the delay between different variables. We present a novel approach for time delay estimation in a non-stationary environment. The technique makes use of hidden Markov models and we demonstrate its application to estimating a crucial parameter in the oil industry. We then focus on hybrid models that we call dynamical local models. These models combine and generalise hidden Markov models and state space models. Probabilistic inference is unfortunately computationally intractable, and we show how to make use of variational techniques to approximate the posterior distribution over the hidden state variables. Experimental simulations on synthetic and real-world data demonstrate the application of dynamical local models for segmenting a time series into regimes and providing predictive distributions.
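As a concrete instance of the hidden state inference reviewed here, a minimal forward (filtering) pass for a discrete HMM, computing p(state_t | observations up to t); the matrix shapes are illustrative.

```python
# HMM forward filtering (sketch).
import numpy as np

def hmm_filter(obs, pi, A, B):
    """pi: (S,) initial probs, A: (S, S) transitions, B: (S, V) emissions.
    obs: sequence of symbol indices. Returns (T, S) filtered state beliefs."""
    alpha = np.zeros((len(obs), len(pi)))
    belief = pi * B[:, obs[0]]
    alpha[0] = belief / belief.sum()
    for t in range(1, len(obs)):
        belief = (alpha[t - 1] @ A) * B[:, obs[t]]   # predict, then reweight
        alpha[t] = belief / belief.sum()             # normalise (filtering)
    return alpha
```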

Relevance: 80.00%

Abstract:

This thesis first considers the calibration and signal processing requirements of a neuromagnetometer for the measurement of human visual function. Gradiometer calibration using straight wire grids is examined and optimal grid configurations determined, given realistic constructional tolerances. Simulations show that for a gradiometer balance of 1:10⁴ and a wire spacing error of 0.25 mm, the achievable calibration accuracy is 0.3% for gain, 0.3 mm for position and 0.6° for orientation. Practical results with a 19-channel 2nd-order gradiometer-based system exceed this performance. The real-time application of adaptive reference noise cancellation filtering to running-average evoked response data is examined. In the steady state, the filter can be assumed to be driven by a non-stationary step input arising at epoch boundaries. Based on empirical measures of this driving step, an optimal progression for the filter time constant is proposed which improves upon fixed-time-constant filter performance. The incorporation of the time-derivatives of the reference channels was found to improve the performance of the adaptive filtering algorithm by 15-20% for unaveraged data, falling to 5% with averaging. The thesis concludes with a neuromagnetic investigation of evoked cortical responses to chromatic and luminance grating stimuli. The global magnetic field power of evoked responses to the onset of sinusoidal gratings was shown to have distinct chromatic and luminance sensitive components. Analysis of the results, using a single equivalent current dipole model, shows that these components arise from activity within two distinct cortical locations. Co-registration of the resulting current source localisations with MRI shows a chromatically responsive area lying along the midline within the calcarine fissure, possibly extending onto the lingual and cuneal gyri. It is postulated that this area is the human homologue of the primate cortical area V4.

Relevance: 80.00%

Abstract:

The thesis deals with the background, development and description of a mathematical stock control methodology for use within an oil and chemical blending company, where demand and replenishment lead-times are generally non-stationary. The stock control model proper relies on, as input, adaptive forecasts of demand determined for an economical forecast/replenishment period precalculated on an individual stock-item basis. The control procedure is principally that of the continuous review, reorder level type, where the reorder level and reorder quantity 'float', that is, each changes in accordance with changes in demand. Two versions of the Methodology are presented: a cost minimisation version and a service level version. Realising the importance of demand forecasts, four recognised variations of the Trigg and Leach adaptive forecasting routine are examined. A fifth variation, developed here, is proposed as part of the stock control methodology. The results of testing the cost minimisation version of the Methodology with historical data, by means of a computerised simulation, are presented together with a description of the simulation used. The performance of the Methodology is also compared favourably to a rule-of-thumb approach considered by the Company as an interim solution for reducing stock levels. The contribution of the work to the field of scientific stock control is felt to be significant for the following reasons: (1) the Methodology is designed specifically for use with non-stationary demand and for this reason alone appears to be unique; (2) the Methodology is unique in its approach and the cost-minimisation version is shown to work successfully with the demand data presented; (3) the Methodology and the thesis as a whole fill an important gap between complex mathematical stock control theory and practical application. A brief description of a computerised order processing/stock monitoring system, designed and implemented as a prerequisite for the Methodology's practical operation, is presented as an appendix.
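A sketch of the standard Trigg and Leach adaptive smoothing that the examined routines build on (not the thesis's fifth variation): the smoothing constant is set to the absolute tracking signal, so the forecast adapts faster when it drifts off target. The error-smoothing constant beta is illustrative.

```python
# Trigg-Leach adaptive-response-rate exponential smoothing (sketch).
import numpy as np

def trigg_leach(series, beta=0.2):
    """Adaptive exponential smoothing; beta smooths the error statistics."""
    forecast = series[0]
    e_smooth, abs_smooth = 0.0, 1e-12
    out = []
    for y in series:
        err = y - forecast
        e_smooth = beta * err + (1 - beta) * e_smooth        # smoothed error
        abs_smooth = beta * abs(err) + (1 - beta) * abs_smooth
        alpha = abs(e_smooth) / abs_smooth                   # tracking signal in [0, 1]
        forecast = forecast + alpha * err                    # adaptive-rate update
        out.append(forecast)
    return np.array(out)
```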

Relevance: 80.00%

Abstract:

Baths containing sulphuric acid as catalyst and others with selected secondary catalysts (methane sulphonic acid (MSA), SeO2, a KBrO3/KIO3 mixture, indium, uranium and the commercial high-speed catalysts HEEF-25 and HEEF-405) were studied. The secondary catalysts influenced CCE, brightness and cracking. Chromium deposition mechanisms were studied in Part II using potentiostatic and potentiodynamic electroanalytical techniques under stationary and hydrodynamic conditions. Sulphuric acid as a primary catalyst and MSA, HEEF-25, HEEF-405 and sulphosalicylic acid as co-catalysts were explored for different rotation speeds and scan rates. The maximum current was resolved into diffusion-limited and kinetically limited components, and a contribution towards understanding the electrochemical mechanism is proposed. Reaction kinetics were further studied for H2SO4-, MSA- and methane disulphonic acid-catalysed systems and their influence on the reaction mechanisms elaborated. The charge transfer coefficient and electrochemical reaction rate orders for the first stage of the electrodeposition process were determined. A contribution was made towards understanding the influence of H2SO4 and MSA on the rate of hydrogen evolution. Anodic dissolution of chromium in chromic acid solution was studied with a number of techniques. An electrochemical dissolution mechanism is proposed, based on the results of rotating gold ring-disc experiments and scanning electron microscopy. Finally, significant increases in chromium electrodeposition rates under non-stationary conditions (PRC mode) were studied and a deposition mechanism is elaborated based on experimental data and theoretical considerations.