953 results for C32 - Time-Series Models


Relevance:

100.00%

Publisher:

Abstract:

In everyday life, different flows of customers arriving to avail themselves of some service at a service station are experienced. In some of these situations, congestion of items arriving for service is unavoidable, because an item cannot be serviced immediately on arrival. A queuing system can be described as customers arriving for service, waiting for service if it is not immediate, and, having waited for service, leaving the system after being served. Examples include shoppers waiting in front of checkout stands in a supermarket, programs waiting to be processed by a digital computer, ships in a harbour waiting to be unloaded, and persons waiting at a railway booking office. A queuing system is specified completely by the following characteristics: input or arrival pattern, service pattern, number of service channels, system capacity, queue discipline and number of service stages. The ultimate objective of solving queuing models is to determine the characteristics that measure the performance of the system.
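
The performance measures mentioned above can be made concrete for the simplest textbook case. The sketch below computes steady-state measures for an M/M/1 queue (Poisson arrivals, exponential service, one channel); the arrival and service rates are illustrative assumptions, not figures from the abstract.

```python
# Minimal sketch: steady-state performance measures of an M/M/1 queue.
# The rates lam (arrivals/min) and mu (services/min) are assumed examples.

def mm1_measures(lam, mu):
    """Steady-state measures for an M/M/1 queue; requires lam < mu."""
    assert lam < mu, "system must be stable (arrival rate < service rate)"
    rho = lam / mu            # server utilisation
    L = rho / (1 - rho)       # mean number of customers in the system
    W = 1 / (mu - lam)        # mean time a customer spends in the system
    Lq = rho ** 2 / (1 - rho) # mean queue length (excluding the one in service)
    Wq = rho / (mu - lam)     # mean waiting time in the queue
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

m = mm1_measures(lam=2.0, mu=5.0)  # e.g. 2 arrivals and 5 services per minute
```

Little's law (L = lambda * W) ties the measures together and is a quick sanity check on any such computation.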

Relevance:

100.00%

Publisher:

Abstract:

The classical methods of analysing time series by the Box-Jenkins approach assume that the observed series fluctuates around changing levels with constant variance; that is, the time series is assumed to be homoscedastic in nature. Financial time series, however, exhibit heteroscedasticity in the sense that they possess a non-constant conditional variance given the past observations. The analysis of financial time series therefore requires the modelling of such variances, which may depend on some time-dependent factors or on their own past values. This led to the introduction of several classes of models to study the behaviour of financial time series; see Taylor (1986), Tsay (2005) and Rachev et al. (2007). The class of models used to describe the evolution of conditional variances is referred to as stochastic volatility models. The stochastic models available to analyse the conditional variances are based on either normal or log-normal distributions. One of the objectives of the present study is to explore the possibility of employing some non-Gaussian distributions to model the volatility sequences and then to study the behaviour of the resulting return series. This led us to work on the related problem of statistical inference, which is the main contribution of the thesis.
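
As a point of reference for the conditional-variance idea, the sketch below simulates the standard log-normal stochastic volatility model (the Gaussian baseline the thesis departs from): returns with unit conditional mean variance scaled by an AR(1) log-variance. All parameter values are assumptions for illustration.

```python
import numpy as np

# Illustrative simulation of a log-normal stochastic volatility model:
#   r_t = exp(h_t / 2) * eps_t,   h_t = mu + phi (h_{t-1} - mu) + eta_t
# Parameters (mu, phi, sigma_eta) are assumed values, not from the thesis.
rng = np.random.default_rng(0)

def simulate_sv(n, mu=-1.0, phi=0.95, sigma_eta=0.2):
    h = np.empty(n)
    h[0] = mu
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
    r = np.exp(h / 2) * rng.standard_normal(n)  # returns given volatility
    return r, h

r, h = simulate_sv(5000)
# The returns are serially uncorrelated, yet their conditional variance
# exp(h_t) varies over time -- the heteroscedasticity described above.
```

Replacing the Gaussian innovations with a heavier-tailed law is the kind of non-Gaussian variant the study investigates.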

Relevance:

100.00%

Publisher:

Abstract:

We have applied time series analytical techniques to the flux of lava from an extrusive eruption. Tilt data acting as a proxy for flux are used in a case study of the May–August 1997 period of the eruption at Soufrière Hills Volcano, Montserrat. We justify the use of such a proxy by simple calibratory arguments. Three techniques of time series analysis are employed: spectral, spectrogram and wavelet methods. In addition to the well-known ~9-hour periodicity shown by these data, a previously unknown periodic flux variability is revealed by the wavelet analysis as a 3-day cycle of frequency modulation during June–July 1997, though the physical mechanism responsible is not clear. Such time series analysis has potential for other lava flux proxies at other types of volcanoes.
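
The spectral step of such an analysis can be sketched on synthetic data: a periodogram recovers a dominant ~9-hour cycle from a noisy tilt-like record. The sampling interval, record length and noise level below are assumptions for illustration, not the Montserrat data.

```python
import numpy as np

# Hedged sketch: recovering a dominant periodicity from a tilt-like series
# via the periodogram. The 10-minute sampling, 30-day span and noise level
# are assumed; only the ~9-hour cycle is taken from the abstract.
dt_hours = 1 / 6                       # one sample every 10 minutes
t = np.arange(0, 30 * 24, dt_hours)    # 30 days of data, time in hours
rng = np.random.default_rng(1)
tilt = np.sin(2 * np.pi * t / 9.0) + 0.3 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, d=dt_hours)          # cycles per hour
power = np.abs(np.fft.rfft(tilt - tilt.mean())) ** 2  # periodogram
peak_period = 1 / freqs[np.argmax(power)]             # hours per cycle
```

Spectrogram and wavelet methods extend this idea by localising the same peak in time, which is what reveals the slower 3-day modulation.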

Relevance:

100.00%

Publisher:

Abstract:

This paper exploits a structural time series approach to model the time pattern of multiple and resurgent food scares and their direct and cross-product impacts on consumer response. A structural time series Almost Ideal Demand System (STS-AIDS) is embedded in a vector error correction framework to allow for dynamic effects (VEC-STS-AIDS). Italian aggregate household data on meat demand is used to assess the time-varying impact of a resurgent BSE crisis (1996 and 2000) and the 1999 Dioxin crisis. The VEC-STS-AIDS model monitors the short-run impacts and performs satisfactorily in terms of residual diagnostics, overcoming the major problems encountered by the customary vector error correction approach.
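
The VEC-STS-AIDS model is considerably richer than anything shown here, but its vector-autoregressive building block can be illustrated in a few lines: estimating a VAR(1) coefficient matrix by least squares on simulated data. The data-generating matrix and noise level are assumptions for the sketch.

```python
import numpy as np

# Minimal numpy sketch of the VAR building block underlying vector error
# correction models: estimate A in y_t = A y_{t-1} + e_t by least squares.
# A_true and the noise scale are assumed values for illustration only.
rng = np.random.default_rng(2)
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.4]])
n = 2000
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A_true @ y[t - 1] + 0.1 * rng.standard_normal(2)

X, Y = y[:-1], y[1:]                              # regress y_t on y_{t-1}
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T    # OLS equation by equation
```

The error-correction form adds a term in the lagged levels, and the STS component lets intercepts and trends evolve over time, which is how the model tracks the scare-driven shifts in demand.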

Relevance:

100.00%

Publisher:

Abstract:

Tremor is a clinical feature characterized by oscillations of a part of the body. The detection and study of tremor is an important step in investigations seeking to explain underlying control strategies of the central nervous system under natural (or physiological) and pathological conditions. It is well established that tremorous activity is composed of deterministic and stochastic components. For this reason, the use of digital signal processing (DSP) techniques which take into account the nonlinearity and nonstationarity of such signals may bring into the signal analysis new information which is often obscured by traditional linear techniques (e.g. Fourier analysis). In this context, this paper introduces the application of the empirical mode decomposition (EMD) and Hilbert spectrum (HS), which are relatively new DSP techniques for the analysis of nonlinear and nonstationary time-series, to the study of tremor. Our results, obtained from the analysis of experimental signals collected from 31 patients with different neurological conditions, showed that the EMD could automatically decompose acquired signals into basic components, called intrinsic mode functions (IMFs), representing tremorous and voluntary activity. The identification of a physical meaning for IMFs in the context of tremor analysis suggests an alternative and new way of detecting tremorous activity. These results may be relevant for applications requiring automatic detection of tremor. Furthermore, the energy of IMFs was visualized as a function of time and frequency by means of the HS. This analysis showed that the variation of energy of tremorous and voluntary activity could be distinguished and characterized on the HS. Such results may be relevant for applications aiming to identify neurological disorders.
In general, both the EMD and the HS proved very useful for objective analysis of any kind of tremor and can therefore potentially be used for functional assessment.
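
The Hilbert-spectrum step applied to a single component can be sketched directly: the analytic signal yields instantaneous amplitude and frequency. In the paper the input would be an IMF produced by EMD; the synthetic 5 Hz "tremor" component and 1 kHz sampling rate below are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

# Sketch of the Hilbert-spectrum step: instantaneous amplitude and frequency
# of one oscillatory component. A synthetic 5 Hz sinusoid stands in for an
# IMF; real inputs would come from the EMD sifting procedure.
fs = 1000.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
imf = np.sin(2 * np.pi * 5.0 * t)  # stand-in for a tremor-band IMF

analytic = hilbert(imf)                         # analytic signal x + i*H[x]
amplitude = np.abs(analytic)                    # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))           # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz
```

Plotting amplitude against time and instantaneous frequency, for every IMF, is what builds the Hilbert spectrum used to separate tremorous from voluntary activity.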

Relevance:

100.00%

Publisher:

Abstract:

A predictability index was defined as the ratio of the variance of the optimal prediction to the variance of the original time series by Granger and Anderson (1976) and Bhansali (1989). A new simplified algorithm for estimating the predictability index is introduced and the new estimator is shown to be a simple and effective tool in applications of predictability ranking and as an aid in the preliminary analysis of time series. The relationship between the predictability index and the position of the poles and lag p of a time series which can be modelled as an AR(p) model are also investigated. The effectiveness of the algorithm is demonstrated using numerical examples including an application to stock prices.
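
For an AR(p) series the index reduces to the ratio of the variance of the one-step-ahead least-squares prediction to the variance of the series, so a simple estimator can be sketched directly; for a pure AR(1) with coefficient phi the theoretical value is phi squared. The simulated series and its parameters are assumptions, and this plain OLS estimator is an illustration, not the paper's algorithm.

```python
import numpy as np

# Sketch of a predictability index P = Var(y_hat) / Var(y) for an AR(p)
# series, with the predictor fitted by least squares. This is illustrative;
# the paper proposes its own simplified estimation algorithm.
rng = np.random.default_rng(3)

def predictability_index(y, p):
    """Ratio of one-step AR(p) prediction variance to series variance."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])  # lags 1..p
    coef = np.linalg.lstsq(X, y[p:], rcond=None)[0]
    y_hat = X @ coef
    return y_hat.var() / y.var()

phi, n = 0.8, 20000
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

P = predictability_index(y, p=1)   # theory for AR(1): P = phi**2 = 0.64
```

An index near 1 marks a highly predictable series, near 0 a nearly white one, which is what makes it usable for ranking candidate series.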

Relevance:

100.00%

Publisher:

Abstract:

A new structure of Radial Basis Function (RBF) neural network called the Dual-orthogonal RBF Network (DRBF) is introduced for nonlinear time series prediction. The hidden nodes of a conventional RBF network compute the Euclidean distance between the network input vector and the centres, and the node responses are radially symmetrical. But in time series prediction, where the system input vectors are lagged system outputs and are usually highly correlated, the Euclidean distance measure may not be appropriate. The DRBF network modifies the distance metric by introducing a classification function based on the estimation data set. Training the DRBF network consists of two stages: learning the classification-related basis functions and the important input nodes, followed by selecting the regressors and learning the weights of the hidden nodes. In both stages, a forward Orthogonal Least Squares (OLS) selection procedure is applied, initially to select the important input nodes and then to select the important centres. Simulation results of single-step and multi-step ahead predictions over a test data set are included to demonstrate the effectiveness of the new approach.
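
The conventional baseline that the DRBF modifies can be sketched in a few lines: a Gaussian RBF network with Euclidean distances and least-squares output weights, predicting a time series from its lags. The chaotic logistic-map data, centre placement and width below are assumptions; the DRBF replaces this distance metric with a classification-based one and chooses centres by forward OLS rather than on a grid.

```python
import numpy as np

# Sketch of a conventional Euclidean-distance Gaussian RBF network for
# one-step time-series prediction from lagged outputs. Data (logistic map),
# centres and width are assumed; the paper's DRBF alters the metric and
# selects centres/regressors by forward OLS.

def make_lagged(y, p):
    X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    return X, y[p:]

def rbf_design(X, centres, width):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)  # Euclidean^2
    return np.exp(-d2 / (2 * width ** 2))

n = 500
y = np.empty(n)
y[0] = 0.2
for t in range(1, n):
    y[t] = 4 * y[t - 1] * (1 - y[t - 1])   # chaotic logistic map in [0, 1]

X, target = make_lagged(y, p=1)
X_tr, y_tr, X_te, y_te = X[:400], target[:400], X[400:], target[400:]
centres = np.linspace(0, 1, 12).reshape(-1, 1)   # naive grid of centres
Phi_tr = rbf_design(X_tr, centres, width=0.15)
w = np.linalg.lstsq(Phi_tr, y_tr, rcond=None)[0]  # output weights by LS
pred = rbf_design(X_te, centres, width=0.15) @ w
mse = np.mean((pred - y_te) ** 2)
```

Because lagged outputs are highly correlated, the Euclidean metric used here treats directions of the input space equally even when they carry very different information, which is the shortcoming motivating the DRBF.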

Relevance:

100.00%

Publisher:

Abstract:

The performance of various statistical models and commonly used financial indicators for forecasting securitised real estate returns are examined for five European countries: the UK, Belgium, the Netherlands, France and Italy. Within a VAR framework, it is demonstrated that the gilt-equity yield ratio is in most cases a better predictor of securitized returns than the term structure or the dividend yield. In particular, investors should consider in their real estate return models the predictability of the gilt-equity yield ratio in Belgium, the Netherlands and France, and the term structure of interest rates in France. Predictions obtained from the VAR and univariate time-series models are compared with the predictions of an artificial neural network model. It is found that, whilst no single model is universally superior across all series, accuracy measures and horizons considered, the neural network model is generally able to offer the most accurate predictions for 1-month horizons. For quarterly and half-yearly forecasts, the random walk with a drift is the most successful for the UK, Belgian and Dutch returns and the neural network for French and Italian returns. Although this study underscores market context and forecast horizon as parameters relevant to the choice of the forecast model, it strongly indicates that analysts should exploit the potential of neural networks and assess more fully their forecast performance against more traditional models.
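
The random-walk-with-drift benchmark that proved hard to beat at longer horizons is simple enough to state in code: the h-step-ahead forecast is the last observation plus h times the average historical one-step change. The price series below is an assumed toy example.

```python
import numpy as np

# The random-walk-with-drift benchmark: forecast = last value + h * drift,
# where drift is the mean one-step change over the sample.
# The price series is an assumed toy example, not data from the study.

def rw_drift_forecast(y, h):
    drift = (y[-1] - y[0]) / (len(y) - 1)  # mean one-step change
    return y[-1] + h * drift

prices = np.array([100.0, 101.0, 103.0, 102.0, 105.0])
f1 = rw_drift_forecast(prices, h=1)   # 105 + 1 * 1.25 = 106.25
f3 = rw_drift_forecast(prices, h=3)   # 105 + 3 * 1.25 = 108.75
```

Any candidate model, VAR or neural network, earns its keep only by beating this forecast out of sample, which is exactly the comparison the study runs horizon by horizon.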

Relevance:

100.00%

Publisher:

Abstract:

Identifying a periodic time-series model from environmental records, without imposing the positivity of the growth rate, does not necessarily respect the time order of the data observations. Consequently, subsequent observations, sampled in the environmental archive, can be inverted on the time axis, resulting in a non-physical signal model. In this paper an optimization technique with linear constraints on the signal model parameters is proposed that prevents time inversions. The activation conditions for this constrained optimization are based upon the physical constraint on the growth rate, namely that it cannot take values smaller than zero. The actual constraints are defined for polynomials and first-order splines as basis functions for the nonlinear contribution in the distance-time relationship. The method is compared with an existing method that eliminates the time inversions, and its noise sensitivity is tested by means of Monte Carlo simulations. Finally, the usefulness of the method is demonstrated on the measurements of the vessel density in a mangrove tree, Rhizophora mucronata, and the measurement of Mg/Ca ratios in a bivalve, Mytilus trossulus.
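
The core monotonicity idea can be sketched with first-order splines: fit a piecewise-linear distance-time model whose slope (the growth rate) is constrained to be non-negative, so fitted time never runs backwards. The knots, the synthetic age model and the noise level below are assumptions; the paper's activation conditions and constraint set are more involved.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hedged sketch: least-squares fit of a first-order-spline time-distance
# model with the per-interval slopes constrained to be >= 0, preventing
# time inversions. Synthetic data, knots and noise level are assumptions.
rng = np.random.default_rng(5)
d = np.linspace(0, 10, 60)                 # distance along the record
t_true = 0.5 * d + 0.3 * np.sin(d)         # monotone "true" age model
t_obs = t_true + 0.2 * rng.standard_normal(d.size)

knots = np.linspace(0, 10, 11)[:-1]        # left edges of unit intervals
# Basis: intercept + per-interval ramps; coefficient j is the slope on
# interval j, so bounding it below by 0 enforces a non-negative growth rate.
B = np.column_stack([np.ones_like(d)] + [np.clip(d - k, 0.0, 1.0) for k in knots])
lb = np.r_[-np.inf, np.zeros(knots.size)]  # intercept free, slopes >= 0
res = lsq_linear(B, t_obs, bounds=(lb, np.full(B.shape[1], np.inf)))
t_fit = B @ res.x                          # monotone non-decreasing fit
```

When the unconstrained least-squares solution already has non-negative slopes the bounds are inactive and nothing changes; the constraints only bite where the noise would otherwise produce a time inversion.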