927 results for Time-series Analysis
Abstract:
Particle flux in the ocean reflects ongoing biological and geological processes operating under the influence of the local environment. Estimation of this particle flux through sediment trap deployment is constrained by sampler accuracy, particle preservation, and swimmer distortion. Interpretation of specific particle flux is further constrained by indeterminate particle dispersion and the absence of a clear understanding of the sedimentary consequences of ecosystem activity. Nevertheless, the continuous and integrative properties of the particle trap measure, along with the logistic advantage of a long-term moored sampler, provide a set of strategic advantages that appear analogous to those underlying conventional oceanographic survey programs. Emboldened by this perception, several stations along the coast of Southern California and Mexico have been targeted as coastal ocean flux sites (COFS).
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): Several snow accumulation time series derived from ice cores and extending over 3 to 5 centuries are examined for spatial and temporal climatic information. ... A significant observation is the widespread depression of net snow accumulation during the latter part of the "Little Ice Age". This initially suggests that sea surface temperatures were significantly depressed during the same period. However, the available core records indicate generally higher-than-average precipitation rates before that period, implying that influences such as shifted storm tracks or a dustier atmosphere may also have been involved. Without additional spatial data coverage, these observations should properly be studied using a coupled (global) ocean/atmosphere GCM.
Abstract:
Much of what we know about the climate of the United States is derived from data gathered under the auspices of the cooperative climate network. Particular aspects of the way observations are taken can have significant influences on the values of climate statistics derived from the data. These influences are briefly reviewed. The purpose of this paper is to examine their effects on climatic time series. Two other items discussed are: (1) a comparison of true (24-hour) means with means derived from maximums and minimums only, and (2) preliminary work on the times of day at which maximums and minimums are set.
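A minimal sketch of item (1), comparing a true 24-hour mean with the conventional (max + min)/2 estimate. All numbers here are invented hourly temperatures, not cooperative-network data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hourly temperatures (deg C) for one day: a diurnal cycle
# peaking at 3 p.m. and bottoming out at 3 a.m., made asymmetric by a
# second harmonic, plus observational noise.
hours = np.arange(24)
theta = (hours - 9) * np.pi / 12
temps = 15 + 8 * np.sin(theta) + 2 * np.cos(2 * theta) + rng.normal(0, 0.5, 24)

true_mean = temps.mean()                        # "true" 24-hour mean
minmax_mean = (temps.max() + temps.min()) / 2   # mean from max and min only

# With an asymmetric diurnal curve, the (max + min)/2 estimate is
# biased relative to the true 24-hour mean (here, biased low).
bias = minmax_mean - true_mean
```

On this synthetic day the midrange estimate sits below the true mean by roughly 2 °C; the biases studied in the paper additionally depend on the time of day at which observations are taken.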
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): Zooplankton biomass and species composition have been sampled since 1985 at a set of standard locations off Vancouver Island. From these data, I have estimated multi-year average seasonal cycles and time series of anomalies from these averages.
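The climatology-and-anomaly computation described here can be sketched as follows; the biomass numbers are synthetic stand-ins, not the Vancouver Island observations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monthly zooplankton biomass index, 10 years x 12 months:
# a seasonal cycle peaking in summer plus interannual noise.
months = np.arange(12)
cycle = 5 + 3 * np.sin((months - 2) * np.pi / 6)
biomass = cycle + rng.normal(0, 0.8, size=(10, 12))

# Multi-year average seasonal cycle: mean over years for each month.
climatology = biomass.mean(axis=0)

# Anomaly time series: departure of each observation from the
# long-term mean for its calendar month.
anomalies = (biomass - climatology).ravel()
```

By construction the anomalies average to approximately zero for every calendar month, so any remaining structure reflects interannual variability.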
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): Recently, paleoceanographers have been challenged to produce reliable proxies of climate variables that can be incorporated into climate models. In developing proxies using time series of annual radiolarian species fluxes from Santa Barbara Basin, we identify groups of species associated with years of extreme sea surface temperatures and sea level heights.
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): Our objective is to combine terrestrial and oceanic records for reconstructing West Coast climate. Tree rings and marine laminated sediments provide high-resolution, accurately dated proxy data on the variability of climate and on the productivity of the ocean and have been used to reconstruct precipitation, temperature, sea level pressure, primary productivity, and other large-scale parameters. We present here the latest Santa Barbara basin varve chronology for the twentieth century as well as a newly developed tree-ring chronology for Torrey pine.
Abstract:
In this paper we study parameter estimation for time series with asymmetric α-stable innovations. The proposed methods use a Poisson sum series representation (PSSR) for the asymmetric α-stable noise to express the process in a conditionally Gaussian framework. This allows us to implement Bayesian parameter estimation using Markov chain Monte Carlo (MCMC) methods. We further enhance the series representation by introducing a novel approximation of the series residual terms, whose mean and variance we are able to characterise. Simulations illustrate the proposed framework applied to linear time series, estimating the model parameter values and model order P for an autoregressive (AR(P)) model driven by asymmetric α-stable innovations. © 2012 IEEE.
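The paper's PSSR construction and MCMC sampler are not reproduced here; as a minimal illustration of the model class only, the sketch below simulates an AR(1) process driven by asymmetric α-stable innovations using `scipy.stats.levy_stable` (scipy's parameterisation; the `alpha`, `beta`, and `phi` values are arbitrary choices for illustration):

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)

# Asymmetric alpha-stable innovations: alpha < 2 gives heavy tails,
# beta != 0 gives skewness (scipy's parameterisation).
alpha, beta = 1.7, 0.5
n = 500
eps = levy_stable.rvs(alpha, beta, size=n, random_state=rng)

# AR(1) process x_t = phi * x_{t-1} + eps_t driven by the stable noise.
phi = 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]
```

Estimating `phi`, `alpha`, and `beta` from such a series is the hard part the paper addresses; because the stable likelihood has no closed form, it works in the conditionally Gaussian representation rather than with the series directly.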
Abstract:
Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground: unlike maximum a posteriori (MAP) methods, they retain distributional information about uncertainty in latent variables, yet they generally require less computational time than Markov chain Monte Carlo methods. In particular, the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
Abstract:
The accurate prediction of time-changing covariances is an important problem in the modeling of multivariate financial data. However, some of the most popular models suffer from (a) overfitting and multiple local optima, (b) failure to capture shifts in market conditions, and (c) large computational costs. To address these problems we introduce a novel dynamic model for time-changing covariances. Overfitting and local optima are avoided by following a Bayesian approach instead of computing point estimates. Changes in market conditions are captured by assuming a diffusion process in parameter values, and computationally efficient, scalable inference is performed using particle filters. Experiments with financial data show excellent performance of the proposed method relative to current standard models.
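A heavily simplified sketch of the inference idea, in a univariate special case: a log-variance that diffuses as a random walk, tracked with a bootstrap particle filter. The paper's multivariate covariance model is not reproduced, and all constants below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: log-variance h_t follows a random walk (the "diffusion in
# parameter values"); returns y_t are conditionally Gaussian N(0, exp(h_t)).
T, sigma_w = 200, 0.15
h = np.cumsum(sigma_w * rng.normal(size=T))
y = np.exp(h / 2) * rng.normal(size=T)

# Bootstrap particle filter for h_t.
N = 1000
particles = np.zeros(N)
est = np.zeros(T)
for t in range(T):
    # Propagate each particle through the random-walk dynamics.
    particles = particles + sigma_w * rng.normal(size=N)
    # Weight by the N(0, exp(h)) likelihood of y_t (up to a constant).
    logw = -0.5 * (particles + y[t] ** 2 * np.exp(-particles))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.dot(w, particles)          # filtered mean of h_t
    # Multinomial resampling to combat weight degeneracy.
    particles = particles[rng.choice(N, size=N, p=w)]
```

The same propagate-weight-resample loop carries over to the paper's setting, with the scalar `h_t` replaced by the parameters of a time-varying covariance matrix.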
Abstract:
This work applies a variety of multilinear function factorisation techniques to extract appropriate features or attributes from high-dimensional multivariate time series for classification. Recently, a great deal of work has centred on designing time series classifiers using ever more complex feature extraction and machine learning schemes. This paper argues that complex learners and domain-specific feature extraction schemes of this type are not necessarily needed for time series classification, as excellent classification results can be obtained by simply applying a number of existing matrix factorisation or linear projection techniques, which are simple and computationally inexpensive. We highlight this using a geometric separability measure and classification accuracies obtained through experiments on four different high-dimensional multivariate time series datasets. © 2013 IEEE.
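The paper's point, that a simple linear projection can already separate classes, can be sketched on synthetic data: PCA via SVD as the matrix factorisation, followed by a nearest-centroid classifier. The paper's datasets, factorisation methods, and separability measure are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two synthetic classes of multivariate time series, shape
# (n_series, length, channels): class 0 is noise, class 1 adds a
# shared sinusoidal component.
n, length, ch = 40, 50, 3
t = np.arange(length)
X0 = rng.normal(size=(n, length, ch))
X1 = rng.normal(size=(n, length, ch)) + np.sin(0.3 * t)[None, :, None]
X = np.vstack([X0, X1]).reshape(2 * n, -1)   # flatten each series
y = np.repeat([0, 1], n)

# Linear projection via truncated SVD (a simple matrix factorisation).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T                            # first 5 principal components

# Nearest-centroid classifier in the projected space (training accuracy).
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1))
acc = (pred.astype(int) == y).mean()
```

Even this five-dimensional linear projection cleanly separates the two synthetic classes, which is the kind of result the paper reports for its (much harder) real datasets.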
Abstract:
An add-drop filter based on a perfect square resonator can realize a maximum of only 25% power dropping because the confined modes are standing-wave modes. By means of mode coupling between two modes with inverse symmetry properties, a traveling-wave-like filtering response is obtained in a two-dimensional single square cavity filter with cut or circular corners by finite-difference time-domain simulation. The optimized deformation parameters for an add-drop filter can be accurately predicted as the overlapping point of the two coupling modes in an isolated deformed square cavity. More than 80% power dropping can be obtained in a deformed square cavity filter with a side length of 3.01 µm. The free spectral range is determined by the spacing between modes whose mode-index sums differ by 1. © 2007 Optical Society of America.