962 results for: Database Time series InfluxDB Platform for TSDB
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): ARIMA analysis was used to compute cross-correlations between principal component axes that described environmental variables, chlorophyll concentration, and zooplankton density for the Sacramento and San Joaquin rivers and Suisun Bay. ... Cross-correlations among the time series may provide information about links between environmental and biological variables within the estuary and the possible influence of climate.
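A minimal sketch of the kind of analysis this abstract describes, using statsmodels on synthetic stand-in series; the variable names, the ARIMA(1,1,0) order, and the lag structure below are assumptions for illustration, not taken from the study:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import ccf

# Synthetic stand-ins for an environmental PC axis and a biological series.
rng = np.random.default_rng(0)
env = rng.standard_normal(200).cumsum()            # environmental PC score
chl = np.roll(env, 3) + rng.standard_normal(200)   # lagged response plus noise

# Prewhiten with an ARIMA fit, then cross-correlate the residuals; peaks in
# the CCF suggest lagged linkage. (Textbook prewhitening filters both series
# with the driver's model; separate fits are a simplification here.)
r_env = ARIMA(env, order=(1, 1, 0)).fit().resid
r_chl = ARIMA(chl, order=(1, 1, 0)).fit().resid
print(np.round(ccf(r_chl, r_env)[:10], 2))         # correlations at lags 0..9
```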
Abstract:
Particle flux in the ocean reflects ongoing biological and geological processes operating under the influence of the local environment. Estimation of this particle flux through sediment trap deployment is constrained by sampler accuracy, particle preservation, and swimmer distortion. Interpretation of specific particle flux is further constrained by indeterminate particle dispersion and the absence of a clear understanding of the sedimentary consequences of ecosystem activity. Nevertheless, the continuous and integrative properties of the particle trap measure, along with the logistic advantage of a long-term moored sampler, provide a set of strategic advantages that appear analogous to those underlying conventional oceanographic survey programs. Emboldened by this perception, several stations along the coast of Southern California and Mexico have been targeted as coastal ocean flux sites (COFS).
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): Several snow accumulation time series derived from ice cores and extending over 3 to 5 centuries are examined for spatial and temporal climatic information. ... A significant observation is the widespread depression of net snow accumulation during the latter part of the "Little Ice Age". This initially suggests sea surface temperatures were significantly depressed during the same period. However, prior to this, the available core records indicate generally higher than average precipitation rates. This also implies that influences such as shifted storm tracks or a dustier atmosphere may have been involved. Without additional spatial data coverage, these observations should properly be studied using a coupled (global) ocean/atmosphere GCM.
Abstract:
Much of what we know about the climate of the United States is derived from data gathered under the auspices of the cooperative climate network. Particular aspects of the way observations are taken can have significant influences on the values of climate statistics derived from the data. These influences are briefly reviewed. The purpose of this paper is to examine their effects on climatic time series. Two other items discussed are: (1) a comparison of true (24-hour) means with means derived from maximums and minimums only, and (2) preliminary work on the times of day at which maximums and minimums are set.
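Item (1) in this abstract, comparing true 24-hour means against the (max + min) / 2 estimate used when only extremes are recorded, can be illustrated in a few lines; the diurnal curve below is made up purely for illustration:

```python
import numpy as np

# Compare the true daily mean of hourly temperatures with the midrange
# estimate derived from the maximum and minimum only.
rng = np.random.default_rng(0)
hours = np.arange(24)
# Hypothetical skewed diurnal cycle plus observation noise.
temps = 15 + 8 * np.sin((hours - 9) * np.pi / 12) ** 3 + rng.normal(0, 0.5, 24)

true_mean = temps.mean()
midrange = (temps.max() + temps.min()) / 2
print(f"true 24-hour mean: {true_mean:.2f}  midrange estimate: {midrange:.2f}")
# The gap between the two is one of the observation-practice biases
# the abstract refers to.
```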
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): Zooplankton biomass and species composition have been sampled since 1985 at a set of standard locations off Vancouver Island. From these data, I have estimated multi-year average seasonal cycles and time series of anomalies from these averages.
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): Recently, paleoceanographers have been challenged to produce reliable proxies of climate variables that can be incorporated into climate models. In developing proxies using time series of annual radiolarian species fluxes from Santa Barbara Basin, we identify groups of species associated with years of extreme sea surface temperatures and sea level heights.
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): Our objective is to combine terrestrial and oceanic records for reconstructing West Coast climate. Tree rings and marine laminated sediments provide high-resolution, accurately dated proxy data on the variability of climate and on the productivity of the ocean and have been used to reconstruct precipitation, temperature, sea level pressure, primary productivity, and other large-scale parameters. We present here the latest Santa Barbara basin varve chronology for the twentieth century as well as a newly developed tree-ring chronology for Torrey pine.
Abstract:
In this paper we study parameter estimation for time series with asymmetric α-stable innovations. The proposed methods use a Poisson sum series representation (PSSR) for the asymmetric α-stable noise to express the process in a conditionally Gaussian framework. That allows us to implement Bayesian parameter estimation using Markov chain Monte Carlo (MCMC) methods. We further enhance the series representation by introducing a novel approximation of the series residual terms in which we are able to characterise the mean and variance of the approximation. Simulations illustrate the proposed framework applied to linear time series, estimating the model parameter values and model order P for an autoregressive (AR(P)) model driven by asymmetric α-stable innovations. © 2012 IEEE.
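As a point of reference, the model class this paper targets can be simulated directly with SciPy's stable distribution. The sketch below only generates data from an AR(2) with asymmetric α-stable innovations (all parameter values are arbitrary choices); it does not implement the paper's PSSR or MCMC estimator:

```python
import numpy as np
from scipy.stats import levy_stable

# AR(2) driven by asymmetric alpha-stable noise.
alpha, beta = 1.7, 0.5         # stability < 2 => heavy tails; beta != 0 => skew
phi = np.array([0.5, -0.3])    # hypothetical AR coefficients
n, p = 1000, len(phi)

eps = levy_stable.rvs(alpha, beta, size=n, random_state=42)
x = np.zeros(n)
for t in range(p, n):
    x[t] = phi @ x[t - p:t][::-1] + eps[t]
# x exhibits occasional extreme jumps that make Gaussian-based estimators
# unreliable, which motivates the conditionally Gaussian PSSR reformulation.
```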
Abstract:
Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in latent variables, unlike maximum a posteriori (MAP) methods, and yet generally requiring less computational time than Markov chain Monte Carlo methods. In particular, the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free-energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First, we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
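The "compactness" failure mode described here can be seen in a toy setting: fitting a single Gaussian q by minimising KL(q‖p) against a bimodal target makes q lock onto one mode and report an overconfident variance. Everything below is an illustrative toy of that property, not the paper's time-series models:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Target p(x): a two-component Gaussian mixture; family for q: one Gaussian.
def log_p(x):
    return np.logaddexp(norm.logpdf(x, -2, 0.7) + np.log(0.5),
                        norm.logpdf(x,  2, 0.7) + np.log(0.5))

def kl_q_p(params, n=20000, seed=0):
    mu, log_s = params
    s = np.exp(log_s)
    rng = np.random.default_rng(seed)          # fixed seed => smooth objective
    x = mu + s * rng.standard_normal(n)        # samples from q
    # Monte Carlo estimate of E_q[log q - log p] = KL(q || p)
    return np.mean(norm.logpdf(x, mu, s) - log_p(x))

res = minimize(kl_q_p, x0=[0.5, 0.0], method="Nelder-Mead")
mu, s = res.x[0], np.exp(res.x[1])
print(f"q collapses onto one mode: mu={mu:.2f}, sigma={s:.2f}")
# sigma comes out near 0.7, far below the spread of p (~2.1): the compact
# q understates uncertainty exactly where the approximation is poorest.
```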
Abstract:
We live in an era of abundant data. This has necessitated the development of new and innovative statistical algorithms to get the most from experimental data. For example, faster algorithms make practical the analysis of larger genomic data sets, allowing us to extend the utility of cutting-edge statistical methods. We present a randomised algorithm that accelerates the clustering of time series data using the Bayesian Hierarchical Clustering (BHC) statistical method. BHC is a general method for clustering any discretely sampled time series data. In this paper we focus on a particular application to microarray gene expression data. We define and analyse the randomised algorithm, before presenting results on both synthetic and real biological data sets. We show that the randomised algorithm leads to substantial gains in speed with minimal loss in clustering quality. The randomised time series BHC algorithm is available as part of the R package BHC, which is available for download from Bioconductor (version 2.10 and above) via http://bioconductor.org/packages/2.10/bioc/html/BHC.html. We have also made available a set of R scripts which can be used to reproduce the analyses carried out in this paper. These are available from the following URL: https://sites.google.com/site/randomisedbhc/.
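The BHC package itself is R. As a rough Python analogue of the underlying task, clustering discretely sampled time series by profile shape, standard agglomerative clustering with a correlation distance looks like the sketch below; this uses synthetic profiles rather than the paper's microarray data, and it is not the Bayesian hypothesis-testing merge rule that BHC actually uses:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy gene-expression-like time series built from 3 latent temporal profiles.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 12)
profiles = [np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t]
X = np.vstack([p + rng.normal(0, 0.2, t.size)
               for p in profiles for _ in range(20)])

# Correlation distance so clusters follow profile shape, not scale.
Z = linkage(pdist(X, metric="correlation"), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(labels)[1:])   # roughly 20 series recovered per cluster
```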
Abstract:
The accurate prediction of time-changing covariances is an important problem in the modeling of multivariate financial data. However, some of the most popular models suffer from (a) overfitting and multiple local optima, (b) failure to capture shifts in market conditions, and (c) large computational costs. To address these problems we introduce a novel dynamic model for time-changing covariances. Overfitting and local optima are avoided by following a Bayesian approach instead of computing point estimates. Changes in market conditions are captured by assuming a diffusion process in parameter values, and, finally, computationally efficient and scalable inference is performed using particle filters. Experiments with financial data show excellent performance of the proposed method with respect to current standard models.
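A minimal bootstrap particle filter for a one-dimensional stochastic volatility model shows the general mechanism the abstract relies on, particles tracking a time-changing variance; the model and parameter values here are illustrative stand-ins, not the paper's multivariate covariance model:

```python
import numpy as np

# Stochastic volatility model:
#   h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t,   y_t ~ N(0, exp(h_t))
rng = np.random.default_rng(0)
mu, phi, sigma = -1.0, 0.97, 0.2       # assumed parameters
T, N = 300, 2000                       # time steps, particles

h_true = np.full(T, mu)
for t in range(1, T):
    h_true[t] = mu + phi * (h_true[t-1] - mu) + sigma * rng.standard_normal()
y = np.exp(h_true / 2) * rng.standard_normal(T)

particles = mu + sigma * rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    # Propagate through the state transition, weight by the observation
    # likelihood (log N(y_t; 0, exp(h)) up to a constant), then resample.
    particles = mu + phi * (particles - mu) + sigma * rng.standard_normal(N)
    logw = -0.5 * (y[t]**2 * np.exp(-particles) + particles)
    w = np.exp(logw - logw.max()); w /= w.sum()
    est[t] = w @ particles
    particles = rng.choice(particles, size=N, p=w)
print("RMSE in log-variance:", np.sqrt(np.mean((est - h_true) ** 2)))
```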
Abstract:
This work applies a variety of multilinear function factorisation techniques to extract appropriate features or attributes from high dimensional multivariate time series for classification. Recently, a great deal of work has centred around designing time series classifiers using more and more complex feature extraction and machine learning schemes. This paper argues that complex learners and domain specific feature extraction schemes of this type are not necessarily needed for time series classification, as excellent classification results can be obtained by simply applying a number of existing matrix factorisation or linear projection techniques, which are simple and computationally inexpensive. We highlight this using a geometric separability measure and classification accuracies obtained through experiments on four different high dimensional multivariate time series datasets. © 2013 IEEE.
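The paper's point, that simple linear projections can suffice, can be sketched with scikit-learn: flatten each multivariate series, project with PCA, and classify with one nearest neighbour. The synthetic two-class data below is a stand-in for the four benchmark datasets, which are not specified here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Two classes of 3-channel series that differ in dominant frequency.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
def make(freq, n):
    return np.stack([
        np.concatenate([np.sin(2*np.pi*freq*t + rng.normal(0, .3))
                        + rng.normal(0, .2, t.size) for _ in range(3)])
        for _ in range(n)])
X = np.vstack([make(2, 60), make(3, 60)])
y = np.repeat([0, 1], 60)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
pca = PCA(n_components=10).fit(Xtr)          # cheap linear projection as features
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(Xtr), ytr)
print("test accuracy:", clf.score(pca.transform(Xte), yte))
```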
Abstract:
In modern process industry, it is often difficult to analyze a manufacture process due to its numerous time-series data. Analysts wish to not only interpret the evolution of data over time in a working procedure, but also examine the changes in the whole production process through time. To meet such analytic requirements, we have developed ProcessLine, an interactive visualization tool for a large amount of time-series data in process industry. The data are displayed in a fisheye timeline. ProcessLine provides good overviews for the whole production process and details for the focused working procedure. A preliminary user study using beer industry production data has shown that the tool is effective.
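A fisheye timeline of the general sort ProcessLine uses can be approximated in a few lines of matplotlib by distorting the time axis around a focus point (a Furnas-style distortion); this is a sketch of the focus+context idea, not the actual tool:

```python
import numpy as np
import matplotlib.pyplot as plt

def fisheye(x, f, d=4.0):
    """Map [0,1] -> [0,1], magnifying around focus f (Furnas distortion)."""
    x = np.asarray(x, dtype=float)
    L = np.where(x <= f, f, 1.0 - f)           # distance from focus to boundary
    u = np.abs(x - f) / np.where(L == 0, 1.0, L)
    g = (d + 1.0) * u / (d * u + 1.0)          # expands near focus, compresses far
    return f + np.sign(x - f) * g * L

# A long noisy series; the region near t = 0.6 gets most of the screen.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
y = np.sin(40 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
plt.plot(fisheye(t, f=0.6), y, lw=0.5)
plt.xlabel("fisheye-distorted time (focus at t = 0.6)")
plt.savefig("fisheye_timeline.png", dpi=150)
```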