962 results for: Database Time series InfluxDb Platform for TSDB


Relevance:

100.00%

Publisher:

Abstract:

This thesis presents a general overview of time series databases and their management systems. Attention then focuses on the InfluxDB DBMS. Finally, a project that implements InfluxDB is presented.
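
The abstract gives no implementation details, but a minimal sketch of writing and querying a point with the official influxdb-client package for InfluxDB 2.x looks as follows; the URL, token, org and bucket names are placeholders:

```python
# Minimal sketch, assuming InfluxDB 2.x and the official influxdb-client package.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# Write one measurement point with a tag and a field
point = Point("cpu").tag("host", "server01").field("usage_percent", 12.5)
write_api.write(bucket="my-bucket", record=point)

# Query the last hour back with Flux
tables = client.query_api().query('from(bucket: "my-bucket") |> range(start: -1h)')
for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_value())
```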

Relevance:

100.00%

Publisher:

Abstract:

Management of groundwater systems requires realistic conceptual hydrogeological models as a framework for numerical simulation modelling, but also for system understanding and for communicating this to stakeholders and the broader community. To help address these requirements we developed GVS (Groundwater Visualisation System), a stand-alone desktop software package that uses interactive 3D visualisation and animation techniques. The goal was a user-friendly groundwater management tool that could support a range of existing real-world and pre-processed data, both surface and subsurface, including geology and various types of temporal hydrological information. GVS allows these data to be integrated into a single conceptual hydrogeological model. In addition, 3D geological models produced externally using other software packages can readily be imported into GVS models, as can outputs of simulations (e.g. piezometric surfaces) produced by software such as MODFLOW or FEFLOW. Boreholes can be integrated, showing any down-hole data and properties, including screen information, intersected geology, water level data and water chemistry. Animation is used to display spatial and temporal changes, with time-series data such as rainfall, standing water levels and electrical conductivity illustrating dynamic processes. Time and space variations can be presented using a range of contouring and colour mapping techniques, in addition to interactive plots of time-series parameters. Other types of data, for example demographics and cultural information, can also be readily incorporated. The GVS software runs on a standard Windows or Linux-based PC with a minimum of 2 GB RAM, and the model output is easy and inexpensive to distribute, by download or via USB/DVD/CD. Example models are described here for three groundwater systems in Queensland, northeastern Australia: two unconfined alluvial groundwater systems with intensive irrigation, the Lockyer Valley and the upper Condamine Valley, and the Surat Basin, a large sedimentary basin of confined artesian aquifers. This latter example required more detail in the hydrostratigraphy, correlation of formations with drillholes, and visualisation of simulated piezometric surfaces. Both alluvial-system GVS models were developed during drought conditions to support government strategies for implementing groundwater management. The Surat Basin model was industry-sponsored research for coal seam gas groundwater management and community information and consultation. The "virtual" groundwater systems in these 3D GVS models can be interactively interrogated using standard functions, including production of 2D cross-sections, data selection from the 3D scene, back-end database queries and plot displays. A unique feature is that GVS allows investigation of time-series data across different display modes, both 2D and 3D. GVS has been used successfully as a tool to enhance community and stakeholder understanding and knowledge of groundwater systems and is of value for training and educational purposes. Completed projects confirm that GVS provides powerful support for management and decision making, and serves as a tool for interpreting groundwater system hydrological processes. A highly effective visualisation output is the production of short videos (e.g. 2–5 min) based on sequences of camera 'fly-throughs' and screen images. Further work involves developing support for multi-screen displays and touch-screen technologies, distributed rendering, and gestural interaction systems.
To highlight the visualisation and animation capability of the GVS software, links to related multimedia hosted online are included in the references.
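
GVS itself is a closed desktop package, but its core display idea, several borehole time-series parameters aligned on a common time axis, can be sketched in a few lines. A minimal illustration with pandas/matplotlib, using entirely hypothetical monitoring data (GVS's internal formats and APIs are not reproduced here):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical borehole monitoring data indexed by date (illustrative values)
df = pd.DataFrame(
    {
        "rainfall_mm": [0.0, 12.4, 3.1, 0.0, 8.7, 1.2],
        "water_level_m": [14.2, 13.9, 13.8, 14.0, 13.7, 13.9],
        "ec_uS_cm": [820, 790, 805, 815, 800, 810],
    },
    index=pd.date_range("2008-01-01", periods=6, freq="MS"),
)

# One shared time axis, one panel per time-series parameter
fig, axes = plt.subplots(len(df.columns), 1, sharex=True)
for ax, col in zip(axes, df.columns):
    ax.plot(df.index, df[col])
    ax.set_ylabel(col)
plt.show()
```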

Relevance:

100.00%

Publisher:

Abstract:

The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. The two tasks are complementary: the temporal constraints provide valuable neighborhood information for dimensionality reduction and, conversely, the low-dimensional space allows the dynamics to be learnt efficiently. Solving the two tasks simultaneously allows information to be exchanged between them. If nonlinear models are required to capture the rich complexity of time series, the learning problem becomes harder because the nonlinearities in the two tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models, with the interactions among the linear models captured in a graphical model. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. The proposed framework is evaluated against competing approaches in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
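
The paper's piecewise linear graphical model is not reproduced here, but the two coupled tasks can be illustrated in their simplest, globally linear form: a PCA manifold plus first-order linear dynamics fitted in the latent space. A hedged sketch on synthetic data:

```python
# Linear special case of the idea: reduce dimensionality with PCA, then fit
# first-order linear dynamics z_{t+1} ≈ A z_t in the latent space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic high-dimensional series: a 2-D latent oscillator mapped to 50 dims
t = np.linspace(0, 20, 500)
latent = np.stack([np.sin(t), np.cos(t)], axis=1)
X = latent @ rng.normal(size=(2, 50)) + 0.01 * rng.normal(size=(500, 50))

pca = PCA(n_components=2)
Z = pca.fit_transform(X)                             # low-dimensional manifold

A, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)   # dynamics: Z[1:] ≈ Z[:-1] @ A
Z_pred = Z[:-1] @ A
print("one-step latent prediction MSE:", np.mean((Z_pred - Z[1:]) ** 2))
```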

Relevance:

100.00%

Publisher:

Abstract:

The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. The two tasks are complementary: the temporal constraints provide valuable neighborhood information for dimensionality reduction and, conversely, the low-dimensional space allows the dynamics to be learnt efficiently. Solving the two tasks simultaneously allows information to be exchanged between them. If nonlinear models are required to capture the rich complexity of time series, the learning problem becomes harder because the nonlinearities in the two tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models, with the interactions among the linear models captured in a graphical model. The model structure and parameters are learnt using a variational Bayesian approach, which enables automatic Bayesian model structure selection and hence addresses the problem of over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. The proposed framework is evaluated against competing approaches in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.

Relevance:

100.00%

Publisher:

Abstract:

It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to have grown by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, namely heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner? The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To realise and illustrate this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
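
The dissertation's platform itself is not described in detail here, but the workflow idea, production, interpretation and consumption of data with a maintainable provenance record, can be sketched minimally. All names below are illustrative, not the platform's actual API:

```python
# Minimal sketch of a production → interpretation → consumption workflow with a
# provenance record kept alongside each derived result (illustrative only).
import hashlib
import json
import time

def with_provenance(step_name, func, data, log):
    """Apply an analysis step and append a provenance entry for its input."""
    result = func(data)
    log.append({
        "step": step_name,
        "input_sha256": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
        "timestamp": time.time(),
    })
    return result

provenance = []
raw = [1.0, 2.0, 40.0, 3.0]                         # produced (raw) data
clean = with_provenance("interpret:despike",
                        lambda d: [x for x in d if x < 10], raw, provenance)
mean = with_provenance("consume:mean",
                       lambda d: sum(d) / len(d), clean, provenance)
print(mean)
print(json.dumps(provenance, indent=2))             # auditable record of steps
```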

Relevance:

100.00%

Publisher:

Abstract:

The stochastic nature of oil price fluctuations is investigated over a twelve-year period, using data from an existing database (the USA Energy Information Administration database, available online). We evaluate the scaling exponents of the fluctuations by employing different statistical analysis methods, namely rescaled range analysis (R/S), scaled windowed variance analysis (SWV) and the generalized Hurst exponent (GH) method. Relying on the scaling exponents obtained, we apply a rescaling procedure to investigate the complex characteristics of the probability density functions (PDFs) dominating oil price fluctuations. It is found that the PDFs exhibit scale invariance and in fact collapse onto a single curve when increments are measured over microscales (typically less than 30 days). The time evolution of the distributions is well fitted by a Lévy-type stable distribution. The relevance of a Lévy distribution is made plausible by a simple model of nonlinear transfer. Our results also exhibit a degree of multifractality, as the PDFs change and converge toward a Gaussian distribution at the macroscales.
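
As one example of the methods listed, the rescaled range (R/S) estimate of a scaling exponent can be sketched as follows; this is a toy stand-in on simulated increments, not the paper's code:

```python
# R/S (rescaled range) sketch: the slope of log(mean R/S) vs log(window size)
# estimates the Hurst exponent H (white noise should give H ≈ 0.5).
import numpy as np

def rs_hurst(x, window_sizes):
    logs = []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())       # cumulative deviations from mean
            r = dev.max() - dev.min()           # range of cumulative deviations
            s = w.std(ddof=1)                   # window standard deviation
            if s > 0:
                rs_vals.append(r / s)
        logs.append((np.log(n), np.log(np.mean(rs_vals))))
    slope, _ = np.polyfit(*zip(*logs), 1)
    return slope

rng = np.random.default_rng(1)
increments = rng.normal(size=4096)              # stand-in for price increments
print("estimated H:", rs_hurst(increments, [16, 32, 64, 128, 256, 512]))
```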

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a framework for a telecommunications interface that allows data from sensors embedded in Smart Grid applications to be reliably archived in an appropriate time-series database. The challenge in doing so is two-fold: firstly, the various formats in which sensor data are represented; secondly, the problems of telecoms reliability. A prototype of the authors' framework is detailed, showcasing the main features of the framework in a case study featuring Phasor Measurement Units (PMUs) as the application. Useful analysis of PMU data is achieved whenever data from multiple locations can be compared on a common time axis. The prototype developed highlights the framework's reliability, extensibility and adoptability: features which are largely deferred by industry standards for data representation to proprietary database solutions. The open-source framework presented provides link reliability for any type of Smart Grid sensor and is interoperable with both existing proprietary and open database systems. The features of the authors' framework allow researchers and developers to focus on the core of their real-time or historical analysis applications, rather than having to spend time interfacing with complex protocols.
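
The framework's own protocol handling is not reproduced here, but its central problem, heterogeneous sensor formats aligned onto a common time axis, can be sketched. Field names and record formats below are illustrative only:

```python
# Normalise heterogeneous PMU records into a common (time, location, value)
# schema so streams from multiple locations share one time axis.
import pandas as pd

def normalise(record, source):
    """Map each source's native format onto the common schema (illustrative)."""
    if source == "pmu_csv":
        ts, freq = record.split(",")
        return {"time": pd.Timestamp(ts), "location": "substation_A",
                "freq_hz": float(freq)}
    if source == "pmu_json":
        return {"time": pd.Timestamp(record["t"]), "location": record["loc"],
                "freq_hz": record["f"]}

rows = [
    normalise("2024-01-01T00:00:00Z,49.98", "pmu_csv"),
    normalise({"t": "2024-01-01T00:00:00Z", "loc": "substation_B", "f": 50.02},
              "pmu_json"),
]
df = pd.DataFrame(rows).pivot(index="time", columns="location", values="freq_hz")
print(df)   # one common time axis, one column per PMU location
```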

Relevance:

100.00%

Publisher:

Abstract:

Using a sequence of nested, VAR-based multivariate models, we discuss different layers of restrictions imposed by present-value models (PVMs hereafter) on the VAR in levels for series that are subject to present-value restrictions. Our focus is novel: we are interested in the short-run restrictions entailed by PVMs (Vahid and Engle, 1993, 1997) and their implications for forecasting. Using a well-known database kept by Robert Shiller, we implement a forecasting competition that imposes different layers of PVM restrictions. Our exhaustive investigation of several different multivariate models reveals that better forecasts can be achieved when restrictions are applied to the unrestricted VAR. Moreover, imposing short-run restrictions produces forecast winners 70% of the time for the target variables of PVMs and 63.33% of the time when all variables in the system are considered.
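
The unrestricted VAR baseline against which the PVM restrictions are compared can be sketched with statsmodels; the data here are simulated, whereas the paper uses Shiller's database:

```python
# Fit an unrestricted VAR in levels and produce out-of-sample forecasts.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200
price = np.cumsum(rng.normal(size=n)) + 100          # simulated price level
dividend = 0.04 * price + rng.normal(scale=0.5, size=n)
data = pd.DataFrame({"price": price, "dividend": dividend})

model = VAR(data)
fit = model.fit(maxlags=4, ic="aic")                 # lag order chosen by AIC
forecast = fit.forecast(data.values[-fit.k_ar:], steps=8)
print(forecast)                                      # 8-step-ahead forecasts
```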

Relevance:

100.00%

Publisher:

Abstract:

In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effects model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. Indeed, it is natural to assume that time exerts an influence over the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items (level-1 units) are grouped within time points (level-2 units). The main theoretical contribution is the extension of classical multilevel models to cope with this case. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the EM algorithm. We test the finite-sample properties of the estimators and the validity of our own R code by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
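
The paper's extension (time-dependent random effects at level 2) is not available in standard libraries, but the classic two-level baseline it improves on, a random intercept per auction date, can be sketched with statsmodels on simulated data:

```python
# Classic two-level model: items (level 1) grouped within auction dates
# (level 2), fitted as a random-intercept mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_dates, items_per_date = 30, 40
dates = np.repeat(np.arange(n_dates), items_per_date)
date_effect = rng.normal(scale=0.5, size=n_dates)[dates]   # level-2 effect
quality = rng.normal(size=dates.size)                       # item-level covariate
log_price = 5.0 + 0.8 * quality + date_effect + rng.normal(scale=0.3,
                                                           size=dates.size)

df = pd.DataFrame({"log_price": log_price, "quality": quality, "date": dates})
fit = smf.mixedlm("log_price ~ quality", df, groups=df["date"]).fit()
print(fit.summary())
```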

Relevance:

100.00%

Publisher:

Abstract:

Arrhythmia is a kind of cardiovascular disease that contributes substantially to mortality and poses potentially irremediable danger. It is a life-threatening condition originating from the disorganized propagation of electrical signals in the heart, resulting in desynchronization among the different chambers of the heart. Fundamentally, synchronization means that the phase relationship of electrical activities between the chambers remains coherent, maintaining a constant phase difference over time. If desynchronization occurs due to arrhythmia, the coherent phase relationship breaks down, resulting in a chaotic rhythm that affects the heart's regular pumping mechanism. This phenomenon was explored using phase space reconstruction, a standard technique for analysing time series generated by nonlinear dynamical systems. In this project a novel index is presented for predicting the onset of ventricular arrhythmias. Continuously captured long-term ECG recordings were analysed up to the onset of arrhythmia by the phase space reconstruction method, yielding 2-dimensional images that were then analysed by the box-counting method. The method was tested using ECG data of three different kinds, normal rhythm (NR), ventricular tachycardia (VT) and ventricular fibrillation (VF), extracted from the PhysioNet ECG database. Statistical measures, namely the mean (μ), standard deviation (σ) and coefficient of variation (σ/μ) of the box counts in the phase space diagrams, are derived for a sliding window of 10 beats of the ECG signal. From these statistical analyses, a threshold was derived as an upper bound on the coefficient of variation (CV) for box counting of ECG phase portraits, capable of reliably predicting the impending arrhythmia long before its actual occurrence. As future work, it is planned to validate this prediction tool on a wider population of patients affected by different kinds of arrhythmia, such as atrial fibrillation and bundle branch block, and to set different thresholds for each, in order to confirm its clinical applicability.
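
A hedged sketch of the described pipeline, delay embedding, box counting on the 2-D phase portrait, and the coefficient of variation over a sliding window, is given below; the "10 beats" window is approximated by a fixed number of samples and all data are synthetic:

```python
# Delay embedding + box counting + coefficient of variation (illustrative).
import numpy as np

def phase_portrait(x, delay=8):
    """2-D delay embedding: pairs (x[t], x[t+delay])."""
    return np.column_stack([x[:-delay], x[delay:]])

def box_count(points, n_boxes=32):
    """Count occupied cells in an n_boxes × n_boxes grid over the portrait."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    idx = np.floor((points - mins) / (maxs - mins + 1e-12) * n_boxes).astype(int)
    return len({tuple(i) for i in idx})

rng = np.random.default_rng(0)
ecg_like = np.sin(np.linspace(0, 60 * np.pi, 6000)) + 0.05 * rng.normal(size=6000)

window = 600                                   # stand-in for ~10 beats
counts = np.array([box_count(phase_portrait(ecg_like[s:s + window]))
                   for s in range(0, len(ecg_like) - window, window)], float)
cv = counts.std() / counts.mean()              # coefficient of variation σ/μ
print("box counts:", counts, "CV:", cv)        # compare CV against a threshold
```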

Relevance:

100.00%

Publisher:

Abstract:

Visualization and exploratory analysis are an important part of any data analysis and are made more challenging when the data are voluminous and high-dimensional. One such example is environmental monitoring data, which are often collected over time and at multiple locations, resulting in a geographically indexed multivariate time series. Financial data, although not necessarily containing a geographic component, present another source of high-volume multivariate time series data. We present the mvtsplot function, which provides a method for visualizing multivariate time series data. We outline the basic design concepts and provide some examples of its usage by applying it to a database of ambient air pollution measurements in the United States and to a hypothetical portfolio of stocks.
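
mvtsplot is an R function; the display idea, many series rendered as one image of categorised colour strips over a common time axis, can nevertheless be sketched in Python on simulated data (the tercile coding below mimics mvtsplot's categorisation but is an approximation, not the R implementation):

```python
# Heatmap-style view of many time series: each row one series, colour-coded
# into terciles (low/medium/high) over a shared time axis.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_series, n_time = 12, 120
X = np.cumsum(rng.normal(size=(n_series, n_time)), axis=1)

# Categorise each series into terciles computed per row
terciles = np.quantile(X, [1 / 3, 2 / 3], axis=1, keepdims=True)
coded = (X > terciles[0]).astype(int) + (X > terciles[1]).astype(int)

plt.imshow(coded, aspect="auto", cmap="Greens", interpolation="nearest")
plt.xlabel("time")
plt.ylabel("series")
plt.colorbar(label="tercile")
plt.show()
```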

Relevance:

100.00%

Publisher:

Abstract:

Ongoing zooplankton research at the Plymouth Marine Laboratory has established a time series of zooplankton species since 1988 at L4, a coastal station off Plymouth. Samples were collected by vertical net hauls (WP2 net, mesh 200 µm; UNESCO 1968) from the sea floor (approximately 50 m) to the surface and stored in 4% formalin. Much of the zooplankton analysis has been to the level of "major taxonomic groups" only, and a number of different analysts have participated over the years. The level of expertise has generally been consistent, but users should be aware that levels of taxonomic discrimination may vary over the course of the dataset. The dominant calanoid copepods are generally well discriminated to species throughout. Calanus has not been routinely examined for species determination, the assumption being that the local population is entirely composed of Calanus helgolandicus. In certain years there has been particular interest in Temora stylifera, Centropages chierchiae and other species, and this is reflected in the dataset; the lack of records in other years does not necessarily reflect species absence. We view it as essential for all users of L4 plankton data to establish and maintain contact with the nominated current data originators as well as fully consulting the metadata. While not impinging on free data access, this ensures that this large, species-rich but slightly complex database is being used in the correct way, and that any potential issues with the data are clarified. Furthermore, a proper dialogue with these local experts on the time series will enable, where appropriate, the most recent sampling time points to be used. The data can be downloaded from BODC or from doi:10.1594/PANGAEA.778092 as files for each year by searching for "L4 zooplankton". The most comprehensive dataset is the version downloadable directly from this page. The entire set of zooplankton samples is stored at the Plymouth Marine Laboratory in buffered formalin, and may be available for further taxonomic analysis on request.
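
As a hedged sketch of working with a downloaded file: PANGAEA's tab-delimited text files carry a metadata header terminated by a line containing `*/`, so a minimal pandas reader (the filename below is hypothetical) looks like:

```python
# Read a locally downloaded PANGAEA .tab file, skipping its metadata header.
import pandas as pd

fname = "L4_zooplankton_2010.tab"            # hypothetical local download
with open(fname, encoding="utf-8") as f:
    lines = f.readlines()

# PANGAEA terminates the commented metadata block with a line "*/"; the
# tab-separated column names follow immediately after it.
start = next(i for i, line in enumerate(lines) if line.strip() == "*/") + 1
df = pd.read_csv(fname, sep="\t", skiprows=start)
print(df.head())
```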

Relevance:

100.00%

Publisher:

Abstract:

The environment of ebb-tidal deltas between barrier island systems is characterized by a complex morphology with ebb- and flood-dominated channels, shoals and swash bars connecting the ebb-tidal delta platform to the adjacent island. These morphological features exhibit characteristic surface sediment grain-size distributions and continuously adapt to the prevailing hydrodynamic forces. The mixed-energy tidal inlet Otzumer Balje, between the East Frisian barrier islands of Langeoog and Spiekeroog in the southern North Sea, has been chosen here as a model study area for identifying the relevant hydrodynamic drivers of morphology and sedimentology. We compare the effect of high-energy, wave-dominated storm conditions with that of mid-term, tide-dominated fair-weather conditions on tidal inlet morphology and sedimentology using a process-based numerical model. A multi-fractional approach with five grain-size fractions between 150 and 450 µm allows for the simulation of the corresponding surface sediment grain-size distributions. Net sediment fluxes for distinct conditions are identified: during storm conditions, bed-load sediment transport is generally directed onshore on the shallower ebb-tidal delta shoals, whereas fine-grained suspended sediment bypasses the tidal inlet via wave-driven currents. During fair weather, sediment transport focuses mainly on the inlet throat and the marginal flood channels. We show how the observed sediment grain-size distribution and the morphological response at mixed-energy tidal inlets result from both less frequent, wave-dominated storm conditions and mid-term, tide-dominated fair-weather conditions.