872 results for Elements, High Throughput Data, electrophysiology, data processing, real-time analysis


Relevance:

100.00%

Publisher:

Abstract:

With the emergence of large-volume, high-speed streaming data, recent techniques for stream mining of CFIs (closed frequent itemsets) become inefficient. When concept drift occurs at a slow rate in high-speed data streams, the change in information across adjacent sliding windows is negligible, so the user loses no information if the window is slid by multiple transactions at a time. We therefore propose a novel approach for mining CFIs cumulatively by sliding the window with a width ≥ 1 over high-speed data streams. However, mining CFIs cumulatively over a stream is nontrivial, because such growth may generate an exponential number of candidates for closure checking. In this study, we develop an efficient algorithm, stream-close, for mining CFIs over streams by exploiting some interesting properties. Our performance study reveals that stream-close achieves good scalability and yields promising results.
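
The sliding-window setup described in this abstract can be illustrated with a minimal Python sketch. This is not the paper's stream-close algorithm (which exploits closure properties to avoid re-mining from scratch) but a naive baseline: the window advances by a slide width ≥ 1 transactions, and closed frequent itemsets are re-derived only once per slide rather than per transaction. Window size, slide width, and support threshold below are illustrative assumptions.

```python
from itertools import combinations
from collections import Counter

def closed_frequent_itemsets(window, min_support):
    """Naively enumerate closed frequent itemsets in one window."""
    counts = Counter()
    for txn in window:
        for k in range(1, len(txn) + 1):
            for itemset in combinations(sorted(txn), k):
                counts[itemset] += 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    # An itemset is closed if no proper superset has the same support.
    return {
        s: c for s, c in frequent.items()
        if not any(set(s) < set(t) and frequent[t] == c for t in frequent)
    }

def mine_stream(stream, window_size, slide_width, min_support):
    """Slide the window by slide_width (>= 1) transactions per step."""
    window = []
    for i, txn in enumerate(stream):
        window.append(txn)
        if len(window) > window_size:
            window.pop(0)
        # Re-mine only once per slide_width arrivals, not per transaction.
        if len(window) == window_size and (i + 1) % slide_width == 0:
            yield i, closed_frequent_itemsets(window, min_support)

stream = [{"a","b","c"}, {"a","b"}, {"b","c"}, {"a","c"}, {"a","b","c"}, {"b"}]
for pos, cfis in mine_stream(stream, window_size=4, slide_width=2, min_support=2):
    print(pos, cfis)
```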

Relevance:

100.00%

Publisher:

Abstract:

Presentation slides as part of the Janet network end-to-end performance initiative

Relevance:

100.00%

Publisher:

Abstract:

Supporting presentation slides from the Janet network end-to-end performance initiative

Relevance:

100.00%

Publisher:

Abstract:

The task in text retrieval is to find the subset of a collection of documents relevant to a user's information request, usually expressed as a set of words. Classically, documents and queries are represented as vectors of word counts. In its simplest form, relevance is defined as the dot product between a document vector and a query vector, a measure of the number of common terms. A central difficulty in text retrieval is that the presence or absence of a word is not sufficient to determine relevance to a query. Linear dimensionality reduction has been proposed as a technique for extracting underlying structure from the document collection. In some domains (such as vision) dimensionality reduction reduces computational complexity; in text retrieval it is more often used to improve retrieval performance. We propose an alternative and novel technique that produces sparse representations constructed from sets of highly related words. Documents and queries are represented by their distance to these sets, and relevance is measured by the number of common clusters. This technique significantly improves retrieval performance, is efficient to compute, and shares properties with the optimal linear projection operator and the independent components of documents.
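
A toy Python sketch of the two relevance measures contrasted in this abstract. The two-cluster vocabulary below is a hypothetical placeholder; the paper induces its clusters of highly related words from the collection itself. It shows how a document with no literal term overlap with the query can still be found relevant through a shared cluster.

```python
import numpy as np

# Hypothetical vocabulary and word clusters (illustrative only).
vocab = ["car", "auto", "engine", "stock", "market", "share"]
clusters = [{"car", "auto", "engine"}, {"stock", "market", "share"}]

def count_vector(words):
    """Classical representation: vector of word counts over the vocabulary."""
    v = np.zeros(len(vocab))
    for w in words:
        if w in vocab:
            v[vocab.index(w)] += 1
    return v

def dot_relevance(doc, query):
    """Baseline relevance: dot product of word-count vectors."""
    return float(count_vector(doc) @ count_vector(query))

def cluster_signature(words, threshold=1):
    """Sparse representation: the set of clusters the text touches."""
    return {i for i, c in enumerate(clusters)
            if len(c & set(words)) >= threshold}

def cluster_relevance(doc, query):
    """Relevance as the number of clusters shared by doc and query."""
    return len(cluster_signature(doc) & cluster_signature(query))

doc = ["the", "auto", "engine", "stalled"]
query = ["car", "repair"]
print(dot_relevance(doc, query))      # 0.0: no common terms
print(cluster_relevance(doc, query))  # 1: both touch the vehicle cluster
```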

Relevance:

100.00%

Publisher:

Abstract:

We develop general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit recent nonparametric asymptotic distributional results, are both easy to implement and highly accurate in empirically realistic situations. We also illustrate that properly accounting for the measurement errors in the volatility forecast evaluations reported in the existing literature can result in markedly higher estimates for the true degree of return volatility predictability.
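
A minimal Python sketch of the feasible realized-volatility benchmark underlying such loss functions: realized variance is the sum of squared high-frequency returns, and its measurement error shrinks as the sampling frequency grows, which is why a loss like (forecast − RV)² computed against a coarsely sampled benchmark overstates forecast error. The simulated variance level and sampling grids are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def realized_volatility(intraday_returns):
    """Realized variance: sum of squared high-frequency returns."""
    return np.sum(np.asarray(intraday_returns) ** 2)

# Simulate one trading day with true integrated variance 0.04,
# sampled at m intraday intervals (30-min, 5-min, 1-min grids).
true_iv = 0.04
for m in (13, 78, 390):
    returns = rng.normal(0.0, np.sqrt(true_iv / m), size=m)
    rv = realized_volatility(returns)
    print(f"m={m:4d}  RV={rv:.4f}  (true IV={true_iv})")
# As m grows, RV converges to the integrated variance, so the
# measurement error in the feasible benchmark shrinks.
```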

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this note is to discuss the role of high frequency data in ecological modelling and to identify some of the data requirements for the further development of ecological models for operational oceanography. There is a pressing requirement for the establishment of data acquisition systems for key ecological variables with high spatial and temporal coverage. Such a system will facilitate the development of operational models. It is envisaged that both in-situ and remotely sensed measurements will need to be combined to achieve this aim.

Relevance:

100.00%

Publisher:

Abstract:

A first-stage collision database is assembled which contains electron-impact excitation, ionization, and recombination rate coefficients for Be, Be+, Be2+, and Be3+. The first-stage database is constructed using the R-matrix with pseudo-states, time-dependent close-coupling, and perturbative, distorted-wave methods. A second-stage collision database is then assembled which contains generalized collisional-radiative and radiated power loss coefficients. The second-stage database is constructed by solution of collisional-radiative equations in the quasi-static equilibrium approximation using the first-stage database. Both collision database stages reside in electronic form at the ORNL Controlled Fusion Atomic Data Center and in the ADAS database, and are easily accessed over the worldwide internet. © 2007 Elsevier Inc. All rights reserved.
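
As an illustration of the second-stage computation, the following Python sketch solves quasi-static collisional-radiative equilibrium for a hypothetical three-level atom. All rates below are made-up placeholders, not values from the ORNL or ADAS databases: excited-state populations are obtained by setting their time derivatives to zero while holding the ground state fixed.

```python
import numpy as np

# Hypothetical 3-level toy atom: level 0 = ground (held fixed),
# levels 1, 2 = excited.  R[i, j] = total rate (s^-1) for i -> j,
# combining electron-impact terms (n_e * q) and radiative decay (A).
n_e = 1e19                            # electron density, m^-3
q = np.array([[0.0,   2e-15, 5e-16],  # rate coefficients, m^3 s^-1
              [1e-15, 0.0,   8e-16],
              [3e-16, 6e-16, 0.0  ]])
A = np.array([[0.0, 0.0, 0.0],        # Einstein A coefficients, s^-1
              [6e8, 0.0, 0.0],
              [2e8, 4e7, 0.0]])
R = n_e * q + A

# Quasi-static equilibrium: dN_j/dt = 0 for excited j, i.e.
# sum_i R[i, j] * N_i  -  N_j * sum_k R[j, k]  =  0.
N0 = 1.0                 # ground-state population (normalized)
M = np.zeros((2, 2))
b = np.zeros(2)
for j in (1, 2):
    for i in (1, 2):
        M[j - 1, i - 1] += R[i, j]
    M[j - 1, j - 1] -= R[j].sum()
    b[j - 1] = -R[0, j] * N0   # feeding from the ground state
N_exc = np.linalg.solve(M, b)
print("excited populations relative to ground:", N_exc)
```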

Relevance:

100.00%

Publisher:

Abstract:

This work compares open-source and proprietary transport protocols for high-speed data transmission over IP networks. The ubiquitous TCP needs significant improvement, since it was developed as a general-purpose transport protocol and first introduced four decades ago; in today's networks it no longer fits all communication needs. For this reason, other transport protocols have been developed and successfully used, e.g., for Big Data movement. In the scope of this research, the following protocols were investigated for their efficiency on 10 Gbps links: UDT, RBUDP, MTP, and RWTP. The protocols were tested under different impairments, such as round-trip times up to 400 ms and packet loss up to 2%. The investigated parameters are the data rate under different network conditions, the CPU load at sender and receiver during the experiments, the size of the feedback data, the CPU usage per Gbps, and the amount of feedback data per GiB of effectively transmitted data. The best performance and fairest resource consumption were observed with RWTP; among the open-source projects, the best behavior was shown by RBUDP.
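
The measurement harness for such a comparison is straightforward to sketch. The Python snippet below is a minimal baseline over plain TCP (not any of the protocols tested in the study, and with hypothetical port and transfer-size choices): it times a bulk transfer and reports goodput. Impairments such as the study's 400 ms RTT and 2% packet loss would be injected externally, e.g. with Linux netem, rather than in the harness itself.

```python
import socket
import threading
import time

CHUNK = 1 << 20          # 1 MiB per send
TOTAL = 200 * CHUNK      # 200 MiB bulk transfer (illustrative size)
PORT = 50007             # hypothetical port choice

def server():
    """Accept one connection and drain TOTAL bytes."""
    with socket.create_server(("127.0.0.1", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received = 0
            while received < TOTAL:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)

def client():
    """Send TOTAL bytes and report achieved goodput."""
    with socket.create_connection(("127.0.0.1", PORT)) as s:
        payload = b"\0" * CHUNK
        start = time.perf_counter()
        sent = 0
        while sent < TOTAL:
            s.sendall(payload)
            sent += CHUNK
        elapsed = time.perf_counter() - start
        print(f"goodput: {sent * 8 / elapsed / 1e9:.2f} Gbps")

t = threading.Thread(target=server)
t.start()
time.sleep(0.2)          # give the listener time to start
client()
t.join()
```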

Relevance:

100.00%

Publisher:

Abstract:

This note develops general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit the recent asymptotic distributional results in Barndorff-Nielsen and Shephard (2002a), are both easy to implement and highly accurate in empirically realistic situations. On properly accounting for the measurement errors in the volatility forecast evaluations reported in Andersen, Bollerslev, Diebold and Labys (2003), the adjustments result in markedly higher estimates for the true degree of return-volatility predictability.