25 results for mining data streams
at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
NanoStreams is a consortium project funded by the European Commission under its FP7 programme and a major effort to address the challenges of processing vast amounts of data in real time, with a markedly lower carbon footprint than the state of the art. The project addresses both the energy challenge and the high performance required by emerging applications in real-time streaming data analytics. NanoStreams achieves this goal by designing and building disruptive micro-server solutions incorporating real-silicon prototype micro-servers based on System-on-Chip and reconfigurable hardware technologies.
Abstract:
A power- and resource-efficient ‘dynamic-range utilisation’ technique to increase the operational capacity of DSP IP cores by exploiting redundancy in the data representation of sampled analogue input data is presented. By cleverly partitioning the dynamic range into separable processing threads, several data streams are computed concurrently on the same hardware. Unlike existing techniques, which act solely to reduce power consumption due to sign extension, here the dynamic range is exploited to increase operational capacity while still achieving reduced power consumption. This extends an existing system-level, power-efficient framework for the design of low-power DSP IP cores, which, when applied to the design of an FFT IP core in a digital receiver system, gives an architecture requiring 50% fewer multipliers, 12% fewer slices and 51%-56% less power.
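The essence of dynamic-range partitioning can be sketched in software: when samples occupy only the lower bits of the datapath word, the otherwise redundant sign-extension bits can carry a second stream, so one arithmetic unit services both concurrently. A minimal numpy sketch with assumed word and sample widths (unsigned data for simplicity; a real design must also isolate carries and handle signed arithmetic):

```python
import numpy as np

# Two analogue-sampled streams quantised to 12 bits; the datapath word is
# 32 bits, so the upper bits are normally wasted on sign extension.
BITS = 12
GUARD = 4                       # head-room bits so partial sums cannot collide
SLOT = BITS + GUARD             # each "thread" occupies 16 bits of the word

def pack(a, b):
    """Place stream b in the upper slot and stream a in the lower slot."""
    return (b.astype(np.int64) << SLOT) + a.astype(np.int64)

def unpack(w):
    """Recover the two per-slot values from a packed word."""
    return w & ((1 << SLOT) - 1), w >> SLOT

rng = np.random.default_rng(0)
a = rng.integers(0, 2**BITS, 8)
b = rng.integers(0, 2**BITS, 8)

# One shared accumulation acts on both streams at once; with 4 guard bits,
# 8 additions cannot overflow a slot (8 * (2^12 - 1) < 2^16).
acc = pack(a, b).sum()
sa, sb = unpack(acc)
assert sa == a.sum() and sb == b.sum()
```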
Abstract:
An orthogonal vector approach is proposed for the synthesis of multi-beam directional modulation (DM) transmitters. These systems can concurrently project independent data streams into different specified spatial directions while distorting signal constellations in all other directions. Simulated bit error rate (BER) spatial distributions are presented for various multi-beam system configurations in order to illustrate representative examples of the physical-layer security performance enhancement that can be achieved.
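The core of the orthogonal vector idea can be illustrated in a few lines: beamforming weights deliver the intended symbols in the secured directions, and an interference vector drawn from the orthogonal complement of the steering vectors scrambles constellations everywhere else. A minimal numpy sketch under assumed parameters (an 8-element half-wavelength linear array and two QPSK streams; all names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 8, 0.5                                   # elements, spacing (wavelengths)

def steer(theta_deg):
    """Array steering vector for a far-field direction theta (degrees)."""
    n = np.arange(N)
    return np.exp(2j * np.pi * d * n * np.sin(np.deg2rad(theta_deg)))

A = np.column_stack([steer(-30), steer(20)])    # two secured directions
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)    # one QPSK symbol per stream

# Minimum-norm weights reproducing the intended symbols: solve A^H w = s.
w = np.linalg.lstsq(A.conj().T, s, rcond=None)[0]

# Orthogonal injection: a random vector from the null space of A^H distorts
# the constellation in every direction except the two secured ones.
Q, _ = np.linalg.qr(A)                          # orthonormal basis of col(A)
P_null = np.eye(N) - Q @ Q.conj().T             # projector onto its complement
w_dm = w + P_null @ (rng.standard_normal(N) + 1j * rng.standard_normal(N))

for th in (-30, 20, 45):                        # received symbol vs direction
    print(th, np.round(steer(th).conj() @ w_dm, 3))
```

At -30 and 20 degrees the intended QPSK symbols emerge unchanged; at 45 degrees the received point is corrupted by the injected interference.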
Abstract:
A 10 GHz Fourier Rotman lens enabled dynamic directional modulation (DM) transmitter is experimentally evaluated. Bit error rate (BER) performance is obtained via real-time data transmission. It is shown that Fourier Rotman DM functionality enhances system security performance in terms of a narrower decodable low-BER region and higher BER values in the BER sidelobes, especially under high signal-to-noise ratio (SNR) scenarios. This enhancement is achieved by controlled corruption of constellation diagrams in IQ space through orthogonal injection of interference. Furthermore, the paper gives the first report of a functional dual-beam DM transmitter, which can simultaneously project two independent data streams into two different spatial directions while scrambling the information signals along all other directions.
Abstract:
Compensation for the dynamic response of a temperature sensor usually involves the estimation of its input on the basis of the measured output and model parameters. In the case of temperature measurement, the sensor dynamic response is strongly dependent on the measurement environment and fluid velocity. Estimation of time-varying sensor model parameters therefore requires continuous in situ identification. This can be achieved by employing two sensors with different dynamic properties, and exploiting structural redundancy to deduce the sensor models from the resulting data streams. Most existing approaches to this problem assume first-order sensor dynamics. In practice, however, second-order models are more reflective of the dynamics of real temperature sensors, particularly when they are encased in a protective sheath. As such, this paper presents a novel difference equation approach to solving the blind identification problem for sensors with second-order models. The approach is based on estimating an auxiliary ARX model whose parameters are related to the desired sensor model parameters through a set of coupled non-linear algebraic equations. The ARX model can be estimated using conventional system identification techniques and the non-linear equations can be solved analytically to yield estimates of the sensor models. Simulation results are presented to demonstrate the efficiency of the proposed approach under various input and parameter conditions.
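To make the auxiliary-ARX idea concrete, here is a minimal sketch for the simpler first-order case (the paper itself treats second-order models): eliminating the shared, unknown input from the two sensor equations yields an ARX relation between the two outputs, whose least-squares estimates can be inverted analytically for the sensor parameters. Unity static gain is assumed, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two first-order sensors with unity static gain,
#   y_i(k) = a_i*y_i(k-1) + (1 - a_i)*u(k),
# measuring the same unknown input u.
a1, a2 = 0.90, 0.70
u = rng.standard_normal(5000).cumsum() * 0.01 + rng.standard_normal(5000)

def sensor(a, u):
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + (1 - a) * u[k]
    return y

y1, y2 = sensor(a1, u), sensor(a2, u)

# Eliminating u gives the auxiliary ARX model
#   y1(k) = t1*y1(k-1) + t2*y2(k) + t3*y2(k-1),
# with t1 = a1, t2 = b1/b2, t3 = -a2*b1/b2, where b_i = 1 - a_i.
Phi = np.column_stack([y1[:-1], y2[1:], y2[:-1]])
t1, t2, t3 = np.linalg.lstsq(Phi, y1[1:], rcond=None)[0]

# Solve the coupled equations analytically for the sensor parameters.
a1_hat, a2_hat = t1, -t3 / t2
print(f"a1 = {a1_hat:.3f} (true {a1}), a2 = {a2_hat:.3f} (true {a2})")
```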
Abstract:
Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images that are produced by most major ground-based time-domain surveys with large-format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next-generation all-sky surveys, and significant effort is now being invested to solve the problem computationally. In this paper, we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of approximately 32 000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20 × 20 pixel stamp around the centre of the candidates. This differs from previous work in that it works directly on the pixels rather than relying on catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25 per cent of the training data. We find the best results from the random forest classifier and demonstrate that by accepting a false positive rate of 1 per cent, the classifier initially suggests a missed detection rate of around 10 per cent. However, we also find that a combination of bright star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6 per cent.
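A minimal scikit-learn sketch of this real-bogus pipeline, using synthetic stand-in stamps rather than the Pan-STARRS1 data: a random forest is trained on flattened 20 × 20 pixel stamps, the decision threshold is set to a 1 per cent false positive rate, and the missed detection rate is read off at that threshold. All data and numbers below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in data: flattened 20x20 stamps (400 features). "Real" transients get
# a central PSF-like blob; "bogus" detections are pure noise. 1=real, 0=bogus.
n = 4000
xx, yy = np.meshgrid(np.arange(20), np.arange(20))
psf = np.exp(-((xx - 9.5) ** 2 + (yy - 9.5) ** 2) / 8.0).ravel()
X = rng.normal(0, 1, (n, 400))
y = rng.integers(0, 2, n)
X[y == 1] += 2.0 * psf                      # real detections carry the blob

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Pick the decision threshold giving a 1 per cent false positive rate on the
# bogus class, then report the corresponding missed detection rate.
p = clf.predict_proba(X_te)[:, 1]
thresh = np.quantile(p[y_te == 0], 0.99)
mdr = np.mean(p[y_te == 1] < thresh)
print(f"missed detection rate at 1% FPR: {mdr:.3f}")
```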
Abstract:
The upcoming IEEE 802.11ac standard boosts the throughput of the previous IEEE 802.11n by adding wider 80 MHz and 160 MHz channels with up to 8 antennas (versus a 40 MHz channel and 4 antennas in 802.11n). This necessitates new 1-8 stream 256/512-point Fast Fourier Transform (FFT) / inverse FFT (IFFT) processing with 80/160 MSample/s throughput. Although there is abundant related work, it all fails to meet the requirements of IEEE 802.11ac FFT/IFFT on point size, throughput and multiple data streams at the same time. This paper proposes the first software-defined FFT/IFFT architecture as a solution. By making use of a customised soft stream processor on FPGA, we show how a software-defined FFT architecture can meet all the requirements of IEEE 802.11ac with low cost and high resource efficiency. When compared with the dedicated Xilinx FFT core, our implementation uses only one third of the resources while delivering up to three times the resource efficiency.
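The 802.11ac corner case translates into a throughput requirement that is easy to compute, and the multi-stream workload has a simple functional model. A brief numpy sketch of that workload (a model of what the hardware must sustain, not of the FPGA soft stream processor itself):

```python
import numpy as np

# IEEE 802.11ac corner case: a 160 MHz channel implies a 512-point FFT at
# 160 MSample/s, with up to 8 spatial streams processed concurrently.
fft_size, sample_rate, n_streams = 512, 160e6, 8

transforms_per_sec = sample_rate / fft_size          # per stream
print(f"{transforms_per_sec:.0f} FFTs/s per stream, "
      f"{n_streams * transforms_per_sec:.0f} FFTs/s total")

# Functional model: one batch of symbols from all 8 streams, transformed in
# a single vectorised call (one FFT per stream along the last axis).
rng = np.random.default_rng(0)
symbols = (rng.standard_normal((n_streams, fft_size))
           + 1j * rng.standard_normal((n_streams, fft_size)))
spectra = np.fft.fft(symbols, axis=-1)
recovered = np.fft.ifft(spectra, axis=-1)
assert np.allclose(recovered, symbols)
```

The arithmetic alone shows why a single fixed-configuration core struggles: 312 500 transforms per second per stream, 2.5 million in total across 8 streams, with the point size switching between 256 and 512 by channel width.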
Abstract:
With Tweet volumes reaching 500 million a day, sampling is inevitable for any application using Twitter data. Realizing this, data providers such as Twitter, Gnip and Boardreader license sampled data streams priced in accordance with the sample size. Big Data applications working with sampled data would be interested in working with a large enough sample that is representative of the universal dataset. Previous work focusing on the representativeness issue has considered ensuring that the global occurrence rates of key terms can be reliably estimated from the sample. Present technology allows sample size estimation in accordance with probabilistic bounds on occurrence rates for the case of uniform random sampling. In this paper, we consider the problem of further improving sample size estimates by leveraging stratification in Twitter data. We analyze our estimates through an extensive study using simulations and real-world data, establishing the superiority of our method over uniform random sampling. Our work provides the technical know-how for data providers to expand their portfolio to include stratified sampled datasets, while applications benefit by being able to monitor more topics/events at the same data and computing cost.
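The kind of bound involved can be sketched briefly: under uniform random sampling, Hoeffding's inequality gives a sample size guaranteeing an occurrence-rate estimate within ±ε with confidence 1−δ, and a stratified design with Neyman allocation can match the same variance with far fewer samples when strata differ. A toy numpy sketch with made-up strata (a standard textbook calculation, not necessarily the paper's exact estimator):

```python
import numpy as np

# Sample size for estimating a term's occurrence rate within +/- eps with
# confidence 1 - delta, under uniform random sampling (Hoeffding bound).
eps, delta = 0.01, 0.05
n_uniform = int(np.ceil(np.log(2 / delta) / (2 * eps**2)))

# Illustrative stratification: three strata (e.g. tweet sources) with known
# weights and per-stratum occurrence rates; these numbers are invented.
W = np.array([0.5, 0.3, 0.2])
p = np.array([0.02, 0.10, 0.30])
sigma = np.sqrt(p * (1 - p))

# Target the variance the uniform design achieves at n_uniform (worst-case
# p = 0.5, as Hoeffding assumes), then size the stratified design (Neyman
# allocation) to match it.
V_target = 0.25 / n_uniform
n_strat = int(np.ceil((W @ sigma) ** 2 / V_target))
alloc = np.ceil(n_strat * W * sigma / (W @ sigma)).astype(int)

print(f"uniform: {n_uniform}, stratified: {n_strat}, per-stratum: {alloc}")
```

With these made-up strata the stratified design needs roughly a quarter of the uniform sample, which is the economic argument for providers selling stratified sampled datasets.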
Abstract:
In the last decade, data mining has emerged as one of the most dynamic and lively areas in information technology. Although many algorithms and techniques for data mining have been proposed, they either focus on domain-independent techniques or on very specific domain problems. A general requirement in bridging the gap between academia and business is to cater to general domain-related issues surrounding real-life applications, such as constraints, organizational factors, domain expert knowledge, domain adaptation, and operational knowledge. Unfortunately, these either have not been addressed, or have not been sufficiently addressed, in current data mining research and development. Domain-Driven Data Mining (D3M) aims to develop general principles, methodologies, and techniques for modeling and merging comprehensive domain-related factors and synthesized ubiquitous intelligence surrounding problem domains with the data mining process, and discovering knowledge to support business decision-making. This paper aims to report original, cutting-edge, and state-of-the-art progress in D3M. It covers theoretical and applied contributions aiming to: 1) propose next-generation data mining frameworks and processes for actionable knowledge discovery, 2) investigate effective (automated, human- and machine-centered and/or human-machine-co-operated) principles and approaches for acquiring, representing, modelling, and engaging ubiquitous intelligence in real-world data mining, and 3) develop workable and operational systems balancing technical significance and application concerns, and converting and delivering actionable knowledge into operational application rules to seamlessly engage application processes and systems.
Abstract:
Background. The assembly of the tree of life has seen significant progress in recent years but algae and protists have been largely overlooked in this effort. Many groups of algae and protists have ancient roots and it is unclear how much data will be required to resolve their phylogenetic relationships for incorporation in the tree of life. The red algae, a group of primary photosynthetic eukaryotes more than a billion years old, provide the earliest fossil evidence for eukaryotic multicellularity and sexual reproduction. Despite this evolutionary significance, their phylogenetic relationships are understudied. This study aims to infer a comprehensive red algal tree of life at the family level from a supermatrix containing data mined from GenBank. We aim to locate remaining regions of low support in the topology, evaluate their causes and estimate the amount of data required to resolve them. Results. Phylogenetic analysis of a supermatrix of 14 loci and 98 red algal families yielded the most complete red algal tree of life to date. Visualization of statistical support showed the presence of five poorly supported regions. Causes for low support were identified with statistics about the age of the region, data availability and node density, showing that poor support has different origins in different parts of the tree. Parametric simulation experiments yielded optimistic estimates of how much data will be needed to resolve the poorly supported regions (ca. 10³ to ca. 10⁴ nucleotides for the different regions). Nonparametric simulations gave a markedly more pessimistic image, with some regions requiring more than 2.8 × 10⁵ nucleotides or not achieving the desired level of support at all. The discrepancies between parametric and nonparametric simulations are discussed in light of our dataset and known attributes of both approaches. Conclusions. Our study takes the red algae one step closer to meaningful inclusion in the tree of life. In addition to the recovery of stable relationships, the recognition of five regions in need of further study is a significant outcome of this work. Based on our analyses of current availability and future requirements of data, we make clear recommendations for forthcoming research.
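To clarify what "nonparametric simulation" means in this setting: columns of the real supermatrix are bootstrap-resampled to build pseudo-alignments of increasing length, and support for each weak region is re-measured at each length. A minimal sketch of the resampling step only, with placeholder data (tree inference and support measurement would be done with an external phylogenetics package):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the real 98-family supermatrix: rows are taxa, columns are
# aligned sites encoded 0-3 (A, C, G, T).
n_taxa, n_sites = 98, 20000
alignment = rng.integers(0, 4, size=(n_taxa, n_sites))

def pseudo_alignment(alignment, target_len, rng):
    """Bootstrap columns with replacement to a chosen alignment length."""
    cols = rng.integers(0, alignment.shape[1], size=target_len)
    return alignment[:, cols]

# Lengths spanning the estimates quoted above, ca. 10^3 to 2.8 x 10^5 sites.
for length in (10**3, 10**4, 28 * 10**4):
    boot = pseudo_alignment(alignment, length, rng)
    print(boot.shape)  # feed each replicate to the tree-inference pipeline
```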
Abstract:
We conducted data-mining analyses of genome-wide association (GWA) studies of the CATIE and MGS-GAIN datasets, and found 13 markers in the two physically linked genes, PTPN21 and EML5, showing nominally significant association with schizophrenia. Linkage disequilibrium (LD) analysis indicated that all 7 markers from PTPN21 shared high LD (r² > 0.8), including rs2274736 and rs2401751, the two non-synonymous markers with the most significant association signals (rs2401751, P = 1.10 × 10⁻³ and rs2274736, P = 1.21 × 10⁻³). In a meta-analysis of all 13 replication datasets with a total of 13,940 subjects, we found that the two non-synonymous markers are significantly associated with schizophrenia (rs2274736, OR = 0.92, 95% CI: 0.86-0.97, P = 5.45 × 10⁻³ and rs2401751, OR = 0.92, 95% CI: 0.86-0.97, P = 5.29 × 10⁻³). One SNP (rs7147796) in EML5 is also significantly associated with the disease (OR = 1.08, 95% CI: 1.02-1.14, P = 6.43 × 10⁻³). These 3 markers remain significant after Bonferroni correction. Furthermore, haplotype-conditioned analyses indicated that the association signals observed between rs2274736/rs2401751 and rs7147796 are statistically independent. Given the results that 2 non-synonymous markers in PTPN21 are associated with schizophrenia, further investigation of this locus is warranted.
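For readers unfamiliar with how pooled figures such as OR = 0.92 (95% CI: 0.86-0.97) arise from 13 replication datasets, a minimal sketch of a standard fixed-effect (inverse-variance) meta-analysis follows; the per-study numbers are invented for illustration and this is not necessarily the exact method used in the paper:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical study-level odds ratios and standard errors of each log(OR).
or_i = np.array([0.90, 0.95, 0.88, 0.94])
se_i = np.array([0.05, 0.06, 0.07, 0.05])

# Fixed-effect pooling: weight each log(OR) by its inverse variance.
log_or = np.log(or_i)
w = 1.0 / se_i**2
pooled = np.sum(w * log_or) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))

# Back-transform to the OR scale for the point estimate and 95% CI,
# and compute a two-sided P value from the pooled z statistic.
ci = np.exp(pooled + np.array([-1.96, 1.96]) * se)
p = 2 * norm.sf(abs(pooled / se))
print(f"OR = {np.exp(pooled):.2f}, 95% CI: {ci[0]:.2f}-{ci[1]:.2f}, P = {p:.2e}")
```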