12 results for datasets

in Deakin Research Online - Australia


Relevance:

20.00%

Publisher:

Abstract:

The tree index structure is a traditional method for searching for similar data in large datasets. It relies on the assumption that most sub-trees are pruned during the search, which reduces the number of page accesses. However, time-series datasets generally have very high dimensionality, and because of the so-called dimensionality curse, pruning effectiveness drops in high dimensions. Consequently, the tree index structure is not a suitable method for time-series datasets. In this paper, we propose a two-phase (filtering and refinement) method for searching time-series datasets. In the filtering step, a quantized time series is used to construct a compact file which is scanned to filter out irrelevant sequences. The small set of surviving candidates is passed to the second step for refinement. In this step, we introduce an effective index compression method named grid-based datawise dimensionality reduction (DRR) which attempts to preserve the characteristics of the time series. An experimental comparison with existing techniques demonstrates the utility of our approach.
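
A minimal sketch of the filter-and-refine idea described in this abstract. The quantization scheme, coarse distance and candidate-set size below are illustrative assumptions, not the paper's DRR method:

```python
import numpy as np

def quantize(series, n_levels=8):
    """Coarsely quantize a series into small integer codes; these codes
    stand in for the compact 'filter' file that is cheap to scan."""
    lo, hi = series.min(), series.max()
    return np.floor((series - lo) / (hi - lo + 1e-12) * n_levels).astype(np.int8)

def filter_and_refine(query, dataset, k=5, candidate_factor=4):
    """Phase 1: scan cheap quantized codes to shortlist candidates.
    Phase 2: compute exact Euclidean distance only on the shortlist.
    Note: a production filter must lower-bound the true distance to
    avoid false dismissals; this coarse score does not guarantee that."""
    q_code = quantize(query)
    coarse = sorted((np.abs(quantize(s) - q_code).sum(), i)
                    for i, s in enumerate(dataset))
    candidates = [i for _, i in coarse[:k * candidate_factor]]
    exact = [(np.linalg.norm(dataset[i] - query), i) for i in candidates]
    return sorted(exact)[:k]
```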

Relevance:

20.00%

Publisher:

Abstract:

Microarray data provides quantitative information about the transcription profile of cells. To analyze microarray datasets, bioinformatics researchers have increasingly turned to machine learning methodology, and several machine learning approaches are widely used to classify and mine biological datasets. However, many gene expression datasets have extremely high dimensionality, so traditional machine learning methods cannot be applied effectively and efficiently. This paper proposes a robust algorithm for finding rule groups to classify gene expression datasets. Unlike most classification algorithms, which select dimensions (genes) heuristically to form rule groups that identify classes such as cancerous and normal tissues, our algorithm guarantees finding the best k dimensions (genes), which are the most discriminative for separating samples of different classes, to form rule groups for the classification of expression datasets. Our experiments show that the rule groups obtained by our algorithm achieve higher accuracy than other classification approaches.
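
A hedged sketch of selecting the k most discriminative genes for two classes. The t-like score below is a simple stand-in for the paper's discriminative criterion, and the data shown is hypothetical:

```python
import numpy as np

def best_k_genes(X, y, k=10):
    """Rank genes by a two-class t-like statistic and return the top-k
    column indices. (A stand-in score; the paper's exact criterion and
    its optimality guarantee differ.)"""
    pos, neg = X[y == 1], X[y == 0]
    diff = pos.mean(axis=0) - neg.mean(axis=0)
    spread = pos.std(axis=0) + neg.std(axis=0) + 1e-12
    scores = np.abs(diff) / spread
    return np.argsort(scores)[::-1][:k]

# Hypothetical usage: X is samples x genes, y labels each sample as
# cancerous (1) or normal (0) tissue.
X = np.random.rand(60, 2000)
y = np.random.randint(0, 2, 60)
print(best_k_genes(X, y, k=5))
```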

Relevance:

20.00%

Publisher:

Abstract:

Most real-world datasets are skewed to some degree. When they are also large, they pose a formidable challenge for data analysis. More importantly, we cannot ignore such datasets, as they arise frequently in a wide variety of applications. Whatever the analytic, its effectiveness can often be improved if the characteristics of the dataset are known in advance. In this paper, we propose a novel technique for preprocessing such datasets to obtain this insight. Our work is inspired by the resonance phenomenon, in which similar objects resonate to a given response function. The key analytic result of our work is the data terrain, which exposes properties of the dataset that enable effective and efficient analysis. We demonstrate our work in the context of various real-world problems and, in doing so, establish it as a tool for preprocessing data before applying computationally expensive algorithms.

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces a new type of discriminative subgraph pattern called the breaker emerging subgraph pattern, defined via three constraints and two new concepts: base and breaker. A breaker emerging subgraph pattern consists of three subpatterns: a constrained emerging subgraph pattern, a set of bases and a set of breakers. An efficient approach is proposed for discovering top-k breaker emerging subgraph patterns from graph datasets. Experimental results show that the approach efficiently discovers top-k breaker emerging subgraph patterns from given datasets and is more efficient than two previous methods for mining discriminative subgraph patterns. The discovered top-k breaker emerging subgraph patterns are more informative, more discriminative, more accurate and more compact than minimal distinguishing subgraph patterns, and are more useful for substructure analysis, such as molecular fragment analysis. © 2009, Australian Computer Society, Inc.
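
A rough sketch of the support-contrast idea behind emerging subgraph patterns: score a candidate pattern by how much more often it occurs in one class of graphs than the other. This is not the authors' mining algorithm, and the subgraph-isomorphism test `contains` is an assumed caller-supplied function:

```python
def support(pattern, graphs, contains):
    """Fraction of graphs containing the pattern; `contains(g, p)` is a
    subgraph-isomorphism test supplied by the caller (assumed)."""
    return sum(contains(g, pattern) for g in graphs) / len(graphs)

def top_k_emerging(patterns, pos_graphs, neg_graphs, contains, k=10, min_sup=0.2):
    """Score candidate subgraph patterns by support contrast (growth rate)
    between the two classes and keep the k most discriminative ones."""
    scored = []
    for p in patterns:
        s_pos = support(p, pos_graphs, contains)
        s_neg = support(p, neg_graphs, contains)
        if s_pos >= min_sup:
            scored.append((s_pos / (s_neg + 1e-9), p))  # growth rate
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:k]
```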

Relevance:

20.00%

Publisher:

Abstract:

Background: Efficient and reliable surveillance and notification systems are vital for monitoring public health and disease outbreaks. However, most surveillance and notification systems are affected by a degree of underestimation (UE), so uncertainty surrounds the 'true' incidence of disease, affecting morbidity and mortality rates. Surveillance systems fail to capture cases at two distinct levels of the surveillance pyramid: in the community, since not all cases seek healthcare (under-ascertainment), and at the healthcare level, where symptomatic cases that have sought medical advice are not adequately reported (underreporting). There are several methods to estimate the extent of under-ascertainment and underreporting. Methods: Within the context of the ECDC-funded Burden of Communicable Diseases in Europe (BCoDE) project, an extensive literature review was conducted to identify studies that estimate ascertainment or reporting rates for salmonellosis and campylobacteriosis in European Union Member States (MS) plus the European Free Trade Association (EFTA) countries Iceland, Norway and Switzerland, and four other OECD countries (USA, Canada, Australia and Japan). Multiplication factors (MFs), a measure of the magnitude of underestimation, were taken directly from the literature or derived (where the proportion of underestimated, under-ascertained, or underreported cases was known) and compared for the two pathogens. Results: MFs varied between and within diseases and countries, demonstrating the need to carefully select the most appropriate MFs and methods for calculating them. The most appropriate MFs are often disease-, country-, age-, and sex-specific. Conclusions: When routine data are used to make decisions on resource allocation or to estimate epidemiological parameters in populations, it becomes important to understand when, where and to what extent these data represent the true picture of disease, and in some instances (such as priority setting) it is necessary to adjust for underestimation. MFs can be used to adjust notification and surveillance data to provide more realistic estimates of incidence. © 2014 Gibbons et al.; licensee BioMed Central Ltd.
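
The adjustment itself is simple arithmetic: multiply notified cases by the MF. A minimal sketch with hypothetical numbers (the MF value below is illustrative, not taken from the study):

```python
def adjust_incidence(notified_cases, multiplication_factor):
    """Estimate 'true' incidence by scaling notified cases with an MF."""
    return notified_cases * multiplication_factor

# Hypothetical example: 1,000 notified salmonellosis cases and an
# illustrative disease- and country-specific MF of 7 would imply
# roughly 7,000 community cases.
print(adjust_incidence(1000, 7))  # -> 7000
```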

Relevance:

20.00%

Publisher:

Abstract:

In big data analysis, frequent itemset mining plays a key role in mining associations, correlations and causality. Since some traditional frequent itemset mining algorithms cannot handle massive small-file datasets effectively, suffering from high memory cost, high I/O overhead and low computing performance, we propose a novel parallel frequent itemset mining algorithm based on the FP-Growth algorithm and discuss its applications in this paper. First, we introduce a small-file processing strategy for massive small-file datasets to compensate for Hadoop's low read-write speed and low processing efficiency on small files. Moreover, we use MapReduce to redesign the FP-Growth algorithm for parallel computing, thereby improving the overall performance of frequent itemset mining. Finally, we apply the proposed algorithm to association analysis of data from the national college entrance examination and admission of China. The experimental results show that the proposed algorithm is feasible, achieves good speedup and higher mining efficiency, and can meet the practical requirements of frequent itemset mining on massive small-file datasets. © 2014 ISSN 2185-2766.
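
A simplified map/reduce-style sketch of parallel frequent itemset counting, using Python multiprocessing rather than the authors' Hadoop/MapReduce FP-Growth implementation; partitions, itemset size and threshold are illustrative assumptions:

```python
from collections import Counter
from itertools import combinations
from multiprocessing import Pool

def map_count(transactions):
    """Map step: count itemsets of size <= 2 within one data partition."""
    c = Counter()
    for t in transactions:
        items = sorted(set(t))
        c.update(items)                    # size-1 itemsets
        c.update(combinations(items, 2))   # size-2 itemsets
    return c

def parallel_frequent_itemsets(partitions, min_support=2):
    """Reduce step: merge per-partition counts, then apply the support
    threshold. (A simplified stand-in for the paper's parallel FP-Growth.)"""
    with Pool() as pool:
        counts = pool.map(map_count, partitions)
    total = sum(counts, Counter())
    return {k: v for k, v in total.items() if v >= min_support}

if __name__ == "__main__":
    parts = [[["a", "b"], ["a", "c"]], [["a", "b"], ["b", "c"]]]
    print(parallel_frequent_itemsets(parts))
```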

Relevance:

20.00%

Publisher:

Abstract:

This paper provides a novel method, Exceptional Object Analysis for Finding Rare Environmental Events (EOAFREE). The major contribution of our EOAFREE method is a general Improved Exceptional Object Analysis based on Noises (IEOAN) algorithm that efficiently detects and ranks exceptional objects. Our IEOAN algorithm is more general than existing outlier detection algorithms, as it can find exceptional objects that may not lie on the border, and our experimental study shows that it is far more efficient than recursively applying existing clustering algorithms that do not force every data instance to belong to a cluster in order to detect rare events. Another contribution is an approach to preprocessing heterogeneous real-world data by exploiting domain knowledge: changes, rather than the raw water data values themselves, are defined as the input to the IEOAN algorithm, removing the geographical differences between any two sites and the temporal differences between any two years. The effectiveness of our EOAFREE method is demonstrated by a real-world application: detecting water pollution events in the water quality datasets of 93 sites distributed across 10 river basins in Victoria, Australia between 1975 and 2010. © 2012 Elsevier B.V.
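
A minimal sketch of the preprocessing idea: feed year-over-year changes per site, not raw values, into the detector. The z-score flag is a simple stand-in for the IEOAN algorithm, and the readings are hypothetical:

```python
import numpy as np

def to_changes(readings):
    """Per-site year-over-year changes, removing site-level baselines."""
    return {site: np.diff(vals) for site, vals in readings.items()}

def flag_exceptional(changes, z_thresh=3.0):
    """Flag changes far from the pooled mean; a plain z-score stand-in
    for the paper's IEOAN exceptional-object analysis."""
    pooled = np.concatenate(list(changes.values()))
    mu, sigma = pooled.mean(), pooled.std() + 1e-12
    return {site: np.where(np.abs((c - mu) / sigma) > z_thresh)[0]
            for site, c in changes.items()}

# Hypothetical readings: site -> yearly water quality values
readings = {"site_A": np.array([1.0, 1.1, 5.0, 1.2]),
            "site_B": np.array([2.0, 2.1, 2.0, 2.2])}
print(flag_exceptional(to_changes(readings), z_thresh=1.5))
```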

Relevance:

20.00%

Publisher:

Abstract:

The massive computation power and storage capacity of cloud computing systems enable users either to store large generated scientific datasets in the cloud or to delete and then regenerate them whenever they are reused. Under the pay-as-you-go model, the more datasets we store, the more storage cost we pay; alternatively, we can delete some generated datasets to save storage cost, but more computation cost is incurred for regeneration whenever the datasets are reused. Hence, there exists a trade-off between computation and storage in the cloud, where different storage strategies lead to different total costs. The minimum cost, which reflects the best trade-off, is an important benchmark for evaluating the cost-effectiveness of different storage strategies. However, the current benchmarking approach is neither efficient nor practical to apply on the fly at runtime. In this paper, we propose a novel partitioned-solution-space-based approach with efficient algorithms for dynamic yet practical on-the-fly minimum cost benchmarking of storing generated datasets in the cloud. In this approach, we pre-calculate all the possible minimum cost storage strategies and save them in different partitioned solution spaces. The minimum cost storage strategy represents the minimum cost benchmark, and whenever the dataset storage cost changes at runtime in the cloud (e.g. new datasets are generated and/or existing datasets' usage frequencies change), our algorithms can efficiently retrieve the current minimum cost storage strategy from the partitioned solution space and update the benchmark. By dynamically keeping the benchmark up to date, our approach can be practically utilised on the fly at runtime in the cloud, so that the minimum cost benchmark can be either proactively reported or instantly returned upon request. Case studies and experimental results based on the Amazon cloud show the efficiency, scalability and practicality of our approach.
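
The core trade-off for a single dataset reduces to simple arithmetic: store it if the storage cost accumulated between uses is below the regeneration cost. A minimal sketch with hypothetical prices (the paper's benchmark optimises this jointly over chains of derived datasets):

```python
def cheaper_to_store(storage_cost_per_month, regeneration_cost, usage_interval_months):
    """Return True if keeping the dataset stored between uses costs less
    than regenerating it on each reuse."""
    return storage_cost_per_month * usage_interval_months < regeneration_cost

# Hypothetical numbers: $2.30/month to store, $15 to regenerate, dataset
# reused every 4 months -> storing ($9.20) beats regenerating ($15).
print(cheaper_to_store(2.30, 15.0, 4))  # True
```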

Relevance:

20.00%

Publisher:

Abstract:

The proliferation of cloud computing allows users to flexibly store, re-compute or transfer large generated datasets with multiple cloud service providers. However, under the pay-as-you-go model, the total cost of using cloud services depends on the consumption of storage, computation and bandwidth resources, the three key cost factors for IaaS-based cloud resources. To reduce the total cost for data, given cloud service providers with different pricing models for their resources, users can flexibly choose a cloud service to store a generated dataset, or delete it and choose a cloud service to regenerate it whenever it is reused. However, finding the minimum cost is a complicated and hitherto unsolved problem. In this paper, we propose a novel algorithm that can calculate the minimum cost for storing and regenerating datasets in clouds, i.e. whether datasets should be stored or deleted, and furthermore where to store or regenerate them whenever they are reused. This minimum cost also achieves the best trade-off among computation, storage and bandwidth costs in multiple clouds. Comprehensive analysis and rigorous theorems guarantee the theoretical soundness of the paper, and general (random) simulations conducted with popular cloud service providers' pricing models demonstrate the excellent performance of our approach.
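
In the multi-cloud setting the per-dataset choice expands to one option per provider per action, with bandwidth added to the regeneration side. A hedged sketch with hypothetical pricing models (the paper solves this jointly for datasets with provenance dependencies; this only compares options for one dataset):

```python
def min_cost_option(providers, usage_interval_months):
    """Pick the cheapest per-use option across providers: either keep the
    dataset stored, or delete it and regenerate on reuse (compute cost
    plus outbound transfer). Illustrative only."""
    options = []
    for name, p in providers.items():
        options.append((p["storage_per_month"] * usage_interval_months,
                        f"store on {name}"))
        options.append((p["compute"] + p["transfer_out"],
                        f"regenerate on {name}"))
    return min(options)

# Hypothetical pricing models for two providers
providers = {"cloud_A": {"storage_per_month": 2.0, "compute": 9.0, "transfer_out": 1.5},
             "cloud_B": {"storage_per_month": 3.0, "compute": 6.0, "transfer_out": 4.0}}
print(min_cost_option(providers, 4))  # -> (8.0, 'store on cloud_A')
```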