203 results for Large Data


Relevance: 30.00%

Abstract:

Monitoring and assessing environmental health is becoming increasingly important as human activity and climate change place greater pressure on global biodiversity. Acoustic sensors provide the ability to collect data passively, objectively and continuously across large areas for extended periods of time. While these factors make acoustic sensors attractive as autonomous data collectors, there are significant issues associated with large-scale data manipulation and analysis. We present our current research into techniques for analysing large volumes of acoustic data effectively and efficiently. We provide an overview of a novel online acoustic environmental workbench and discuss a number of approaches to scaling the analysis of acoustic data: collaboration, manual, automatic and human-in-the-loop analysis.

Relevance: 30.00%

Abstract:

Background: The vast sequence divergence among different virus groups has presented a great challenge to alignment-based analysis of virus phylogeny. Because of the problems caused by uncertainty in alignment, existing tools for phylogenetic analysis based on multiple alignment cannot be directly applied to whole-genome comparison and phylogenomic studies of viruses. There has been a growing interest in alignment-free methods for phylogenetic analysis using complete genome data. Among the alignment-free methods, a dynamical language (DL) method proposed by our group has been applied successfully to the phylogenetic analysis of bacteria and chloroplast genomes.

Results: In this paper, the DL method is used to analyze the whole-proteome phylogeny of 124 large dsDNA viruses and 30 parvoviruses, two data sets with a large difference in genome size. The trees from our analyses are in good agreement with the latest classification of large dsDNA viruses and parvoviruses by the International Committee on Taxonomy of Viruses (ICTV).

Conclusions: The present method provides a new way of recovering the phylogeny of large dsDNA viruses and parvoviruses, as well as some insights into the affiliation of a number of unclassified viruses. In comparison, some alignment-free methods such as the CV Tree method can be used to recover the phylogeny of large dsDNA viruses, but they are not suitable for resolving the phylogeny of parvoviruses, which have a much smaller genome size.
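
The dynamical language (DL) method itself is not specified in this abstract, so the sketch below is not that method; it only illustrates the general alignment-free idea of comparing whole proteomes through k-mer composition rather than alignment. The toy sequences, the k-mer length and the cosine distance are all assumptions made for illustration (Python).

```python
from collections import Counter
import math

def kmer_freqs(seq, k=3):
    """Relative frequencies of overlapping k-mers in a protein sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def cosine_distance(fa, fb):
    """1 - cosine similarity between two sparse k-mer frequency vectors."""
    dot = sum(fa[x] * fb[x] for x in fa.keys() & fb.keys())
    na = math.sqrt(sum(v * v for v in fa.values()))
    nb = math.sqrt(sum(v * v for v in fb.values()))
    return 1.0 - dot / (na * nb)

# Toy "proteomes" (in practice, the concatenated protein sequences of each virus).
proteomes = {
    "virus_A": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "virus_B": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEAQ",
    "virus_C": "MSLLTEVETYVLSIIPSGPLKAEIAQRLEDVFA",
}
freqs = {name: kmer_freqs(seq) for name, seq in proteomes.items()}
names = list(proteomes)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(a, b, round(cosine_distance(freqs[a], freqs[b]), 4))
# Pairwise distances like these would feed a standard tree-building step
# (e.g. neighbour joining) to recover a phylogeny without any alignment.
```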

Relevance: 30.00%

Abstract:

This paper discusses the statistical analyses used to derive bridge live load models for Hong Kong from 10 years of weigh-in-motion (WIM) data. The statistical concepts required and the terminology adopted in the development of bridge live load models are introduced. The paper includes studies of representative vehicles drawn from the large volume of WIM data collected in Hong Kong. Load-affecting parameters such as gross vehicle weights, axle weights, axle spacings and the average daily number of trucks are first analyzed using various stochastic processes in order to obtain the mathematical distributions of these parameters. As a prerequisite to determining accurate bridge design loadings for Hong Kong, this study not only takes advantage of code formulation methods used internationally but also presents a new method for modelling the collected WIM data using a statistical approach.
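
The abstract does not say which distributions or fitting procedures were used, so the following is only a hedged sketch of the kind of step it describes: fitting a candidate distribution to one load-affecting parameter (gross vehicle weight) and reading off an upper-tail value. The lognormal choice, the synthetic data and the SciPy-based workflow are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic gross vehicle weights (tonnes) standing in for WIM records.
gvw = rng.lognormal(mean=np.log(20.0), sigma=0.35, size=5000)

# Fit a candidate distribution and check it against the sample.
shape, loc, scale = stats.lognorm.fit(gvw, floc=0)
ks_stat, p_value = stats.kstest(gvw, "lognorm", args=(shape, loc, scale))

print(f"fitted sigma={shape:.3f}, median GVW={scale:.1f} t")
print(f"KS statistic={ks_stat:.4f}, p-value={p_value:.3f}")

# A design-relevant quantity: an upper-tail characteristic value.
print(f"99.9th percentile GVW: {stats.lognorm.ppf(0.999, shape, loc, scale):.1f} t")
```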

Relevance: 30.00%

Abstract:

Background: Birth weight and length show seasonal fluctuations. Previous analyses of birth weight by latitude identified seemingly contradictory results, showing both 6- and 12-monthly periodicities in weight. The aims of this paper are twofold: (a) to explore seasonal patterns in a large Danish Medical Birth Register, and (b) to explore models based on seasonal exposures and a non-linear exposure-risk relationship.

Methods: Birth weights and lengths for over 1.5 million Danish singleton live births were examined for seasonality. We modelled seasonal patterns based on linear, U- and J-shaped exposure-risk relationships. We then added an extra layer of complexity by modelling weighted population-based exposure patterns.

Results: The Danish data showed clear seasonal fluctuations for both birth weight and birth length. A bimodal model best fits the data; however, the amplitude of the 6- and 12-month peaks changed over time. In the modelling exercises, U- and J-shaped exposure-risk relationships generate time series with both 6- and 12-month periodicities. Changing the weightings of the population exposure risks results in unexpected properties. A J-shaped exposure-risk relationship with a diminishing population exposure over time fitted the observed seasonal pattern in the Danish birth weight data.

Conclusion: In keeping with many other studies, Danish birth anthropometric data show complex and shifting seasonal patterns. We speculate that annual periodicities combined with non-linear exposure-risk models may underlie these findings. Understanding the nature of seasonal fluctuations can help generate candidate exposures.
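
To make the modelling argument concrete: a purely annual exposure passed through a non-linear (U- or J-shaped) exposure-risk curve yields an outcome series containing both 12- and 6-month components, because the quadratic part of the curve introduces the second harmonic of the annual cycle. The sketch below (Python) is a minimal illustration of that reasoning only; the quadratic risk curve and its coefficients are assumptions, not the paper's fitted model.

```python
import numpy as np

months = np.arange(120)                       # ten years at monthly resolution
exposure = np.cos(2 * np.pi * months / 12.0)  # purely annual exposure cycle

# J-shaped exposure-risk relationship: risk rises steeply at one extreme.
risk = 0.2 * exposure + 0.8 * exposure ** 2   # quadratic term adds a 6-month harmonic

# Inspect the spectrum of the resulting outcome series.
spectrum = np.abs(np.fft.rfft(risk - risk.mean()))
freqs = np.fft.rfftfreq(len(months), d=1.0)   # cycles per month
periods = 1.0 / freqs[1:]                     # skip the zero-frequency bin

for period, power in sorted(zip(periods, spectrum[1:]), key=lambda t: -t[1])[:2]:
    print(f"period = {period:4.1f} months, power = {power:.1f}")
# Expected: the two largest peaks sit at 6.0 and 12.0 months, even though the
# underlying exposure varies only annually.
```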

Relevance: 30.00%

Abstract:

Objective: To determine whether primary care management of chronic heart failure (CHF) differed between rural and urban areas in Australia.

Design: A cross-sectional survey stratified by Rural, Remote and Metropolitan Areas (RRMA) classification. The primary source of data was the Cardiac Awareness Survey and Evaluation (CASE) study.

Setting: Secondary analysis of data obtained from 341 Australian general practitioners and 23 845 adults aged 60 years or more in 1998.

Main outcome measures: CHF determined by criteria recommended by the World Health Organization, diagnostic practices, use of pharmacotherapy, and CHF-related hospital admissions in the 12 months before the study.

Results: There was a significantly higher prevalence of CHF among general practice patients in large and small rural towns (16.1%) compared with capital city and metropolitan areas (12.4%) (P < 0.001). Echocardiography was used less often for diagnosis in rural towns compared with metropolitan areas (52.0% v 67.3%, P < 0.001). Rates of specialist referral were also significantly lower in rural towns than in metropolitan areas (59.1% v 69.6%, P < 0.001), as were prescribing rates of angiotensin-converting enzyme inhibitors (51.4% v 60.1%, P < 0.001). There was no geographical variation in prescribing rates of β-blockers (12.6% [rural] v 11.8% [metropolitan], P = 0.32). Overall, few survey participants received recommended “evidence-based practice” diagnosis and management for CHF (metropolitan, 4.6%; rural, 3.9%; and remote areas, 3.7%).

Conclusions: This study found a higher prevalence of CHF, and significantly lower use of recommended diagnostic methods and pharmacological treatment, among patients in rural areas.

Relevance: 30.00%

Abstract:

Effective implementation of an ISO 9001 Quality Management System (QMS) in construction companies requires the system to be implemented properly and in full, allowing companies to improve the way they operate and thereby increase profitability and market share, produce innovative and sustainable construction products, and improve employee and customer satisfaction. In light of this, this paper discusses the current status of QMS implementation, particularly in relation to the twenty elements of ISO 9001, within the grade 7 (G-7) category of Indonesian construction companies. A survey was conducted involving 403 respondents from 77 companies to evaluate the current implementation levels of the ISO 9001 elements. The findings indicate that a large proportion of the companies surveyed had ‘not so fully implemented’ the elements. Scrutiny of the data also identified elements that were only ‘minimally implemented’, while none of the elements fell into the ‘fully implemented’ category. Based on these findings, it is suggested that G-7 contractors may need to fully commit to practicing control of customer-supplied product and statistical techniques in order to enhance effective implementation of the ISO 9001 elements and ensure better quality performance. These two elements were found to be the least implemented of the quality elements.

Relevance: 30.00%

Abstract:

Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial-of-service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to rely predominantly on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality and for feature selection to find the most relevant subset of features from this candidate set. The review shows a trend toward deeper packet inspection to construct more relevant features through targeted content parsing. These context-sensitive features are required to detect current attacks.
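
As a hedged illustration of the "time-based statistics derived from packet headers" that the review describes (the kind of features used to flag scans, worms or denial-of-service behaviour), the sketch below aggregates simple per-source features over fixed time windows from header tuples. The record layout, the window size and the specific features are invented for illustration and are not taken from any particular NIDS (Python).

```python
from collections import defaultdict

# Minimal packet-header records: (timestamp, src_ip, dst_ip, dst_port, syn_flag)
packets = [
    (0.1, "10.0.0.5", "192.168.1.2", 80, True),
    (0.2, "10.0.0.5", "192.168.1.2", 81, True),
    (0.3, "10.0.0.5", "192.168.1.2", 82, True),
    (0.4, "10.0.0.9", "192.168.1.7", 443, True),
    (0.5, "10.0.0.9", "192.168.1.7", 443, False),
]

def window_features(packets, window=1.0):
    """Per-source features over fixed time windows: packet count,
    distinct destination ports, and SYN ratio (scan/DoS indicators)."""
    stats = defaultdict(lambda: {"pkts": 0, "ports": set(), "syns": 0})
    for ts, src, dst, port, syn in packets:
        key = (int(ts // window), src)   # (window index, source address)
        s = stats[key]
        s["pkts"] += 1
        s["ports"].add(port)
        s["syns"] += int(syn)
    return {
        key: {"pkts": s["pkts"],
              "distinct_ports": len(s["ports"]),
              "syn_ratio": s["syns"] / s["pkts"]}
        for key, s in stats.items()
    }

for (win, src), feats in window_features(packets).items():
    print(win, src, feats)
```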

Relevance: 30.00%

Abstract:

This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals’ seldom-studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it becomes habitual and involves minimal intention or decision making. The key variables investigated are the activity initiation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., a voice call with its duration in seconds); the research focuses on customers’ spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), fitted using the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting, so that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals’ highly heterogeneous and spiky spatial usage behaviour, or more generally human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which corresponds to how each of them uses the products/services spatially in their daily lives; this essentially reflects their likely lifestyle and occupational traits. Other significant research contributions include fitting GMMs using VB to circular data (i.e., the temporal usage behaviour) and developing clustering algorithms suitable for high-dimensional data based on the use of VB-GMM.
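
The thesis extends the standard VB-GMM with component splitting; that extension is not reproduced here. As a minimal sketch of the baseline step it builds on (fitting a variational Bayesian Gaussian mixture to two-dimensional location data and letting surplus components be driven to negligible weight), scikit-learn's BayesianGaussianMixture can be used as below; the synthetic "cell tower" coordinates and the 0.01 weight threshold are assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Synthetic spatial usage data: visits concentrated around three locations
# (e.g. home, work, weekend area), standing in for cell-tower coordinates.
centres = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 6.0]])
points = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(n, 2))
    for c, n in zip(centres, [300, 150, 50])
])

# Variational Bayesian GMM: start with more components than needed and let
# the Dirichlet-process prior drive the weights of surplus components to ~0.
model = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(points)

effective = int(np.sum(model.weights_ > 0.01))
print("effective number of components:", effective)
print("component weights:", np.round(model.weights_, 3))
```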

Relevance: 30.00%

Abstract:

Research attention to projects that involve outside partners has increased rapidly in recent years. Our knowledge of such inter-organizational projects, however, is limited. This paper reports large-scale data from a repeated trend survey among 2000 SMEs in 2006 and 2009 that focused on inter-organizational project ventures. Our major findings indicate that the overall prevalence of inter-organizational project ventures remained significant and stable over time, despite the economic crisis. Moreover, we find that these ventures predominantly solve repetitive rather than unique tasks and are embedded in prior relations between the partnering organizations. These findings provide empirical support for recent claims that project management should pay more attention to inter-organizational forms of project organization, and suggest that the archetypical view of projects as being unique in every respect should be reconsidered. Both have important implications for project management, especially in the area of project-based learning.

Relevance: 30.00%

Abstract:

In response to the need to leverage private finance and the lack of competition in some parts of the Australian public sector major infrastructure market, especially in very large economic infrastructure procured using Public Private Partnerships, the Australian Federal government has demonstrated its desire to attract new sources of in-bound foreign direct investment (FDI) into the Australian construction market. This paper reports on progress towards an investigation into the determinants of multinational contractors’ willingness to bid for Australian public sector major infrastructure projects, an investigation designed to give an improved understanding of matters surrounding FDI into the Australian construction sector. This research deploys Dunning’s eclectic theory for the first time in terms of in-bound FDI by multinational contractors acting as head contractors bidding for Australian major infrastructure public sector projects. Elsewhere, the authors have developed Dunning’s principal hypothesis associated with his eclectic framework to suit the context of this research and to address a weakness in that hypothesis: it is based on a nominal approach to the factors in the eclectic framework and fails to speak to their relative explanatory power. In this paper, an approach to reviewing and analysing secondary data, as part of the first-stage investigation in this research, is developed and illustrated with respect to the selected sector (roads, bridges and tunnels) in Australia (as the host location) and one of the selected home countries (Spain). In conclusion, some tentative thoughts are offered, in anticipation of the completion of the first-stage investigation, on the extent to which this first stage, based on secondary data only, might suggest the relative importance of the factors in the eclectic framework. More robust conclusions are expected from the future planned stages of the research, which will include primary data and which are briefly outlined. Finally, beyond the theoretical contributions expected from the overall approach taken to developing and testing Dunning’s framework, other expected contributions concerning research method and practical implications are mentioned.

Relevance: 30.00%

Abstract:

Researchers are increasingly involved in data-intensive research projects that cut across geographic and disciplinary borders. Quality research now often involves virtual communities of researchers participating in large-scale web-based collaborations, opening their early-stage research to the research community in order to encourage broader participation and accelerate discoveries. The result of such large-scale collaborations has been the production of ever-increasing amounts of data. In short, we are in the midst of a data deluge. Accompanying these developments has been a growing recognition that if the benefits of enhanced access to research are to be realised, it will be necessary to develop the systems and services that enable data to be managed and secured. It has also become apparent that achieving seamless access to data requires not only the adoption of appropriate technical standards, practices and architecture, but also the development of legal frameworks that facilitate access to and use of research data. This chapter provides an overview of the current research landscape in Australia as it relates to the collection, management and sharing of research data. The chapter then explains the Australian legal regimes relevant to data, including copyright, patent, privacy, confidentiality and contract law. Finally, it proposes the infrastructure elements required for the proper management of legal interests, ownership rights and rights to access and use data collected or generated by research projects.

Relevance: 30.00%

Abstract:

Data flow analysis techniques can be used to help assess threats to data confidentiality and integrity in security-critical program code. However, a fundamental weakness of static analysis techniques is that they overestimate the ways in which data may propagate at run time. Discounting large numbers of these false-positive data flow paths wastes an information security evaluator's time and effort. Here we show how to automatically eliminate some false-positive data flow paths by precisely modelling how classified data is blocked by certain expressions in embedded C code. We present a library of detailed data flow models of individual expression elements and an algorithm for introducing these components into conventional data flow graphs. The resulting models can be used to accurately trace byte-level or even bit-level data flow through expressions that are normally treated as atomic. This allows us to identify expressions that safely downgrade their classified inputs and thereby eliminate false-positive data flow paths from the security evaluation process. To validate the approach we have implemented and tested it in an existing data flow analysis toolkit.
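
The paper's expression models and toolkit are not reproduced here; the sketch below only illustrates the underlying idea of tracking which bits of a classified input can still influence the result of an expression, so that masking and shifting can be seen to block (downgrade) most of the data. The 8-bit width, the chosen operators and the example expression are assumptions (Python).

```python
WIDTH = 8  # model an 8-bit embedded C value

def taint_and(taint, const):
    """x & const: only bits kept by the constant can still carry tainted data."""
    return taint & const

def taint_shr(taint, n):
    """x >> n: tainted bits move toward the low end; vacated high bits are clean."""
    return (taint >> n) & ((1 << WIDTH) - 1)

def taint_or_const(taint, const):
    """x | const: bits forced to 1 by the constant no longer depend on x."""
    return taint & ~const & ((1 << WIDTH) - 1)

# A classified byte: all 8 bits initially tainted.
taint = 0b1111_1111

# Model the expression (secret >> 6) & 0x01, one element at a time.
taint = taint_shr(taint, 6)     # 0b0000_0011 - only two tainted bits remain
taint = taint_and(taint, 0x01)  # 0b0000_0001 - a single classified bit flows out

print(f"residual tainted bits: {taint:08b}")
# If the residual mask were zero, the expression would fully block the
# classified input and the corresponding data flow path could be discarded.
```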

Relevance: 30.00%

Abstract:

In response to the need to leverage private finance and the lack of competition in some parts of the Australian public sector infrastructure market, especially in the very large economic infrastructure sector procured using Public Private Partnerships, the Australian Federal government has demonstrated its desire to attract new sources of in-bound foreign direct investment (FDI). This paper reports on progress towards an investigation into the determinants of multinational contractors’ willingness to bid for Australian public sector major infrastructure projects. This research deploys Dunning’s eclectic theory for the first time in terms of in-bound FDI by multinational contractors into Australia. Elsewhere, the authors have developed Dunning’s principal hypothesis to suit the context of this research and to address a weakness in this hypothesis: it is based on a nominal approach to the factors in Dunning’s eclectic framework and fails to speak to the relative explanatory power of these factors. In this paper, a first-stage test of the authors’ development of Dunning’s hypothesis is presented by way of an initial review of secondary data for the selected sector (roads and bridges) in Australia (as the host location) and with respect to four selected home countries (China, Japan, Spain and the US). In doing so, the next stage in the research method, concerning sampling and case studies, is also further developed and described. In conclusion, the extent to which the initial review of secondary data suggests the relative importance of the factors in the eclectic framework is considered. More robust conclusions are expected from the future planned stages of the research, which include primary data from the case studies and a global survey of the world’s largest contractors, briefly previewed here. Finally, beyond the theoretical contributions expected from the overall approach taken to developing and testing Dunning’s framework, other expected contributions concerning research method and practical implications are mentioned.

Relevance: 30.00%

Abstract:

In the medical and healthcare arena, patients’ data is not just their own personal history but also a valuable large dataset for finding solutions for diseases. While electronic medical records are becoming popular and are used in healthcare workplaces such as hospitals, as well as by insurance companies and major stakeholders such as physicians and their patients, the accessibility of such information should be handled in a way that preserves privacy and security. Finding the best way to keep the data secure has therefore become an important issue in the area of database security, and sensitive medical data should be encrypted in databases. There are many encryption/decryption techniques and algorithms for preserving privacy and security, and their performance is an important factor when medical data is managed in databases. Another important factor is that stakeholders need cost-effective ways to reduce the total cost of ownership. As an alternative, DAS (Data as Service) is a popular outsourcing model that satisfies cost-effectiveness, but it requires that the encryption/decryption modules be handled by trustworthy stakeholders. This research project focuses on query response times in a DAS model (AES-DAS) and compares the outsourcing model with an in-house model that incorporates the Microsoft built-in encryption scheme in SQL Server. The project includes building a prototype of medical database schemas, and two stages of simulation were carried out. The first stage uses six databases to measure performance across plain text, Microsoft built-in encryption and AES-DAS. In particular, AES-DAS incorporates symmetric-key encryption using AES (Advanced Encryption Standard) and a bucket-indexing processor using a Bloom filter. The results are categorised by character-typed data, numeric-typed data, range queries, range queries using the bucket index, and aggregate queries. The second stage is a scalability test from 5K to 2560K records. The main result of these simulations is that, as an outsourcing model, AES-DAS using the bucket index is around 3.32 times faster than a plain AES-DAS for databases with 70 partitions and 10K records. Retrieving numeric-typed data takes less time than retrieving character-typed data in AES-DAS. The aggregate query response time in AES-DAS is not as consistent as that of the Microsoft built-in encryption scheme. The scalability test shows that once the DBMS reaches a certain threshold, query response times degrade rapidly. However, further investigation is needed to extend these outcomes and to construct a secure EMR (Electronic Medical Record) system more efficiently on the basis of these simulations.
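
The prototype's schemas and Bloom-filter bucket processor are not given in the abstract, so the sketch below only shows the bucket-indexing idea that AES-DAS-style querying relies on: the server stores a coarse plaintext bucket label next to each encrypted value, pre-filters rows for a range query by bucket, and the client decrypts only the candidates and applies the exact predicate. It uses the `cryptography` package's Fernet recipe (AES-128-CBC plus HMAC) rather than raw AES, and the bucket width, field names and key handling are assumptions.

```python
from cryptography.fernet import Fernet  # Fernet = AES-128-CBC + HMAC, standing in for raw AES

BUCKET_WIDTH = 10  # e.g. group a numeric attribute into buckets 10 units wide

def bucket_of(value):
    """Coarse bucket label stored in plaintext alongside the ciphertext."""
    return value // BUCKET_WIDTH

key = Fernet.generate_key()  # held by the trusted client, never by the DAS provider
f = Fernet(key)

# "Outsourced" table: the server sees only ids, bucket labels and ciphertexts.
records = [("p01", 118), ("p02", 131), ("p03", 142), ("p04", 95)]
server_rows = [(pid, bucket_of(v), f.encrypt(str(v).encode())) for pid, v in records]

# Range query: attribute value in [120, 140).
lo, hi = 120, 140
wanted = set(range(bucket_of(lo), bucket_of(hi - 1) + 1))

# Step 1 (server side): cheap pre-filter on the plaintext bucket labels.
candidates = [row for row in server_rows if row[1] in wanted]

# Step 2 (client side): decrypt only the candidates and apply the exact predicate.
matches = []
for pid, _, ct in candidates:
    value = int(f.decrypt(ct).decode())
    if lo <= value < hi:
        matches.append((pid, value))
print(matches)  # -> [('p02', 131)]
```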

Relevance: 30.00%

Abstract:

Monitoring environmental health is becoming increasingly important as human activity and climate change place greater pressure on global biodiversity. Acoustic sensors provide the ability to collect data passively, objectively and continuously across large areas for extended periods. While these factors make acoustic sensors attractive as autonomous data collectors, there are significant issues associated with large-scale data manipulation and analysis. We present our current research into techniques for analysing large volumes of acoustic data efficiently. We provide an overview of a novel online acoustic environmental workbench and discuss a number of approaches to scaling the analysis of acoustic data: online collaboration, manual, automatic and human-in-the-loop analysis.