1000 results for Chorological data


Relevance:

20.00%

Publisher:

Abstract:

Increasingly large-scale applications are generating unprecedented amounts of data. However, the widening gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs much more computing resource contention with simulations, and such contention severely degrades simulation performance on HEC machines. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the placement of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path and, to find the best strategy for reducing data movement in a given situation, propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system is developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, and large-scale data transfer. Two use cases – scientific data compression and remote visualization – have been applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transfer bandwidth and improves application end-to-end transfer performance.
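
The abstract does not spell out the placement logic itself; the following is a minimal, hypothetical sketch of the kind of cost comparison an analytics-placement framework along the I/O path might perform. All function names, bandwidth figures, and the cost model are illustrative assumptions, not the actual FlexAnalytics implementation.

```python
# Hypothetical sketch of a data-analytics placement decision along the I/O path.
# Compares analyzing (and thereby reducing) data before a transfer versus moving
# the raw data first and analyzing at the destination.

def transfer_time(data_gb: float, bandwidth_gbps: float) -> float:
    """Seconds to move data_gb gigabytes over a link of bandwidth_gbps GB/s."""
    return data_gb / bandwidth_gbps

def placement_cost(data_gb, reduction, analysis_s, link_gbps, analyze_first):
    """Total seconds for one output step, analyzing before or after the transfer."""
    if analyze_first:  # in-situ / in-transit: shrink the data, then move it
        return analysis_s + transfer_time(data_gb * reduction, link_gbps)
    return transfer_time(data_gb, link_gbps) + analysis_s  # analyze at destination

if __name__ == "__main__":
    data_gb, reduction, analysis_s = 100.0, 0.1, 20.0  # assumed workload figures
    for name, link_gbps in [("compute->staging", 5.0), ("staging->storage", 1.0)]:
        in_situ = placement_cost(data_gb, reduction, analysis_s, link_gbps, True)
        post = placement_cost(data_gb, reduction, analysis_s, link_gbps, False)
        best = "analyze first" if in_situ < post else "move first"
        print(f"{name}: in-situ {in_situ:.1f}s vs post-move {post:.1f}s -> {best}")
```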

Relevance:

20.00%

Publisher:

Abstract:

A combined data matrix consisting of high performance liquid chromatography–diode array detector (HPLC–DAD) and inductively coupled plasma–mass spectrometry (ICP-MS) measurements of samples from the plant roots of Cortex moutan (CM) produced much better classification and prediction results than either of the individual data sets. The HPLC peaks (organic components) of the CM samples and the ICP-MS measurements (trace metal elements) were investigated using principal component analysis (PCA) and linear discriminant analysis (LDA); essentially, these qualitative results suggested that discrimination of the CM samples from three different provinces was possible, with the combined matrix producing the best results. Three further methods, K-nearest neighbors (KNN), back-propagation artificial neural network (BP-ANN) and least squares support vector machines (LS-SVM), were applied for the classification and prediction of the samples. Again, the combined data matrix analyzed by the KNN method produced the best results (100% correct on the prediction set). Additionally, multiple linear regression (MLR) was used to explore relationships between the organic constituents and the metal elements of the CM samples; the extracted linear regression equations showed that the essential metals, as well as some metallic pollutants, were related to the organic compounds on the basis of their concentrations.
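
As an illustration of the data-fusion idea, here is a hedged sketch in Python (scikit-learn) that concatenates two feature matrices and compares classification accuracy on each block alone versus the combined matrix. The data are synthetic stand-ins, and PCA followed by KNN is only one of the method combinations the study examined.

```python
# Low-level data fusion for classification: concatenate the chromatographic
# (HPLC-DAD) and elemental (ICP-MS) feature matrices, then classify.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60                                         # assumed sample count
y = np.repeat([0, 1, 2], n // 3)               # three provinces
hplc = rng.normal(y[:, None], 1.0, (n, 40))    # synthetic HPLC peak areas
icpms = rng.normal(y[:, None], 1.0, (n, 15))   # synthetic trace-element levels

for name, X in [("HPLC only", hplc), ("ICP-MS only", icpms),
                ("combined", np.hstack([hplc, icpms]))]:
    model = make_pipeline(StandardScaler(), PCA(n_components=10),
                          KNeighborsClassifier(n_neighbors=3))
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: CV accuracy = {acc:.2f}")
```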

Relevance:

20.00%

Publisher:

Abstract:

Large power transformers are important parts of the power supply chain. These critical networks of engineering assets form an essential base of a nation's energy resource infrastructure. This research identifies the key factors influencing normal transformer operating conditions and predicts asset lifespan for asset management. Engineering asset research has developed few lifespan forecasting methods that combine real-time monitoring solutions for transformer maintenance and replacement. Utilizing the rich data source of a remote terminal unit (RTU) system for sensor-data-driven analysis, this research develops an innovative real-time lifespan forecasting approach that applies logistic regression based on the Weibull distribution. The methodology and an implementation prototype are verified using a data series from 161 kV transformers to evaluate their efficiency and accuracy for energy sector applications. Asset stakeholders and suppliers benefit significantly from real-time power transformer lifespan evaluation as support for maintenance and replacement decisions.
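
The paper's own model couples logistic regression with the Weibull distribution; as a rough, hypothetical sketch of the Weibull side only, the fragment below fits shape and scale parameters to synthetic failure ages and derives the survival and hazard figures that a lifespan forecast would build on. All numbers are invented.

```python
# Weibull-based lifespan estimation from failure-age data (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
failure_ages = stats.weibull_min.rvs(c=3.2, scale=40.0, size=200, random_state=rng)

# Fit shape (c) and scale; location fixed at 0, as for a lifetime distribution.
shape, loc, scale = stats.weibull_min.fit(failure_ages, floc=0)

age = 30.0  # assumed years in service for a unit under assessment
survival = stats.weibull_min.sf(age, shape, loc, scale)
hazard = stats.weibull_min.pdf(age, shape, loc, scale) / survival
print(f"shape={shape:.2f}, scale={scale:.1f} years")
print(f"P(survive past {age:.0f}y) = {survival:.2f}, hazard = {hazard:.3f}/year")
```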

Relevance:

20.00%

Publisher:

Abstract:

Objective: To identify the occupational risks for Australian paramedics by describing the rates of injuries and fatalities and comparing those rates with other reports. Design and participants: Retrospective descriptive study using data provided by Safe Work Australia for the period 2000–2010. The subjects were paramedics who had been injured in the course of their duties and for whom a claim had been made for workers compensation payments. Main outcome measures: Rates of injury calculated from the data provided. Results: The risk of serious injury among Australian paramedics was found to be more than seven times higher than the Australian national average. The fatality rate for paramedics was about six times higher than the national average. On average, every two years during the study period, one paramedic died and 30 were seriously injured in vehicle crashes. Ten Australian paramedics were seriously injured each year as a result of an assault. The injury rate for paramedics was more than twice that for police officers. Conclusions: The high rate of occupational injuries and fatalities among paramedics is a serious public health issue. The risk of injury in Australia is similar to that in the United States. While it may be anticipated that injury rates would be higher as a result of the nature of paramedics' work and environment, further research is necessary to identify and validate the strategies required to minimise occupational injury rates for paramedics.

Relevance:

20.00%

Publisher:

Abstract:

Animal models of critical illness are vital in biomedical research. They provide possibilities for investigating pathophysiological processes that may not otherwise be possible in humans. In order to be clinically applicable, a model should simulate the critical care situation realistically, including anaesthesia, monitoring, sampling, an appropriate personnel skill mix, and therapeutic interventions. There are limited data documenting what constitutes an ideal, technologically advanced large-animal critical care practice and the full set of processes involved in such animal models. In this paper, we describe the procedures for animal preparation, anaesthesia induction and maintenance, physiological monitoring, data capture, point-of-care technology, and animal aftercare that have been used successfully to study several novel ovine models of critical illness. The relevant investigations cover respiratory failure due to smoke inhalation, transfusion-related acute lung injury, endotoxin-induced proteogenomic alterations, haemorrhagic shock, septic shock, brain death, cerebral microcirculation, and artificial heart studies. We have demonstrated the functionality of the monitoring practices during anaesthesia required to provide a platform for systematic investigations in complex ovine models of critical illness.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present time dispersion parameters obtained from a set of channel measurements conducted in various environments typical of multiuser Infostation application scenarios. The measurement procedure takes into account practical scenarios for the positions and movements of users in the particular Infostation network. To quantify how much data users can download over a given time and at a given mobile speed, a data transfer analysis for multiband orthogonal frequency division multiplexing (MB-OFDM) is presented. As expected, the rough estimate of simultaneous data transfer in a multiuser Infostation scenario indicates that the percentage of the download completed depends on the data size, the number and speed of the users, and the elapsed time.
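
The abstract reports time dispersion parameters without defining them; for reference, the sketch below computes the two standard ones, mean excess delay and RMS delay spread, from a power delay profile. The delay and power values are invented examples, not the paper's measurements.

```python
# Standard time-dispersion parameters from a measured power delay profile (PDP).
import numpy as np

delays_ns = np.array([0.0, 10.0, 25.0, 60.0, 120.0])   # path delays (ns)
powers_db = np.array([0.0, -3.0, -6.0, -12.0, -20.0])  # relative path powers (dB)

p = 10 ** (powers_db / 10)                 # linear power weights
mean_excess = np.sum(p * delays_ns) / np.sum(p)
second_moment = np.sum(p * delays_ns**2) / np.sum(p)
rms_delay_spread = np.sqrt(second_moment - mean_excess**2)

print(f"mean excess delay = {mean_excess:.1f} ns")
print(f"RMS delay spread  = {rms_delay_spread:.1f} ns")
# Common rule of thumb: coherence bandwidth (50% correlation) ~ 1/(5 * sigma_tau).
print(f"approx. coherence bandwidth = {1/(5*rms_delay_spread*1e-9)/1e6:.1f} MHz")
```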

Relevance:

20.00%

Publisher:

Abstract:

Rolling-element bearing failures are among the most frequent problems in rotating machinery; they can be catastrophic and cause major downtime. Hence, providing advance failure warning and precise fault detection for such components is pivotal and cost-effective. The vast majority of past research has focused on signal processing and spectral analysis for fault diagnostics in rotating components. In this study, a data mining approach using a machine learning technique called anomaly detection (AD) is presented. This method employs classification techniques to discriminate defective examples from normal ones. Two features, kurtosis and the Non-Gaussianity Score (NGS), are extracted to develop the anomaly detection algorithms. The performance of the developed algorithms was examined on real data from a bearing run-to-failure test. Finally, anomaly detection is compared with a popular method, the Support Vector Machine (SVM), to investigate the sensitivity and accuracy of this approach and its ability to detect anomalies at an early stage.
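
As a hedged illustration of the feature side of this approach, the sketch below computes kurtosis over synthetic vibration windows and flags windows exceeding a healthy baseline. The signal, the threshold rule, and all figures are invented stand-ins for the paper's trained AD algorithm, and the NGS feature is omitted.

```python
# Kurtosis-based anomaly flagging on synthetic bearing vibration windows.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(2)
n_windows, win = 50, 2048                      # assumed windowing setup

def window_signal(defective: bool) -> np.ndarray:
    """Gaussian noise; a defect adds sparse impulsive spikes (raises kurtosis)."""
    x = rng.normal(0, 1, win)
    if defective:
        idx = rng.choice(win, size=8, replace=False)
        x[idx] += rng.normal(0, 10, 8)         # impacts from a rolling-element defect
    return x

healthy = [kurtosis(window_signal(False)) for _ in range(n_windows)]
faulty = [kurtosis(window_signal(True)) for _ in range(n_windows)]

# Simple anomaly rule: flag windows whose kurtosis exceeds the healthy baseline.
threshold = np.mean(healthy) + 3 * np.std(healthy)
flags = sum(k > threshold for k in faulty)
print(f"threshold={threshold:.2f}; flagged {flags}/{n_windows} defective windows")
```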

Relevance:

20.00%

Publisher:

Abstract:

Network topology and routing are two important factors in determining the communication costs of big data applications at large scale. For a given Cluster, Cloud, or Grid (CCG) system, the network topology is fixed, and static or dynamic routing protocols are preinstalled to direct the network traffic; users cannot change them once the system is deployed. Hence, it is hard for application developers to identify the optimal network topology and routing algorithm for applications with distinct communication patterns. In this study, we design a CCG virtual system (CCGVS), which first uses container-based virtualization to allow users to create a farm of lightweight virtual machines on a single host, and then uses software-defined networking (SDN) to control the network traffic among these virtual machines. Users can change the network topology and control the network traffic programmatically, thereby enabling application developers to evaluate their applications on the same system with different network topologies and routing algorithms. Preliminary experimental results with both synthetic big data programs and the NPB benchmarks show that CCGVS can reproduce application performance variations caused by network topology and routing algorithms.
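
CCGVS itself is not described beyond this abstract; as an analogous, assumption-laden sketch, Mininet offers the same style of workflow (lightweight virtual hosts plus SDN-controlled links). The topology and link parameters below are arbitrary; this is not the CCGVS API.

```python
# Analogous workflow in Mininet: define a topology programmatically, start the
# virtual network, and measure the path under study. Requires root privileges.
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class TwoSwitchTopo(Topo):
    """Two hosts behind two switches, joined by a bandwidth-limited link."""
    def build(self):
        h1, h2 = self.addHost('h1'), self.addHost('h2')
        s1, s2 = self.addSwitch('s1'), self.addSwitch('s2')
        self.addLink(h1, s1)
        self.addLink(h2, s2)
        self.addLink(s1, s2, bw=10, delay='5ms')  # the inter-switch path under study

if __name__ == '__main__':
    net = Mininet(topo=TwoSwitchTopo(), link=TCLink)
    net.start()
    net.pingAll()                      # verify connectivity under this topology
    h1, h2 = net.get('h1', 'h2')
    print(net.iperf((h1, h2)))         # measure achievable bandwidth on the path
    net.stop()
```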

Relevance:

20.00%

Publisher:

Abstract:

Background: Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. Methods: We present a cross-validation approach for selecting between three imputation methods for health survey data with correlated lifestyle covariates, using as a case study type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation with imputation using multivariate normal and conditional autoregressive prior distributions. Results: The choice of imputation method depends on the application, and the best method is not necessarily the most complex one; mean imputation was selected as the most accurate method in this application. Conclusions: Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows a more complete analysis of geographic risk factors for disease, with more confidence in the results to inform public policy decision-making.
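
A minimal sketch of the cross-validation idea, assuming synthetic correlated covariates: known cells are masked, imputed back, and scored. scikit-learn's IterativeImputer stands in here for the study's Bayesian multivariate-normal imputation, and the conditional autoregressive (CAR) spatial prior is omitted entirely.

```python
# Cross-validation-style comparison of imputation methods on held-out cells.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer

rng = np.random.default_rng(3)
n, p = 71, 5                                  # e.g., 71 LGAs, 5 lifestyle covariates
latent = rng.normal(size=(n, 1))
X_true = latent + rng.normal(scale=0.3, size=(n, p))   # correlated covariates

mask = rng.random((n, p)) < 0.15              # hold out 15% of cells as "missing"
X_obs = np.where(mask, np.nan, X_true)

for name, imp in [("mean", SimpleImputer(strategy="mean")),
                  ("iterative (MVN-like)", IterativeImputer(random_state=0))]:
    X_hat = imp.fit_transform(X_obs)
    rmse = np.sqrt(np.mean((X_hat[mask] - X_true[mask]) ** 2))
    print(f"{name}: RMSE on held-out cells = {rmse:.3f}")
```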

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Many koala populations around Australia are in serious decline, with a substantial component of this decline in some Southeast Queensland populations attributed to the impact of Chlamydia. A Chlamydia vaccine for koalas is in development and has shown promise in early trials. This study contributes to implementation preparedness by simulating vaccination strategies designed to reverse population decline and by identifying which age and sex category would be most effective to target. METHODS: We used field data to inform the development and parameterisation of an individual-based stochastic simulation model of a koala population with endemic Chlamydia. The model took into account transmission, morbidity and mortality caused by Chlamydia infections, and was calibrated to the characteristics of typical Southeast Queensland koala populations. As there is uncertainty about the effectiveness of the vaccine in real-world settings, a variety of potential vaccine efficacies, half-lives and dosing schedules were simulated. RESULTS: Assuming other threats remain constant, current population declines could be reversed in around 5-6 years if female koalas aged 1-2 years are targeted, average vaccine protective efficacy is 75%, and vaccine coverage is around 10% per year. At lower vaccine efficacies the immunological effects of boosting become important: at 45% vaccine efficacy, population decline is predicted to reverse in 6 years under optimistic boosting assumptions but in 9 years under pessimistic ones. Terminating a successful vaccination programme after 5 years would lead to a rise in Chlamydia prevalence towards pre-vaccination levels. CONCLUSION: For a range of vaccine efficacy levels, it is projected that population decline due to endemic Chlamydia can be reversed under realistic dosing schedules, potentially in just 5 years. However, a vaccination programme might need to continue indefinitely to maintain Chlamydia prevalence at a level low enough for population growth to continue.
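
The calibrated individual-based model is far richer than can be shown here; as a toy, clearly hypothetical sketch of the mechanism only, the fragment below simulates annual birth and death draws for a population in which Chlamydia adds mortality and accumulating vaccine coverage removes part of that burden. Every rate is an invented placeholder.

```python
# Toy population sketch: vaccination gradually removes disease-attributable
# mortality. Coverage simply accumulates and vaccinated deaths are ignored,
# a deliberate simplification relative to the paper's model.
import numpy as np

rng = np.random.default_rng(4)

def simulate(years=10, n0=500, annual_coverage=0.10, efficacy=0.75):
    """Return the population size after each simulated year."""
    pop, covered, sizes = n0, 0.0, [n0]
    for _ in range(years):
        covered = min(1.0, covered + annual_coverage)   # coverage builds up
        shielded = covered * efficacy
        birth = 0.18
        death = 0.15 + 0.05 * (1 - shielded)            # Chlamydia adds mortality
        pop = max(0, pop + rng.poisson(birth * pop) - rng.poisson(death * pop))
        sizes.append(pop)
    return sizes

print("no vaccination:  ", simulate(annual_coverage=0.0)[-1])
print("10%/yr, 75% eff.:", simulate()[-1])
```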

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present implementation results for various algorithms that sort data encrypted with a fully homomorphic encryption (FHE) scheme over the integers. We analyze the complexities of sorting algorithms over encrypted data, considering Bubble Sort, Insertion Sort, Bitonic Sort and Odd-Even Merge Sort. Our complexity analysis, together with the implementation results, shows that Odd-Even Merge Sort performs better than the other sorting techniques. We observe that sorting in the homomorphic domain always incurs its worst-case complexity, independent of the nature of the input, since encrypted comparisons reveal nothing that would allow any step to be skipped. In addition, we show that combining different sorting algorithms to sort encrypted data gives no performance gain compared with applying the sorting algorithms individually.
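
To make the data-oblivious point concrete, here is a plaintext sketch of Batcher's odd-even merge sort: the sequence of compare-exchange operations is fixed in advance, which is why the homomorphic cost is the same for every input. Over FHE, each compare_swap would be evaluated as a homomorphic min/max circuit on ciphertexts; this version uses plain integers so the network itself can be tested.

```python
def compare_swap(a, i, j):
    # Over FHE this would be a homomorphic min/max circuit on two ciphertexts;
    # in this plaintext sketch it is an ordinary comparator.
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def oddeven_merge(a, lo, n, r):
    """Merge the two sorted halves of a[lo:lo+n] using comparators at stride r."""
    m = r * 2
    if m < n:
        oddeven_merge(a, lo, n, m)        # merge the even subsequence
        oddeven_merge(a, lo + r, n, m)    # merge the odd subsequence
        for i in range(lo + r, lo + n - r, m):
            compare_swap(a, i, i + r)
    else:
        compare_swap(a, lo, lo + r)

def oddeven_merge_sort(a, lo=0, n=None):
    """Batcher's odd-even merge sort; len(a) must be a power of two."""
    n = len(a) if n is None else n
    if n > 1:
        m = n // 2
        oddeven_merge_sort(a, lo, m)
        oddeven_merge_sort(a, lo + m, m)
        oddeven_merge(a, lo, n, 1)

data = [7, 3, 0, 5, 6, 1, 4, 2]
oddeven_merge_sort(data)
print(data)  # the same comparator sequence sorts every 8-element input
```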

Relevance:

20.00%

Publisher:

Abstract:

The recent trend for journals to require open access to the primary data included in publications has been embraced by many biologists, but has caused apprehension amongst researchers engaged in long-term ecological and evolutionary studies. A worldwide survey of 73 principal investigators (PIs) with long-term studies revealed positive attitudes towards sharing data with the agreement or involvement of the PI, and 93% of PIs had historically shared data. Only 8% were in favor of uncontrolled, open access to primary data, while 63% expressed serious concern. We present here their viewpoint on an issue that can have non-trivial scientific consequences. We discuss the potential costs of public data archiving and provide possible solutions to meet the needs of journals and researchers.

Relevance:

20.00%

Publisher:

Abstract:

Developing innovative library services requires a real-world understanding of faculty members' curricular goals. This study aimed to develop a comprehensive and deeper understanding of Purdue's nutrition science and political science faculties' expectations for student learning related to information literacy and data information literacy. Course syllabi were examined using grounded theory techniques, which not only allowed us to identify how faculty were addressing these literacies in their courses, but also enabled us to understand their interconnectedness with other departmental intentions for student learning, such as developing a professional identity or learning to conduct original research. The holistic understanding developed through this research provides the information necessary for designing and suggesting information literacy and data information literacy services to departmental faculty in ways that support curricular learning outcomes.

Relevance:

20.00%

Publisher:

Abstract:

Rapid advances in sequencing technologies (Next Generation Sequencing, or NGS) have led to a vast increase in the quantity of bioinformatics data available, and this increasing scale presents enormous challenges to researchers seeking to identify complex interactions. This paper is concerned with the domain of transcriptional regulation and the use of visualisation to identify relationships between specific regulatory proteins (the transcription factors, or TFs) and their associated target genes (TGs). We present preliminary work from an ongoing study that aims to determine the effectiveness of different visual representations and large-scale displays in supporting discovery. Following an iterative process of implementation and evaluation, the representations were tested by potential users in the bioinformatics domain to determine their efficacy and to better understand the range of ad hoc practices among bioinformatics-literate users. Results from two rounds of small-scale user studies are considered, with initial findings suggesting that bioinformaticians require richly detailed views of TF data, features for quickly comparing TF layouts between organisms, and ways to keep track of interesting data points.
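
The paper's visualisations are not reproduced here; as a minimal sketch of the underlying data structure such views must support, the fragment below builds a directed TF-to-TG graph and runs the kinds of queries an interactive representation would expose. The edges are invented examples, not real regulatory data.

```python
# Directed bipartite graph from transcription factors (TFs) to target genes (TGs).
import networkx as nx

G = nx.DiGraph()
edges = [("TF_A", "gene1"), ("TF_A", "gene2"),
         ("TF_B", "gene2"), ("TF_B", "gene3"), ("TF_C", "gene3")]
G.add_edges_from(edges)

# Simple queries a visual representation would need to answer interactively:
print("targets of TF_A:", sorted(G.successors("TF_A")))
print("regulators of gene2:", sorted(G.predecessors("gene2")))
print("out-degree per TF:", {tf: G.out_degree(tf) for tf in ("TF_A", "TF_B", "TF_C")})
```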