4 results for General-purpose computing on graphics processing units (GPGPU)
Abstract:
Graph analytics is an important and computationally demanding class of data analytics. Large-scale graph analytics must balance scalability, ease of use, and high performance; as such, it is necessary to hide the complexity of parallelism, data distribution, and memory locality behind an abstract interface. The aim of this work is to build a NUMA-aware, scalable graph analytics framework that does not demand significant parallel programming experience.
The realization of such a system faces two key problems:
(i) how to develop a scale-free parallel programming framework that scales efficiently across NUMA domains; (ii) how to efficiently apply graph partitioning in order to create separate and largely independent work items that can be distributed among threads.
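As an illustration of problem (ii), here is a minimal C++ sketch of NUMA-aware work partitioning: a vertex range is split into largely independent chunks and each worker thread is bound to a NUMA node via libnuma. The graph size, the round-robin node assignment, and the process_partition function are illustrative assumptions, not the framework described in the abstract.

// Minimal sketch (assumed scheme, not the abstract's framework).
// Build: g++ -std=c++17 numa_sketch.cpp -lnuma -lpthread
#include <algorithm>
#include <cstdio>
#include <numa.h>
#include <thread>
#include <vector>

// Hypothetical per-partition work item: process vertices [begin, end).
void process_partition(int begin, int end) {
    for (int v = begin; v < end; ++v) {
        // e.g. iterate over the out-edges of vertex v
    }
}

int main() {
    if (numa_available() < 0) { std::fprintf(stderr, "NUMA unavailable\n"); return 1; }

    const int num_vertices = 1000000;  // assumed graph size
    const int num_nodes    = numa_num_configured_nodes();
    const int num_threads  = std::max(1u, std::thread::hardware_concurrency());
    const int chunk        = (num_vertices + num_threads - 1) / num_threads;

    std::vector<std::thread> workers;
    for (int t = 0; t < num_threads; ++t) {
        workers.emplace_back([=] {
            numa_run_on_node(t % num_nodes);  // pin this thread to a NUMA node
            const int begin = t * chunk;
            const int end   = std::min(num_vertices, begin + chunk);
            process_partition(begin, end);    // independent work item
        });
    }
    for (auto& w : workers) w.join();
    return 0;
}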
Abstract:
Graphics Processing Units (GPUs) are becoming popular accelerators in modern High-Performance Computing (HPC) clusters. Installing GPUs on every node of a cluster is inefficient, resulting in high costs and power consumption as well as underutilisation of the accelerators. The research reported in this paper is motivated by the use of a few physical GPUs, providing cluster nodes with on-demand access to remote GPUs for a financial risk application. We hypothesise that sharing GPUs between several nodes, referred to as multi-tenancy, reduces the execution time and the energy consumed by an application. Two data transfer modes between the CPU and the GPUs, namely concurrent and sequential, are explored. The key result from the experiments is that multi-tenancy with a few physical GPUs using sequential data transfers lowers both the execution time and the energy consumed, thereby improving the overall performance of the application.
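The two transfer modes can be sketched against the CUDA runtime API; the chunk count, buffer sizes, and stream layout below are assumptions for illustration, not the configuration used in the paper.

// Sketch of sequential vs. concurrent host-to-device transfers (assumed
// setup, not the paper's). Build with nvcc, or any C++ compiler linked
// against the CUDA runtime (cudart).
#include <cuda_runtime.h>

int main() {
    const int    kChunks    = 4;
    const size_t kChunkSize = 64 << 20;  // 64 MiB per chunk (assumed)

    void *host[kChunks], *dev[kChunks];
    for (int i = 0; i < kChunks; ++i) {
        cudaMallocHost(&host[i], kChunkSize);  // pinned memory, needed for async copies
        cudaMalloc(&dev[i], kChunkSize);
    }

    // Sequential mode: blocking copies, one after another, on the default stream.
    for (int i = 0; i < kChunks; ++i)
        cudaMemcpy(dev[i], host[i], kChunkSize, cudaMemcpyHostToDevice);

    // Concurrent mode: one stream per chunk; copies are issued asynchronously
    // and can overlap with computation and with traffic to other devices.
    cudaStream_t streams[kChunks];
    for (int i = 0; i < kChunks; ++i) {
        cudaStreamCreate(&streams[i]);
        cudaMemcpyAsync(dev[i], host[i], kChunkSize,
                        cudaMemcpyHostToDevice, streams[i]);
    }
    cudaDeviceSynchronize();  // wait for all asynchronous copies to finish

    for (int i = 0; i < kChunks; ++i) {
        cudaStreamDestroy(streams[i]);
        cudaFree(dev[i]);
        cudaFreeHost(host[i]);
    }
    return 0;
}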
Abstract:
Background: Potentially inappropriate prescribing (PIP) is common in older people in primary care, as evidenced by a significant body of quantitative research. However, relatively few qualitative studies have investigated the phenomenon of PIP and its underlying processes from the perspective of general practitioners (GPs). The aim of this paper is to qualitatively explore GP perspectives regarding prescribing and PIP in older primary care patients.
Method: Semi-structured qualitative interviews were conducted with GPs participating in a randomised controlled trial (RCT) of an intervention to decrease PIP in older patients (≥70 years) in Ireland. GP participants (both intervention and control) from the OPTI-SCRIPT cluster RCT were interviewed between January and July 2013 as part of the trial process evaluation. All interviews were conducted by a single interviewer, audio recorded, transcribed verbatim, and analysed thematically.
Results: Seventeen semi-structured interviews were conducted (13 male; 4 female). Three main, inter-related themes emerged (complex prescribing environment, paternalistic doctor-patient relationship, and relevance of the PIP concept). Patient complexity (e.g. polypharmacy, multimorbidity), as well as prescriber complexity (e.g. multiple prescribers, poor communication, restricted autonomy), were identified as factors contributing to a complex prescribing environment where PIP could occur, as was a paternalistic doctor-patient relationship. The concept of PIP was perceived to be of variable usefulness to GPs, and the criteria used to measure it may be at odds with the complex processes of prescribing for this patient population.
Conclusions: Several inter-related factors contributing to the occurrence of PIP were identified, some of which may be amenable to intervention. Improvement strategies focused on better management of polypharmacy and multimorbidity, and on communication across primary and secondary care, could result in substantial reductions in PIP.
Abstract:
There has been increasing interest in the development of new methods that use Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question becomes how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can consider neither multiple performance measures jointly nor multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that can account for multiple competing measures at the same time and compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend both by discovering conditional independences among measures to reduce the number of model parameters, since the number of studied cases is usually small in such comparisons. Data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
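For background on the Bayesian procedure, the multinomial-Dirichlet pair mentioned above is the standard conjugate model, so the posterior is available in closed form; the notation below is generic textbook notation, since the abstract does not give the paper's own parametrisation:

\theta \sim \mathrm{Dir}(\alpha_1, \ldots, \alpha_k), \quad
n \mid \theta \sim \mathrm{Multinomial}(N, \theta)
\;\Longrightarrow\;
\theta \mid n \sim \mathrm{Dir}(\alpha_1 + n_1, \ldots, \alpha_k + n_k),

where k is the number of joint outcomes (e.g. which algorithm wins under which combination of measures), \alpha holds the prior pseudo-counts, and n the counts observed over N comparisons.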