8 results for virtualised GPU

in Deakin Research Online - Australia


Relevance:

20.00%

Publisher:

Abstract:

This research aims at improving the accessibility of cluster computer systems by introducing autonomic self-management facilities incorporating: 1) resource discovery and self-awareness, 2) virtualised resource pools, and 3) automated cluster membership and self-configuration. These facilities simplify the user's programming workload and improve system usability.

Relevance:

10.00%

Publisher:

Abstract:

We present an approach to computing high-breakdown regression estimators in parallel on graphics processing units (GPUs). We show that sorting the residuals is not necessary and can be substituted by calculating the median. We present and compare various methods to calculate the median and order statistics on GPUs. We introduce an alternative method based on the optimization of a convex function, and show its numerical superiority when calculating the order statistics of very large arrays on GPUs.
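The key observation that the median can be found by minimizing a convex function rather than by sorting can be illustrated with a small sketch. The median of a data set minimizes f(m) = Σ|xᵢ - m|, which is convex and piecewise linear. This is an illustrative CPU sketch only, not the paper's GPU method; the function name and the choice of ternary search for the one-dimensional minimization are assumptions.

```python
import numpy as np

def median_via_convex_opt(x, iters=100):
    # The median minimizes the convex function f(m) = sum(|x_i - m|),
    # so it can be located without sorting the array.
    f = lambda m: np.abs(x - m).sum()
    lo, hi = float(np.min(x)), float(np.max(x))
    # Ternary search: each step discards a third of the interval while
    # keeping the minimizer inside it (valid because f is convex).
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

For an odd-length array the minimizer is unique and the search converges to the sample median; each evaluation of f is a data-parallel reduction, which is the property that makes this formulation attractive on a GPU.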

Relevance:

10.00%

Publisher:

Abstract:

One of the most intriguing elements of the study of celebrity is the complex relationship between the renowned individuals who hold celebrity status and the populace. In past work, I have identified how celebrities “embody” audiences, producing a kind of audience-subjectivity that is both collective and individual. If our media systems are producing slightly different collective configurations and quite different ways in which individuals exhibit and share, this relationship between the individual and the collective, so foregrounded by celebrity culture, may be differently constituted. This presentation will look at how the celebration of the self is now played out across culture in variations of the social and para-social structures of celebrity culture, in professional settings and in what would be seen as forms of online leisure and recreation. In one sense, this is the spectre of celebrity that has now been virtualised by individuals and their forms of public display. In another sense, we now have a very diverse range of public personalities, which demands a more extensive analysis of the constitution of public persona, where the embodiment of collectives and the articulation of identity forms for different purposes and objectives produce, via a series of micro-publics, a substantially different public sphere.

Relevance:

10.00%

Publisher:

Abstract:

We found an interesting relation between convex optimization and the sorting problem. We present a parallel algorithm that computes multiple order statistics of the data by minimizing a number of related convex functions. The computed order statistics serve as splitters that group the data into buckets suitable for parallel bitonic sorting. This led us to a parallel bucket sort algorithm, which we implemented for the many-core architecture of graphics processing units (GPUs). The proposed sorting method is competitive with the state-of-the-art GPU sorting algorithms and is superior to most of them for long sorting keys.
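The splitter-based bucket sort described above can be sketched as follows. This is a CPU sketch under stated substitutions: `np.partition` stands in for the paper's convex-optimisation computation of the order statistics, and an ordinary per-bucket sort stands in for the GPU's parallel bitonic sort; the function name is an assumption.

```python
import numpy as np

def bucket_sort_with_splitters(x, n_buckets=4):
    # Evenly spaced order statistics act as splitters. The paper obtains
    # these by convex optimisation on the GPU; np.partition is a stand-in.
    ranks = [len(x) * k // n_buckets for k in range(1, n_buckets)]
    splitters = np.partition(x, ranks)[ranks]
    # Assign each element to its bucket; because the splitters are exact
    # order statistics, the buckets are balanced and order-disjoint.
    bucket = np.searchsorted(splitters, x, side='right')
    # Sort each bucket independently (this is the step that parallelises
    # across GPU thread blocks) and concatenate in bucket order.
    return np.concatenate([np.sort(x[bucket == b]) for b in range(n_buckets)])
```

Because every element of bucket b is no larger than any element of bucket b+1, the concatenation of the sorted buckets is globally sorted with no merge step.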

Relevance:

10.00%

Publisher:

Abstract:

Computational efficiency, and hence the scale, of agent-based swarm simulations is bound by the nearest-neighbour computation for each agent. This article proposes the use of GPU texture memory to implement lookup tables for a spatial-partitioning-based k-Nearest Neighbours algorithm. These improvements allow simulation of swarms of 2^20 agents at higher rates than the current best alternative algorithms. This approach is incorporated into an existing framework for simulating steering behaviours, allowing for a complete implementation of massive agent swarm simulations, with per-agent behaviour preferences, on a Graphics Processing Unit. These simulations have enabled an investigation of the emergent dynamics that occur when massive swarms interact with a choke point in their environment. Various modes of sustained dynamics with temporal and spatial coherence are identified when a critical mass of agents is simulated, and some elementary properties are presented. The algorithms presented in this article enable researchers and content designers in games and movies to implement truly massive agent swarms in real time, and thus provide a basis for further identification and analysis of the emergent dynamics in these swarms. This will improve not only the scale of swarms used in commercial games and movies but also the reliability of swarm behaviour with respect to content design goals.
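The spatial-partitioning idea behind the k-NN lookup tables can be sketched on the CPU: hash each agent into a uniform grid cell, then answer a neighbour query by scanning only the query's cell and its eight neighbours instead of all agents. This is an illustrative 2-D sketch, not the paper's texture-memory implementation; the function names, the cell size, and the 3×3 neighbourhood assumption (valid only when the k nearest agents lie within adjacent cells) are mine.

```python
import numpy as np
from collections import defaultdict

def build_grid(points, cell):
    # Spatial hash: map each 2-D point to its grid cell. The GPU version
    # stores the equivalent lookup table in texture memory.
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[(int(p[0] // cell), int(p[1] // cell))].append(i)
    return grid

def k_nearest(points, grid, cell, q, k):
    # Gather candidates from the query's cell and its 8 neighbours,
    # then keep the k closest; this avoids an O(n) scan over all agents.
    cx, cy = int(q[0] // cell), int(q[1] // cell)
    candidates = [i for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  for i in grid[(cx + dx, cy + dy)]]
    candidates.sort(key=lambda i: np.sum((points[i] - q) ** 2))
    return candidates[:k]
```

The cell size is typically chosen close to the agents' perception radius, so the 3×3 neighbourhood covers every agent that could influence the query agent's steering.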

Relevance:

10.00%

Publisher:

Abstract:

The intersection of network function virtualisation (NFV) technologies and big data has the potential to revolutionise today's telecommunication networks from deployment to operations, resulting in significant reductions in capital expenditure (CAPEX) and operational expenditure, as well as cloud-vendor and additional revenue growth for the operators. One contribution of this article is a comparison of the requirements for big data and network virtualisation, together with the formulation of key performance indicators for distributed big data NFVs on the operator's infrastructure. Big data and virtualisation are highly interdependent; their intersections and dependencies are analysed, and the potential optimisation gains resulting from open interfaces between big data and carrier-network NFV functional blocks in an adaptive environment are then discussed. Another contribution of this article is a comprehensive discussion of open-interface recommendations, which enable globally collaborative and scalable virtualised big data applications.

Relevance:

10.00%

Publisher:

Abstract:

We present a parallel algorithm for calculating determinants of matrices in arbitrary-precision arithmetic on computer clusters. The algorithm limits data movement between the nodes and computes not only the determinant but also all the minors corresponding to a particular row or column at little extra cost, as well as the determinants and minors of all the leading principal submatrices at no extra cost. We implemented the algorithm in arbitrary-precision arithmetic, suitable for very ill-conditioned matrices, and empirically estimated the loss of precision. In our scenario the cost of computation exceeds that of data movement. The algorithm was applied to studies of Riemann’s zeta function.
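Why the leading principal minors come "at no extra cost" can be seen from Gaussian elimination: the determinant of the k-th leading principal submatrix equals the product of the first k pivots, so a single elimination pass yields all of them. The sketch below uses exact rational arithmetic (`fractions.Fraction`) as a simple stand-in for the paper's arbitrary-precision arithmetic; it is a serial illustration of the identity, not the paper's cluster algorithm, and it assumes no zero pivot is encountered.

```python
from fractions import Fraction

def leading_minors(A):
    # One pass of Gaussian elimination (no pivoting, for clarity).
    # After eliminating column k, the pivot M[k][k] is the ratio of
    # consecutive leading principal minors, so their running product
    # reproduces every leading minor, ending with det(A).
    n = len(A)
    M = [[Fraction(v) for v in row] for row in A]
    minors, det = [], Fraction(1)
    for k in range(n):
        pivot = M[k][k]  # assumed nonzero (no pivoting in this sketch)
        det *= pivot
        minors.append(det)
        for i in range(k + 1, n):
            f = M[i][k] / pivot
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return minors
```

Exact rational arithmetic sidesteps the precision-loss question entirely here; the paper instead works in arbitrary-precision floating point and estimates the loss of precision empirically, which matters for the very ill-conditioned matrices it targets.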