809 results for Parallel Work Experience, Practise, Architecture


Relevance: 30.00%

Abstract:

Although it plays a key role in the theory of stratified turbulence, the concept of available potential energy (APE) dissipation has until now remained a rather mysterious quantity, owing to the lack of rigorous results about its irreversible character or the type of energy conversion it represents. Here, we show by using rigorous energetics considerations rooted in the analysis of the Navier-Stokes equations for a fully compressible fluid with a nonlinear equation of state that APE dissipation is an irreversible energy conversion that dissipates kinetic energy into internal energy, exactly as viscous dissipation does. These results are established by showing that APE dissipation contributes to the irreversible production of entropy, and that it is part of the work of expansion/contraction. Our results provide a new interpretation of the entropy budget, which leads to a new exact definition of turbulent effective diffusivity that generalises the Osborn-Cox model, as well as a rigorous decomposition of the work of expansion/contraction into reversible and irreversible components. In the context of turbulent mixing associated with parallel shear flow instability, our results suggest that there is no irreversible transfer of horizontal momentum into vertical momentum, as seems to be required when compressible effects are neglected, with potential consequences for the parameterisation of momentum dissipation in the coarse-grained Navier-Stokes equations.
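
For reference, the Osborn-Cox model that the new exact definition of turbulent effective diffusivity generalises is commonly stated in terms of the dissipation of scalar variance; one standard form, written here with temperature T, molecular diffusivity kappa and a mean vertical gradient (the symbols are the textbook convention, not notation taken from the paper), is

\[
K_{\mathrm{eff}} \;\approx\; \kappa\,\frac{\langle \lvert \nabla T' \rvert^{2} \rangle}{\lvert \partial \overline{T}/\partial z \rvert^{2}},
\]

i.e. the effective diffusivity is the molecular value amplified by the ratio of turbulent to mean temperature-gradient variance (the Cox number).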

Relevance: 30.00%

Abstract:

We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations. The modified algorithm runs more than 50 times faster on the CELL’s Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60% of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
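
A minimal sketch of the scheduling and packing idea described above, in Python rather than the C/SIMD implementation (so it only illustrates the structure, not the speed-up); names such as compute_columns and PACK are illustrative, not taken from the FAMOUS code. Worker threads pull tasks from a shared queue, and each task is a pack of four air columns updated together with vectorised operations, mirroring the four-wide SIMD packing.

import queue
import threading
import numpy as np

# Illustrative sizes only; the real code works on the FAMOUS atmosphere grid.
N_COLUMNS = 128   # number of air columns
N_LEVELS = 40     # vertical levels per column
PACK = 4          # columns packed together, mirroring the 4-wide SIMD packing

columns = np.random.rand(N_COLUMNS, N_LEVELS)   # stand-in for the atmospheric state
result = np.empty_like(columns)

tasks = queue.Queue()
for start in range(0, N_COLUMNS, PACK):
    tasks.put(start)                            # one task = one pack of columns

def compute_columns(pack):
    # placeholder for the per-column radiation computation, applied to
    # four columns at once (vectorised over the first axis)
    return np.cumsum(pack, axis=1) * 0.5

def worker():
    # each thread repeatedly pulls the next pack of columns off the queue
    while True:
        try:
            start = tasks.get_nowait()
        except queue.Empty:
            return
        result[start:start + PACK] = compute_columns(columns[start:start + PACK])
        tasks.task_done()

pool = [threading.Thread(target=worker) for _ in range(4)]   # the thread pool
for t in pool:
    t.start()
for t in pool:
    t.join()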

Relevance: 30.00%

Abstract:

Monitoring resources is an important aspect of the overall efficient usage and control of any distributed system. In this paper, we describe a generic open-source resource monitoring architecture that has been specifically designed for the Grid. The paper consists of three main sections. In the first section, we outline our motivation and briefly detail similar work in the area. In the second section, we describe the general monitoring architecture and its components. In the final section of the paper, we summarise the experiences so far and outline our future work.

Relevance: 30.00%

Abstract:

There is increasing interest in integrating Java-based systems, and in particular Jini systems, with the emerging Grid infrastructures. In this paper we explore various ways of integrating the key components of each architecture, namely their directory and information management services. In the first part of the paper we sketch out the Jini and Grid architectures and their services. We then review the components and services that Jini provides and compare these with those of the Grid. In the second part of the paper we critically explore four ways in which Jini and the Grid could interact; in particular, we look at possible scenarios that can provide a seamless interface to a Jini environment for Grid clients, and at how Jini services can be used from a Grid environment. In the final part of the paper we summarise our findings and report on future work being undertaken to integrate Jini and the Grid.

Relevance: 30.00%

Abstract:

The development of versatile bioactive surfaces able to emulate in vivo conditions is of enormous importance to the future of cell and tissue therapy. Tuning cell behaviour on two-dimensional surfaces so that the cells perform as if they were in a natural three-dimensional tissue represents a significant challenge, but one that must be met if the early promise of cell and tissue therapy is to be fully realised. Due to the inherent complexities involved in the manufacture of biomimetic three-dimensional substrates, the scaling up of engineered tissue-based therapies may be simpler if based upon proven two-dimensional culture systems. In this work, we developed new coating materials composed of the self-assembling peptide amphiphiles (PAs) C16G3RGD (RGD) and C16G3RGDS (RGDS), shown to control cell adhesion and tissue architecture while avoiding the use of serum. When mixed with the C16ETTES diluent PA at a 13:87 (mol mol⁻¹) ratio at 1.25 × 10⁻³ M, the bioactive PAs were shown to support optimal adhesion, maximal proliferation, and prolonged viability of human corneal stromal fibroblasts (hCSFs), while improving the cell phenotype. These PAs also provided stable adhesive coatings on highly hydrophobic surfaces composed of striated polytetrafluoroethylene (PTFE), significantly enhancing the proliferation of aligned cells and increasing the complexity of the produced tissue. The thickness and structure of this highly organised tissue were similar to those observed in vivo, comprising aligned newly-deposited extracellular matrix. As such, the developed coatings can constitute a versatile biomaterial for applications in cell biology, tissue engineering, and regenerative medicine requiring serum-free conditions.

Relevance: 30.00%

Abstract:

Garfield produces a critique of neo-minimalist art practice by demonstrating how the artist Melanie Jackson’s Some things you are not allowed to send around the world (2003 and 2006) and the experimental film-maker Vivienne Dick’s Liberty’s booty (1980) – neither of which can be said to be about feeling ‘at home’ in the world, be it as a resident or as a nomad – examine global humanity through multi-positionality, excess and contingency, and thereby begin to articulate a new cosmopolitan relationship with the local – or, rather, with many different localities – in one and the same maximalist sweep of the work. ‘Maximalism’ in Garfield’s coinage signifies an excessive overloading (through editing, collage, and the sheer density of the range of the material) that enables the viewer to insert themselves into the narrative of the work. In the art of both Jackson and Dick, Garfield detects a refusal to know or to judge the world; instead, there is an attempt to incorporate the complexities of its full range into the singular vision of the work, challenging the viewer to identify what is at stake.

Relevance: 30.00%

Abstract:

Purpose – This paper extends the increasing debate about the role of international experience gained through mechanisms other than standard expatriation packages, in particular through the use of short-term assignments. It explores the different forms of short-term assignments (project work, commuter assignments, virtual international working and development assignments) and the different sets of positive and negative implications these can have for the company and the individuals concerned. The integration-differentiation debate is reflected here as elsewhere in IHRM, with the company moving towards greater centralization and control of its use of these assignments.
Design/methodology/approach – Since the research is exploratory, we adopted a qualitative approach to gain a more in-depth understanding of the realities the corporations and the assignees are facing. The study was implemented as a single case study in which the data were collected by interviewing (n=20) line managers, human resource management (HRM) staff and the assignees themselves. In addition, corporate documentation and other materials were reviewed.
Findings – The case study provides evidence about the characteristics of short-term assignments as well as on the management of such assignments. The paper identifies various benefits and challenges involved in the use of short-term assignments from the perspectives of both the company and the assignees. Furthermore, the findings support the view that the recent increase in the popularity of short-term assignments has not been matched by the development of HRM policies for such assignments.
Research limitations/implications – As a single case study, limitations in the generalizability of the findings should be kept in mind. More large-scale research evidence is needed on different forms of international assignments beyond standard expatriation in order to fully capture the realities faced by international HRM specialists.
Practical implications – The paper identifies many challenges, but also benefits, of using short-term assignments, and reports in-depth findings on the HR development needs that organizations face when expanding the use of such assignments.
Social implications – The paper identifies many challenges but also benefits of using short-term assignments. The paper reports in-depth findings on HR development needs that organizations face when expanding the use of such assignments.
Originality/value – Empirical research on short-term assignments is still very limited. The paper therefore provides much-needed in-depth evidence on why such assignments are used, what challenges are involved in their use and what kinds of HR development needs they create.

Relevance: 30.00%

Abstract:

Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
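
A hedged sketch of the dynamic-group idea, written with mpi4py for illustration (run under mpirun); it shows how per-iteration sub-communicators can restrict the k-means reduction to a group of processes rather than performing a global all-reduce, but the grouping rule here is a simple stand-in and the sketch does not reproduce the exact-solution protocol analysed in the work.

# Illustrative only; run e.g. with: mpirun -np 4 python dynamic_group_kmeans.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

rng = np.random.default_rng(rank)
points = rng.random((1000, 2)) + rank            # non-uniform: each rank owns its own region
centroids = np.array([[0.5, 0.5], [1.5, 1.5], [2.5, 2.5], [3.5, 3.5]])

for _ in range(10):
    # assign local points to their nearest centroid
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)

    # form a dynamic group: here the "colour" is simply the centroid this rank's
    # data interacts with most, a stand-in for the real grouping rule
    colour = int(np.bincount(labels, minlength=len(centroids)).argmax())
    group = comm.Split(color=colour, key=rank)

    # partial sums and counts are reduced only within the group,
    # instead of with a global all-reduce over every process
    sums = np.zeros_like(centroids)
    counts = np.zeros(len(centroids))
    for k in range(len(centroids)):
        mask = labels == k
        sums[k] = points[mask].sum(axis=0)
        counts[k] = mask.sum()
    sums = group.allreduce(sums, op=MPI.SUM)
    counts = group.allreduce(counts, op=MPI.SUM)
    group.Free()

    nonzero = counts > 0
    centroids[nonzero] = sums[nonzero] / counts[nonzero, None]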

Relevance: 30.00%

Abstract:

We investigate electron acceleration due to shear Alfvén waves in a collisionless plasma for plasma parameters typical of 4–5 RE radial distance from the Earth along auroral field lines. Recent observational work has motivated this study, which explores the plasma regime where the thermal velocity of the electrons is similar to the Alfvén speed of the plasma, encouraging Landau resonance for electrons in the wave fields. We use a self-consistent kinetic simulation model to follow the evolution of the electrons as they interact with a short-duration wave pulse, which allows us to determine the parallel electric field of the shear Alfvén wave due to both electron inertia and electron pressure effects. The simulation demonstrates that electrons can be accelerated to keV energies by a modest-amplitude wave with a sub-second period. We compare the parallel electric field obtained from the simulation with those provided by fluid approximations.
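
For context, a standard fluid estimate in the cold-electron (inertia-dominated) limit relates the parallel and perpendicular electric fields of a shear Alfvén wave through the electron inertial length; this is the textbook expression that a simulated parallel field could be set against, not a result quoted from the paper:

\[
\frac{E_{\parallel}}{E_{\perp}} \;=\; \frac{k_{\parallel}\, k_{\perp}\, \lambda_{e}^{2}}{1 + k_{\perp}^{2}\lambda_{e}^{2}},
\qquad \lambda_{e} = \frac{c}{\omega_{pe}}.
\]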

Relevance: 30.00%

Abstract:

Mergers of Higher Education Institutions (HEIs) are organisational processes requiring a tremendous amount of resources in terms of time, work and money. A number of mergers have taken place in previous years and more are to come. Several studies on mergers have been conducted, revealing crucial factors that affect their success. Based on a review of these studies, the factors are: the initiator of the merger, the reason for the merger, the geographical distance between the merging institutions, organisational culture, the extent of overlap in the course portfolios, and Quality Assurance Systems (QASs). Usually these kinds of factors are not considered in mergers; instead, the focus is on financial matters. In this paper, a framework (HMEF) for evaluating the merging of HEIs is introduced. HMEF is based on Enterprise Architecture (EA) and focuses on the factors found to affect the success of mergers. By using HMEF, HEIs can concentrate on the matters that are crucial for merging.

Relevance: 30.00%

Abstract:

Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of the k-means algorithm requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation of the algorithm in which the requirement of global communication can be relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution, which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error, which allows a further reduction of the communication costs.
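
A minimal, self-contained sketch (Python, with assumed names; not the paper's code) of how a multi-dimensional binary search tree can induce the kind of non-uniform, spatially localised data distribution this formulation exploits: recursive median splits along alternating axes hand each process a contiguous block of the data.

import numpy as np

def kd_partition(points, n_parts, depth=0):
    # recursively split with median cuts along alternating axes, producing
    # spatially contiguous blocks of the data, one block per process
    if n_parts == 1:
        return [points]
    axis = depth % points.shape[1]
    points = points[np.argsort(points[:, axis])]
    left = n_parts // 2
    cut = int(len(points) * left / n_parts)
    return (kd_partition(points[:cut], left, depth + 1) +
            kd_partition(points[cut:], n_parts - left, depth + 1))

rng = np.random.default_rng(0)
data = rng.random((10000, 2))
blocks = kd_partition(data, n_parts=8)
for i, block in enumerate(blocks):
    print(f"process {i}: {len(block)} points, "
          f"x in [{block[:, 0].min():.2f}, {block[:, 0].max():.2f}]")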

Relevance: 30.00%

Abstract:

Advances in hardware technologies make it possible to capture and process data in real time, and the resulting high-throughput data streams require novel data mining approaches. The research area of Data Stream Mining (DSM) is developing data mining algorithms that allow us to analyse these continuous streams of data in real time. The creation and real-time adaptation of classification models from data streams is one of the most challenging DSM tasks. Current classifiers for streaming data address this problem by using incremental learning algorithms. However, even though these algorithms are fast, they are challenged by high-velocity data streams, where data instances arrive at a fast rate. This is problematic if the application requires little or no delay between changes in the patterns of the stream and the absorption of these patterns by the classifier. Problems of scalability to Big Data of traditional data mining algorithms for static (non-streaming) datasets have been addressed through the development of parallel classifiers. However, there is very little work on the parallelisation of data stream classification techniques. In this paper we investigate K-Nearest Neighbours (KNN) as the basis for a real-time adaptive and parallel methodology for scalable data stream classification tasks.
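
A hedged sketch of the underlying idea rather than the paper's methodology: KNN is attractive for streams because the "model" is just a sliding window of recent labelled instances, and the distance computations for a batch of incoming instances are embarrassingly parallel (vectorised here with numpy; a parallel implementation would distribute the window or the batch across workers). All names are illustrative.

import numpy as np
from collections import deque

K = 5
WINDOW = 1000                      # sliding window of recent labelled instances
window_x = deque(maxlen=WINDOW)
window_y = deque(maxlen=WINDOW)

def classify_batch(batch):
    # distances from every incoming instance to every instance in the window;
    # this is the embarrassingly parallel part
    X = np.asarray(window_x)
    y = np.asarray(window_y)
    dists = np.linalg.norm(batch[:, None, :] - X[None, :, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :K]
    # majority vote among the K nearest neighbours
    return np.array([np.bincount(y[idx]).argmax() for idx in nearest])

rng = np.random.default_rng(0)
for _ in range(200):               # seed the window with labelled instances
    x = rng.random(2)
    window_x.append(x)
    window_y.append(int(x[0] > 0.5))

batch = rng.random((10, 2))        # a batch of newly arrived instances
print(classify_batch(batch))

# real-time adaptation: new labelled instances enter the window, old ones drop out
for x, label in zip(batch, classify_batch(batch)):
    window_x.append(x)
    window_y.append(int(label))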

Relevance: 30.00%

Abstract:

One and One and One is a collaborative project organized by Tim Renshaw with Outside Architecture. The work in the exhibition explored the incomplete, addressing it as an active process that opens up architectural and spatial structure to new forms of experience.

Relevance: 30.00%

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
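
A minimal illustration of the modelling approach described above (hypothetical benchmark numbers and names, not measurements from the paper): per-kernel benchmark timings are interpolated over the local problem size, and the model composes the compute and halo-exchange components for a proposed decomposition.

import numpy as np

# hypothetical benchmark timings (seconds per timestep) for one process,
# measured at several local subdomain sizes; not values from the paper
sizes     = np.array([64, 128, 256, 512])                      # local edge length
compute_s = np.array([0.8e-3, 3.4e-3, 14.0e-3, 57.0e-3])       # loop-based array updates
halo_s    = np.array([0.05e-3, 0.09e-3, 0.17e-3, 0.33e-3])     # halo exchange

def predict(global_n, px, py):
    # crude composition of the two component models for a px-by-py decomposition:
    # interpolate each benchmark at the local problem size and add the parts
    local_n = global_n / max(px, py)
    return np.interp(local_n, sizes, compute_s) + np.interp(local_n, sizes, halo_s)

for px, py in [(4, 4), (8, 2), (16, 1)]:
    print(f"{px}x{py} decomposition: predicted {predict(1024, px, py) * 1e3:.2f} ms/step")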