66 results for supercomputing


Relevance:

20.00%

Abstract:

In this paper we describe a lightweight Web portal developed for running computational jobs on an IBM JS21 BladeCenter cluster, ThamesBlue, for inferring and analyzing evolutionary histories. We first discuss the need to leverage HPC as an enabler for molecular phylogenetics research. We then describe how the portal is designed to interface with existing open-source software typical of an HPC resource configuration, and how, by design, the portal is generic enough to be ported to other similarly configured compute clusters and to other applications.
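
The paper's portal code is not reproduced here; as a minimal sketch of the portal-to-scheduler interface the abstract describes, the Python function below generates and submits a batch script for a phylogenetics run. The use of PBS/Torque's qsub, the RAxML invocation, and all paths and parameters are illustrative assumptions, not details taken from the paper.

```python
import subprocess
import tempfile

def submit_phylogenetics_job(alignment_path, cpus=4, queue="batch"):
    """Generate a PBS batch script and submit it with qsub.

    The scheduler, queue name, and RAxML command line are hypothetical
    placeholders for whatever the portal's target cluster actually runs.
    """
    script = f"""#!/bin/bash
#PBS -q {queue}
#PBS -l nodes=1:ppn={cpus}
cd $PBS_O_WORKDIR
raxmlHPC -s {alignment_path} -n run1 -m GTRGAMMA -p 12345
"""
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script)
        script_path = f.name
    # qsub prints the new job's identifier on success
    return subprocess.check_output(["qsub", script_path], text=True).strip()
```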

Relevance:

10.00%

Abstract:

Access All was a performance produced following a three-month mentorship in web-based performance that I was commissioned to conduct for the performance company Igneous. This live, triple-site performance event for three performers in three remote venues was specifically designed for presentation at Access Grid Nodes: conference rooms located around the globe and equipped with high-end, open-source teleconferencing technology that allowed multiple nodes to cross-connect with one another. Whilst each room was set up somewhat differently, all deployed the same basic infrastructure of multiple projectors, cameras and sound, as well as a reconfigurable floorspace. At that time these relatively formal setups imposed a clear series of limitations on software capabilities and basic infrastructure, so there was much interest in understanding how far the technology could be pushed.

Numerous performance experiments were undertaken between three Access Grid nodes, at QUT Brisbane, VISLAB Sydney and the Manchester Supercomputing Centre, England, culminating in the public performance staged simultaneously between the sites, with local audiences at each venue and others online. Access All was devised in collaboration with the interdisciplinary performance company Bonemap, Kelli Dipple (Interarts curator, Tate Modern, London) and Mike Stubbs, British curator and Director of FACT (Liverpool).

This period of research and development was instigated and shaped by a public lecture I had earlier delivered in Sydney for the 'Global Access Grid Network, Super Computing Global Conference', entitled 'Performance Practice across Electronic Networks'. The findings of this work went on to inform numerous networked and performative works produced from 2002 onwards.

Relevance:

10.00%

Abstract:

Water-filled portable road safety barriers are a common fixture at road works; however, their use of water can be problematic, both in terms of the quantity of water used and the transportation of the water to the installation site. This project aims to develop a new design of portable road safety barrier that makes novel use of composite and foam materials to reduce the barrier's reliance on water to control errant vehicles. The project uses finite element (FE) techniques to simulate and evaluate design concepts. FE methods and models that have previously been tested and validated will be used in combination to provide the most accurate numerical simulations available to drive the project forward. LS-DYNA is a highly dynamic, non-linear numerical solver commonly used in the automotive and road safety industries. Several complex materials and physical interactions are to be simulated over the course of the project, including aluminium foams, composite laminates, and the water within the barrier during standardised impact tests. Techniques to be used include FE, smoothed particle hydrodynamics (SPH) and weighted multi-parameter optimisation. A detailed optimisation of several design parameters against specific design goals will be performed with LS-DYNA and LS-OPT, which will require a large number of high-accuracy simulations and advanced visualisation techniques. Supercomputing will play a central role in the project, enabling the numerous medium-element-count simulations needed to determine the barrier's optimal design parameters. It will also allow the development of useful methods of visualising results and the production of highly detailed simulations for end-product validation. Efforts thus far have been directed towards integrating various numerical methods (including FEM, SPH and advanced material models) in an efficient and accurate manner. Various designs of joining mechanism have been developed and are currently being translated into FE models and simulations.
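
As a minimal sketch of the weighted multi-parameter optimisation the abstract mentions, the Python snippet below scalarises two competing design goals into a single weighted objective and minimises it. The surrogate objective functions, design parameters, bounds, and weights are hypothetical stand-ins; in the project itself these evaluations would come from LS-DYNA impact simulations driven by LS-OPT.

```python
import numpy as np
from scipy.optimize import minimize

def peak_deceleration(x):
    # Placeholder surrogate: thicker walls and denser foam soften the impact.
    wall_thickness, foam_density = x
    return 50.0 / (wall_thickness * foam_density)

def barrier_mass(x):
    # Placeholder surrogate: mass grows with wall thickness and foam density.
    wall_thickness, foam_density = x
    return 20.0 * wall_thickness + 0.5 * foam_density

def weighted_objective(x, w=(0.7, 0.3)):
    # Weighted-sum scalarisation: combine competing goals into one cost.
    return w[0] * peak_deceleration(x) + w[1] * barrier_mass(x)

result = minimize(
    weighted_objective,
    x0=np.array([2.0, 100.0]),            # initial thickness (mm), density (kg/m^3)
    bounds=[(0.5, 10.0), (10.0, 500.0)],  # feasible design ranges
    method="L-BFGS-B",
)
print("optimal design parameters:", result.x)
```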

Relevance:

10.00%

Abstract:

In this paper you will be introduced to a number of principles that can be used to inform good teaching practice and rigorous curriculum design. The principles relate to:

* applying a common sequence of events for how learners learn;
* accommodating different learning styles;
* adopting a purposeful approach to teaching and learning;
* using assessment as a central driving force in the curriculum and as an organising structure that lends coherence to the teaching and learning approach; and
* the increasing emphasis being placed on developing generic graduate competencies over and above discipline content knowledge.

The principles are particularly significant in relation to adult learning. The paper uses three specific applications as illustrations to help you learn how these principles can be applied. The illustrations are taken from a second-year subject in supercomputing that uses scientific case studies. The subject has been developed (with support from Silicon Graphics Inc. and Intel) to be taught entirely via the Internet.

Relevance:

10.00%

Abstract:

Cells are the fundamental building blocks of plant-based food materials, and many of the structural changes arising during food processing can be derived as a function of the deformation of the cellular structure. In food dehydration, bulk-level changes in porosity, density and shrinkage can be better explained by cellular-level deformations initiated by moisture removal from the cellular fluid. A novel approach is used in this research: the cell fluid is modelled with Smoothed Particle Hydrodynamics (SPH) and the cell walls with the Discrete Element Method (DEM), both of which are known to be robust in treating complex fluid and solid mechanics. High Performance Computing (HPC) is used for the computations because of its computational advantages. Compared with state-of-the-art drying models, the current model is found to be robust in replicating the microscale drying mechanics of plant-based food materials.
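
As a minimal sketch of the SPH side of such a model, the Python snippet below computes per-particle densities with the standard 2-D cubic-spline kernel. Particle positions, masses, and the smoothing length are illustrative, and the paper's actual SPH-DEM coupling for the cell wall is not shown.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 2-D cubic spline (M4) SPH kernel."""
    sigma = 10.0 / (7.0 * np.pi * h**2)
    q = r / h
    w = np.zeros_like(q)
    inner = q <= 1.0
    outer = (q > 1.0) & (q <= 2.0)
    w[inner] = 1.0 - 1.5 * q[inner]**2 + 0.75 * q[inner]**3
    w[outer] = 0.25 * (2.0 - q[outer])**3
    return sigma * w

def sph_density(positions, masses, h):
    """Density at each particle: rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(dists, h)).sum(axis=1)

# Illustrative cell-fluid particles (units arbitrary)
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(100, 2))
rho = sph_density(pos, masses=np.full(100, 0.01), h=0.1)
```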

Relevance:

10.00%

Abstract:

Loads that miss in the L1 or L2 caches and wait for their data at the head of the reorder buffer (ROB) cause significant slowdown in the form of commit stalls. We identify that most of these commit stalls are caused by a small set of loads, referred to as LIMCOS (Loads Incurring Majority of COmmit Stalls). We propose simple history-based classifiers that track the commit stalls suffered by loads in order to identify this small set. We study an application of these classifiers to prefetching: the classifiers are used to train the prefetcher to focus on the misses suffered by LIMCOS. This approach, referred to as focused prefetching, yields a 9.8% gain in IPC over a naive GHB-based delta-correlation prefetcher, along with a 20.3% reduction in memory traffic, for a set of 17 memory-intensive SPEC2000 benchmarks. Another important impact of focused prefetching is a 61% improvement in prefetch accuracy. We demonstrate that the proposed classification criterion performs better than existing criteria such as criticality and delinquent loads. We also show that the criterion of focusing on commit stalls is robust across cache levels and can be applied to any prefetcher without modifications to the prefetcher.
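
As a minimal sketch of a history-based classifier of the kind described, the Python class below tracks commit stalls per static load with saturating counters and flags heavy stallers as LIMCOS; a prefetcher would then be trained only on misses from flagged loads. Table size, counter width, and threshold are illustrative assumptions, not the paper's parameters.

```python
class CommitStallClassifier:
    """Flag loads that incur the majority of commit stalls (LIMCOS)."""

    def __init__(self, table_size=1024, max_count=15, threshold=8):
        self.table = [0] * table_size   # saturating counters, indexed by load PC
        self.table_size = table_size
        self.max_count = max_count
        self.threshold = threshold

    def _index(self, pc):
        return pc % self.table_size     # simple PC hash (illustrative)

    def record_commit(self, pc, stalled):
        """Update on each committed load; `stalled` means it waited at the ROB head."""
        i = self._index(pc)
        if stalled:
            self.table[i] = min(self.table[i] + 1, self.max_count)
        elif self.table[i] > 0:
            self.table[i] -= 1

    def is_limcos(self, pc):
        """Loads flagged here are the ones focused prefetching trains on."""
        return self.table[self._index(pc)] >= self.threshold
```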

Relevance:

10.00%

Abstract:

Under the project 'Seasonal Prediction of the Indian Monsoon' (SPIM), the prediction of Indian summer monsoon rainfall by five atmospheric general circulation models (AGCMs) during 1985-2004 was assessed. The project was a collaborative effort of the coordinators and scientists from the different modelling groups across the country. All the runs were made at the Centre for Development of Advanced Computing (CDAC) in Bangalore on the PARAM Padma supercomputing system. Two sets of simulations were made. In the first set, the AGCMs were forced by the observed sea surface temperature (SST) for May-September of 1985-2004. In the second set, runs were made for 1987, 1988, 1994, 1997 and 2002, forced by SST obtained by assuming that the April anomalies persist through May-September. The results of the first set show, as expected from earlier studies, that none of the models was able to simulate the correct sign of the Indian summer monsoon rainfall anomaly in all years. Among the five models, however, one simulated the correct sign in the largest number of years, and a second showed the most skill in simulating the extremes (i.e. drought or excess-rainfall years). The first set of runs also showed a common bias that could arise from excessive sensitivity of the models to the El Niño-Southern Oscillation (ENSO), from an inability of the models to simulate the link between Indian monsoon rainfall and the Equatorial Indian Ocean Oscillation (EQUINOO), or from both. Analysis of the second set of runs showed that, with weaker ENSO forcing, some models could simulate the link with EQUINOO, suggesting that the errors in these models' monsoon simulations with observed SST can be attributed to unrealistically high sensitivity to ENSO.
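
As a minimal sketch of the kind of skill measure discussed (how often a model reproduces the correct sign of the rainfall anomaly), the Python snippet below compares simulated and observed anomalies over hypothetical data; it is not the project's actual evaluation code.

```python
import numpy as np

def sign_skill(simulated, observed):
    """Fraction of years in which the simulated rainfall anomaly has the
    same sign as the observed anomaly (illustrative metric)."""
    sim_anom = simulated - simulated.mean()
    obs_anom = observed - observed.mean()
    return float(np.mean(np.sign(sim_anom) == np.sign(obs_anom)))

# Hypothetical all-India summer rainfall (mm) for 20 years, model vs. observed
rng = np.random.default_rng(1)
obs = 850.0 + 80.0 * rng.standard_normal(20)
sim = obs + 60.0 * rng.standard_normal(20)   # a noisy model
print(f"correct-sign fraction: {sign_skill(sim, obs):.2f}")
```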

Relevance:

10.00%

Abstract:

In this paper we present a cache coherence protocol for multistage interconnection network (MIN)-based multiprocessors with two distinct private caches: private-block caches (PCache) containing blocks private to a process, and shared-block caches (SCache) containing data accessible by all processes. The architecture is extended by a coherence control bus connecting all shared-block cache controllers. Timing problems due to variable transit delays through the MIN are handled by introducing transient states in the proposed cache coherence protocol. The impact of the coherence protocol on system performance is evaluated through a three-phase performance study. Assuming homogeneity of all nodes, a single-node queuing model (phase 3) is developed to analyze system performance. This model is solved for processor and coherence-bus utilizations using the mean value analysis (MVA) technique, with shared-block steady-state probabilities (phase 1) and communication delays (phase 2) as input parameters. The performance of our system is compared to that of a system with an equivalent-sized unified cache and to a multiprocessor implementing a directory-based coherence protocol. System performance measures are verified through simulation.
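
As a minimal sketch of the mean value analysis technique named in the abstract, the Python function below implements textbook exact MVA for a single-class closed queueing network. The station demands are illustrative; the paper's three-phase model, which feeds shared-block steady-state probabilities and communication delays into the MVA step, is more elaborate.

```python
def mean_value_analysis(service_demands, num_customers, think_time=0.0):
    """Exact single-class MVA for a closed network of queueing stations.

    service_demands: per-station demand D_k (visit count x service time).
    Returns system throughput and per-station utilizations.
    """
    queue = [0.0] * len(service_demands)   # Q_k(0) = 0
    throughput = 0.0
    for n in range(1, num_customers + 1):
        # Arrival theorem: a customer arriving at station k sees Q_k(n-1)
        resid = [d * (1.0 + q) for d, q in zip(service_demands, queue)]
        throughput = n / (think_time + sum(resid))
        queue = [throughput * r for r in resid]   # Little's law per station
    utilizations = [throughput * d for d in service_demands]
    return throughput, utilizations

# Illustrative stations: processor, coherence bus, MIN (demands in ms)
X, U = mean_value_analysis([1.0, 0.4, 0.6], num_customers=8)
print(f"throughput = {X:.3f} jobs/ms, utilizations = {[round(u, 2) for u in U]}")
```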