193 results for HPC


Relevance:

20.00%

Publisher:

Abstract:

Since its introduction in 1993, the Message Passing Interface (MPI) has become a de facto standard for writing High Performance Computing (HPC) applications on clusters and Massively Parallel Processors (MPPs). The recent emergence of multi-core processor systems presents a new challenge for established parallel programming paradigms, including those based on MPI. This paper presents a new Java messaging system called MPJ Express. Using this system, we exploit multiple levels of parallelism (messaging and threading) to improve application performance on multi-core processors. We refer to our approach as nested parallelism. This MPI-like Java library can support nested parallelism by using Java or Java OpenMP (JOMP) threads within an MPJ Express process. The practicality of this approach is assessed by porting Gadget-2, a massively parallel structure-formation code from cosmology, to Java. We introduce nested parallelism in the Java version of the simulation code and report good speed-ups. To the best of our knowledge, this is the first time this kind of hybrid parallelism has been demonstrated in a high-performance Java application. (C) 2009 Elsevier Inc. All rights reserved.
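The nested-parallelism idea above (message passing between processes, shared-memory threads inside each process) can be sketched in a few lines. The example below is a minimal illustration in Python with mpi4py and a thread pool rather than the paper's MPJ Express and JOMP; the array size, thread count and reduction are placeholders chosen only for the sketch.

```python
# Minimal sketch of nested parallelism: MPI across processes, threads within
# each process. Uses mpi4py and the standard library; sizes are illustrative.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Outer level (messaging): split a global index range across MPI processes.
n_global = 1_000_000
local = np.arange(rank, n_global, size, dtype=np.float64)

def partial_sum(chunk):
    # Inner level (threading): each thread reduces one chunk of local data.
    return float(np.sum(np.sqrt(chunk)))

n_threads = 4
chunks = np.array_split(local, n_threads)
with ThreadPoolExecutor(max_workers=n_threads) as pool:
    local_result = sum(pool.map(partial_sum, chunks))

# Combine per-process results with a message-passing reduction.
global_result = comm.reduce(local_result, op=MPI.SUM, root=0)
if rank == 0:
    print("global result:", global_result)
```

Launched with something like `mpiexec -n 4 python nested_sum.py`, this gives 4 processes of 4 threads each, the same two levels the paper exploits with MPJ Express processes and JOMP threads.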

Relevance:

20.00%

Publisher:

Abstract:

The use of virtualization in high-performance computing (HPC) has been suggested as a means to provide tailored services and added functionality that many users expect from full-featured Linux cluster environments. The use of virtual machines in HPC can offer several benefits, but maintaining performance is a crucial factor. In some instances the performance criteria are placed above the isolation properties. This selective relaxation of isolation for performance is an important characteristic to consider when assessing resilience for HPC environments that employ virtualization. In this paper we consider some of the factors associated with balancing performance and isolation in configurations that employ virtual machines. In this context, we propose a classification of errors based on the concept of “error zones”, as well as a detailed analysis of the trade-offs between resilience and performance based on the level of isolation provided by virtualization solutions. Finally, a set of experiments is performed using different virtualization solutions to illustrate the discussion.

Relevance:

20.00%

Publisher:

Abstract:

Network theory is a prolific and lively field, especially where it meets biology. New concepts from this theory find application in areas where extensive datasets are already available for analysis, without the need to invest money to collect them. The only tools necessary for such an analysis are easily accessible: a computing machine and a good algorithm. As these two tools progress, thanks to technological advances and human effort, ever larger datasets can be analysed. The aim of this paper is twofold. The first is to provide an overview of one of these concepts, which originates at the meeting point between network theory and statistical mechanics: the entropy of a network ensemble. This quantity has been described from different angles in the literature, and our approach tries to be a synthesis of the different points of view. The second part of the work is devoted to presenting a parallel algorithm that can evaluate this quantity over an extensive dataset. Finally, the algorithm is also used to analyse high-throughput data from biology.
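One common formulation of the quantity discussed above, for an ensemble with independent edge probabilities p_ij, is the Shannon entropy S = -sum_{i<j} [p_ij ln p_ij + (1 - p_ij) ln(1 - p_ij)]; the sum over pairs is embarrassingly parallel, which is what makes a parallel evaluation over large datasets natural. The sketch below, in Python with an illustrative random probability matrix, is only meant to make the formula concrete, not to reproduce the paper's algorithm.

```python
# Sketch: Shannon entropy of a network ensemble with independent edge
# probabilities p_ij: S = -sum_{i<j} [p ln p + (1-p) ln(1-p)].
# The probability matrix is random and purely illustrative.
import numpy as np

def ensemble_entropy(P):
    """Entropy of the ensemble defined by an N x N edge-probability matrix P."""
    iu = np.triu_indices_from(P, k=1)        # each undirected pair counted once
    p = np.clip(P[iu], 1e-12, 1.0 - 1e-12)   # guard against log(0)
    return float(-np.sum(p * np.log(p) + (1.0 - p) * np.log(1.0 - p)))

rng = np.random.default_rng(0)
P = rng.uniform(size=(2000, 2000))
P = (P + P.T) / 2.0                          # symmetrise: undirected ensemble
print(ensemble_entropy(P))
```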

Relevance:

20.00%

Publisher:

Abstract:

This thesis work falls within the field of high-dimensional data classification and develops an algorithm based on the Discriminant Analysis method. It classifies samples using the variables taken in pairs, building a network from the pairs whose classification performance is sufficiently high. The algorithm then exploits topological properties of networks (in particular the search for subnetworks and centrality measures of individual nodes) to obtain several signatures (subsets of the initial variables) with optimal classification performance and low dimensionality (of the order of 10^1, at least a factor of 10^3 smaller than the number of starting variables in the problems considered). To do this, the algorithm comprises a network-definition stage and a signature selection and reduction stage, recomputing the classification performance at each step through cross-validation tests (k-fold or leave-one-out). Given the high number of variables involved in the problems considered, of the order of 10^4, the algorithm had to be implemented on a high-performance computer, parallelising the most expensive parts of the C++ code, namely the actual computation of the discriminant and the final sorting of the results. The application studied here concerns high-throughput genetic data on gene expression at the cellular level, a field in which databases frequently consist of a large number of variables (10^4 to 10^5) against a small number of samples (10^1 to 10^2). In the medical and clinical field, determining low-dimensional signatures for the discrimination and classification of samples (e.g. healthy/diseased, responder/non-responder, etc.) is a problem of fundamental importance, for example for devising personalised therapeutic strategies for specific patient subgroups through diagnostic kits for expression-profile analysis applicable on a large scale. The analysis carried out in this thesis on several types of real data shows that the proposed method, also in comparison with other existing methods based or not based on the network approach, delivers excellent performance, producing signatures with high classification performance while keeping the number of variables used for this purpose very small.
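A compressed sketch of the pipeline described above (pairwise discriminant scoring, a network of well-performing pairs, a centrality-based signature) is given below in Python. The synthetic data, the accuracy threshold and the use of scikit-learn and networkx are illustrative stand-ins; the thesis's own implementation is the parallel C++ code mentioned above.

```python
# Sketch of the described pipeline: score variable pairs with a discriminant
# classifier, keep well-performing pairs as network edges, then read a
# candidate signature off node centrality. Data and threshold are toy values.
from itertools import combinations

import networkx as nx
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 100))      # 60 samples, 100 variables (toy scale)
y = rng.integers(0, 2, size=60)     # binary labels, e.g. healthy/diseased

G = nx.Graph()
for i, j in combinations(range(X.shape[1]), 2):
    # Cross-validated accuracy of LDA on this pair of variables (k-fold CV).
    acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, [i, j]], y, cv=5).mean()
    if acc > 0.60:                  # illustrative threshold for "good" pairs
        G.add_edge(i, j, weight=acc)

# Candidate signature: the most central variables of the pair network.
centrality = nx.degree_centrality(G)
signature = sorted(centrality, key=centrality.get, reverse=True)[:10]
print("candidate signature (variable indices):", signature)
```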

Relevance:

20.00%

Publisher:

Abstract:

This thesis work highlights some of the issues raised by exascale machines (systems delivering one exaflops of computing power) and by the evolution of the software that will run on them. It focuses in particular on why such systems need to be developed, since they are indispensable for studying larger scientific and technological problems, with particular attention to materials science, one of the fields that has benefited most from supercomputers, and to one of the most widely used HPC codes in this context: Quantum ESPRESSO. On the software side, the first energy-efficiency measurements of Quantum ESPRESSO on a hybrid architecture are presented, obtained on the EURORA prototype cluster. These are the first such measurements published for materials-science software and will serve as a baseline for future optimisations driven by energy efficiency. On exascale machines, in fact, energy efficiency will be an access requirement, just as code scalability is today. Another very important aspect of exascale machines is the reduction of the number of communications, which lowers the energy cost of a parallel algorithm, since on these new systems moving data will cost more energy than computing it. For this reason, this work presents a strategy, and its implementation, for increasing data locality in one of the most computationally expensive algorithms in Quantum ESPRESSO: the Fast Fourier Transform (FFT). To bring today's software to an exascale machine, the robustness of that software and of its workflows must be tested on cases that stress the machines currently available as much as possible. In this thesis, to test the workflow of Quantum ESPRESSO and WanT (a transport-calculation code), a scientifically relevant system was characterised, consisting of a crystal of PDI-FCN2, a material used to build organic field-effect transistors (OFETs). Finally, an ideal device was simulated, consisting of two gold electrodes with a single organic molecule between them.
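The abstract does not reproduce the FFT strategy itself, but the communication pattern it targets can be made concrete. The sketch below is a generic slab-decomposed 2D FFT in Python with mpi4py, not Quantum ESPRESSO's FFT driver: each process transforms its local rows, and the global transpose (an MPI all-to-all) is the step whose cost data-locality strategies try to reduce. Grid size and data are placeholders.

```python
# Generic slab-decomposed 2D FFT: local 1D FFTs plus a global transpose.
# The Alltoall is the communication hotspot discussed above.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
P, p = comm.Get_size(), comm.Get_rank()

N = 512                      # global grid size (illustrative)
assert N % P == 0
r = N // P                   # rows owned by this process
local = np.random.default_rng(p).random((r, N))

# Step 1: 1D FFT along the locally complete dimension (rows).
stage1 = np.fft.fft(local, axis=1)

# Step 2: global transpose via all-to-all, the expensive communication step.
send = np.ascontiguousarray(stage1.reshape(r, P, r).transpose(1, 0, 2))
recv = np.empty_like(send)
comm.Alltoall(send, recv)
transposed = np.concatenate([recv[q].T for q in range(P)], axis=1)

# Step 3: 1D FFT along the other dimension, now local to each process.
result = np.fft.fft(transposed, axis=1)   # distributed transpose of the 2D FFT
```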

Relevance:

20.00%

Publisher:

Abstract:

High Performance Computing is a technology used by computational clusters to build processing systems able to provide far more powerful services than traditional computers. As a consequence, HPC technology has become a decisive factor in industrial competition and in research. HPC systems keep growing in terms of nodes and cores, and forecasts indicate that the number of nodes will soon reach one million. This kind of architecture also carries very high costs in terms of resource consumption, which become unsustainable for the industrial market. A centralised scheduler cannot manage such a large number of resources while keeping the response time reasonable. This thesis presents a distributed scheduling model based on constraint programming, which formulates the scheduling problem as a set of temporal and resource constraints that must be satisfied. The scheduler tries to optimise resource performance and to approach a desired power-consumption profile, considered optimal. Several different models are analysed, and each of them is tested in various environments.
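The thesis's distributed models are not reproduced in the abstract; the sketch below only illustrates, in Python with Google OR-Tools CP-SAT as a stand-in solver, how a scheduling problem can be posed as temporal and resource constraints of the kind described (jobs as intervals, cores as a cumulative resource, an objective to optimise). Job data and the objective are placeholders.

```python
# Generic constraint-programming scheduling sketch: jobs as interval variables,
# a shared resource (cores) as a cumulative constraint, makespan minimised.
# This is an illustration only, not the thesis's distributed model.
from ortools.sat.python import cp_model

jobs = [(3, 2), (5, 4), (2, 1), (4, 3)]   # (duration, cores requested)
capacity = 6                               # cores available on the node
horizon = sum(d for d, _ in jobs)

model = cp_model.CpModel()
intervals, demands, ends = [], [], []
for k, (dur, cores) in enumerate(jobs):
    start = model.NewIntVar(0, horizon, f"start_{k}")
    end = model.NewIntVar(0, horizon, f"end_{k}")
    intervals.append(model.NewIntervalVar(start, dur, end, f"job_{k}"))
    demands.append(cores)
    ends.append(end)

# Resource constraint: at any time the requested cores must fit in the node.
model.AddCumulative(intervals, demands, capacity)

# Objective: finish all jobs as early as possible.
makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("makespan:", solver.Value(makespan))
```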

Relevance:

20.00%

Publisher:

Abstract:

The evolution and maturation of cloud computing has created an opportunity for the emergence of new cloud applications. High-performance computing, a class of complex problem solving, arises as a new consumer by taking advantage of the cloud's premises and leaving behind expensive datacenter management and difficult grid development. Now in an advanced stage of maturity, today's cloud has shed many of its drawbacks, becoming ever more efficient and widespread. Performance improvements, price drops driven by scale, and customisable on-demand services have attracted marked attention from other markets. HPC, although a very well established field, has traditionally had a narrow deployment frontier, running on dedicated datacenters or large computing grids. The problem with this common placement is mainly the initial cost and the inability to use resources fully, which not all research labs can afford. The main objective of this work was to investigate new technical solutions to allow the deployment of HPC applications on the cloud, with particular emphasis on private on-premise resources, the lower end of the chain, which reduces costs. The work includes many experiments and analyses to identify obstacles and technology limitations. The feasibility of the objective was tested with new modelling, a new architecture, and the migration of several applications. The final application integrates both public and private cloud resources in a simplified way, together with HPC application scheduling, deployment and management. It uses a well-defined user-role strategy based on federated authentication, and a seamless procedure for daily usage that balances low cost and performance.

Relevance:

20.00%

Publisher:

Abstract:

Elasticity is one of the best-known capabilities of cloud computing, and it is largely deployed reactively using thresholds. In this approach, maximum and minimum limits are used to drive resource allocation and deallocation actions, leading to the following problem statements: how can cloud users set the threshold values to enable elasticity in their cloud applications? And what is the impact of the application's load pattern on elasticity? This article tries to answer these questions for iterative high-performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive, PaaS-based elasticity model called AutoElastic and employed it on a private cloud to execute a numerical integration application. We present an analysis of best practices and possible optimisations regarding the elasticity and HPC pair. Considering the results, we observed that the maximum threshold influences the application time more than the minimum one. We conclude that threshold values close to 100% of CPU load lead to weaker reactivity, postponing resource reconfiguration in cases where earlier activation could have reduced the application runtime.
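AutoElastic's controller itself is not shown in the abstract; the following is a generic sketch of the reactive threshold rule it describes (scale out when the observed CPU load crosses the upper threshold, scale in when it falls below the lower one). Thresholds, node limits and the synthetic load samples are placeholders, and the allocation action is reduced to a counter.

```python
# Generic reactive-elasticity rule driven by upper/lower CPU-load thresholds.
# All values and hooks are placeholders, not AutoElastic's implementation.
from dataclasses import dataclass

@dataclass
class ElasticityController:
    upper: float = 0.80   # thresholds near 1.0 react late (weaker reactivity)
    lower: float = 0.30
    min_nodes: int = 1
    max_nodes: int = 16
    nodes: int = 2

    def decide(self, cpu_load: float) -> int:
        """Return the new node count for the observed average CPU load."""
        if cpu_load > self.upper and self.nodes < self.max_nodes:
            self.nodes += 1          # allocate one more node/VM
        elif cpu_load < self.lower and self.nodes > self.min_nodes:
            self.nodes -= 1          # release one node/VM
        return self.nodes

controller = ElasticityController()
for load in [0.55, 0.85, 0.90, 0.88, 0.40, 0.20]:   # synthetic monitoring samples
    print(load, "->", controller.decide(load))
```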

Relevance:

20.00%

Publisher:

Abstract:

The Weather Research and Forecasting (WRF) model is a fully functional modelling system for atmospheric research and weather prediction. WRF was developed with an emphasis on efficiency, portability, maintainability, scalability and productivity, which has allowed it to be deployed successfully on a wide variety of HPC systems. As a result, the size of the problems to which WRF is applied has grown, so understanding how WRF depends on the various elements of a cluster, such as the CPUs, the interconnect and the libraries, is crucial for efficient, high-throughput forecasting. In this context, this manuscript studies the scalability of WRF on an HPC system, considering three parameters: the number of CPUs and nodes, the communications, and the libraries. To this end, two benchmarks are run on a high-performance cluster equipped with a Gigabit Ethernet network; they establish the relationship between scalability and the three parameters studied, and in particular demonstrate WRF's sensitivity to inter-node communication. That factor is essential for maintaining scalability and increasing productivity as nodes are added to the cluster.
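The benchmark figures themselves are not given in the abstract; the snippet below only makes the usual strong-scaling metrics concrete: speedup S(n) = T(1)/T(n) and parallel efficiency E(n) = S(n)/n. The runtimes are placeholders, not measurements from the study.

```python
# Strong-scaling metrics reported by studies like this one: speedup and
# parallel efficiency from per-node-count runtimes. Timings are placeholders.
runtimes = {1: 3600.0, 2: 1900.0, 4: 1050.0, 8: 640.0}   # nodes -> seconds

t1 = runtimes[1]
for nodes, t in sorted(runtimes.items()):
    speedup = t1 / t
    efficiency = speedup / nodes
    print(f"{nodes:2d} nodes: speedup {speedup:4.2f}, efficiency {efficiency:4.2f}")
```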

Relevance:

20.00%

Publisher:

Abstract:

After a decade evolving in the High Performance Computing arena, GPU-equipped supercomputers have conquered the TOP500 and Green500 lists, providing unprecedented levels of computational power and memory bandwidth. This year, major vendors have introduced new accelerators based on 3D memory, such as Intel's Xeon Phi Knights Landing and Nvidia's Pascal architecture. This paper reviews the hardware features of these new HPC accelerators and unveils their potential performance for scientific applications, with an emphasis on the Hybrid Memory Cube (HMC) and High Bandwidth Memory (HBM) used by commercial products according to roadmaps already announced.
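A simple way to see why the move to 3D memory matters is a roofline-style estimate: attainable throughput is the minimum of peak compute and arithmetic intensity times memory bandwidth, so bandwidth-bound kernels gain directly from HMC/HBM. The numbers in the sketch below are placeholders, not vendor specifications.

```python
# Roofline-style estimate: attainable GFLOP/s = min(peak compute,
# arithmetic intensity * memory bandwidth). All figures are illustrative.
def attainable_gflops(ai_flops_per_byte, peak_gflops, bandwidth_gbs):
    return min(peak_gflops, ai_flops_per_byte * bandwidth_gbs)

peak, bw_ddr, bw_3d = 3000.0, 100.0, 500.0     # GFLOP/s and GB/s, placeholders
for ai in (0.5, 2.0, 8.0, 32.0):               # FLOPs per byte moved
    print(ai, attainable_gflops(ai, peak, bw_ddr), attainable_gflops(ai, peak, bw_3d))
```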

Relevance:

10.00%

Publisher:

Abstract:

Texture-based techniques for the visualisation of unsteady vector fields have been applied to a finite-volume model of variably saturated groundwater flow through porous media. This model has been developed by staff in the School of Mathematical Sciences at QUT for the study of salt water intrusion into coastal aquifers. This presentation discusses the implementation and effectiveness of the IBFV (Image-Based Flow Visualisation) algorithm in the context of visualising the groundwater simulation outputs.
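IBFV itself is a GPU texture-advection technique; the sketch below is a simplified CPU variant of its core step (backward-advect the previous frame along the vector field, then blend in noise), written in Python with a toy circular flow field. Grid size, blending factor and the field are illustrative.

```python
# Simplified CPU sketch of the IBFV iteration: advect the previous frame
# along the vector field, then blend in fresh noise. Toy field and sizes.
import numpy as np

rng = np.random.default_rng(0)

def ibfv_step(frame, vx, vy, alpha=0.1):
    """One IBFV-style iteration on a 2D scalar frame."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Backward lookup: which pixel does each pixel come from one step ago?
    src_x = np.clip(xs - vx, 0, w - 1).astype(int)
    src_y = np.clip(ys - vy, 0, h - 1).astype(int)
    advected = frame[src_y, src_x]
    return (1.0 - alpha) * advected + alpha * rng.random((h, w))

# Toy circular flow field on a 128 x 128 grid.
h = w = 128
ys, xs = np.mgrid[0:h, 0:w].astype(float)
vx = -(ys - h / 2) * 0.05
vy = (xs - w / 2) * 0.05
frame = rng.random((h, w))
for _ in range(50):
    frame = ibfv_step(frame, vx, vy)   # streak patterns form along the flow
```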

Relevance:

10.00%

Publisher:

Abstract:

The Queensland University of Technology (QUT) is known for several flagship eResearch centres. It also has a number of mature, centralised research support services that address several areas of eResearch. The university has openly stated its aspiration to be an institution with a strongly embedded eResearch capability and, to this end, has expressed the desire to establish a university-wide eResearch support service. However, articulating this desire is much easier than realising it. During 2008 QUT undertook a major review into eResearch that made recommendations on the development of a university-wide eResearch support service and the building of eResearch capabilities and capacity throughout the university. The results of this review were reported at this conference last year. In 2009, QUT is progressing a second, follow-on project, Building eResearch Support Capability and Capacity. It has been designed to build upon existing strengths in HPC, repositories, data management, and the delivery of integrated skills for eResearch. The purpose of this presentation is to give an update on QUT's journey, one year on from its first major report into eResearch. It will outline how the university is approaching this challenge, the current work being carried out and the strategies being employed. We will also discuss the lessons learned.

Relevance:

10.00%

Publisher:

Abstract:

Queensland University of Technology (QUT) is a large multidisciplinary university located in Brisbane, Queensland, Australia. QUT is increasing its research focus and is developing its research support services. It has adopted a model of collaboration between the Library and High Performance Computing and Research Support (HPC), and more broadly with Information Technology Services (ITS). Research support services provided by the Library include the provision of information resources and discovery services, bibliographic management software, assistance with publishing (publishing strategies, identifying high-impact journals, dealing with publishers and the peer review process), citation analysis and calculating authors' H-index. Research data management services are being developed by the Library and HPC working in collaboration. The HPC group within ITS supports research computing infrastructure, research development and engagement activities, researcher consultation, high-speed computation and data storage systems, 2D/3D (immersive) visualisation tools, parallelisation and optimisation of research codes, statistics/data modelling training and support (both qualitative and quantitative), and support for the university's central Access Grid collaboration facility. Development and engagement activities include participation in research grants and papers, student supervision and internships, and the sponsorship, incubation and adoption of new computing technologies for research. ITS also provides other services that support research, including ICT training, research infrastructure (networking, data storage, federated access and authorisation, virtualisation) and corporate systems for research administration. Seminars and workshops are offered to increase awareness and uptake of new and existing services. A series of online surveys on eResearch practices and skills and a number of focus groups were conducted to better inform the development of research support services. Progress towards the provision of research support is described within the context of organisational frameworks, resourcing, infrastructure, integration, collaboration, change management, engagement, awareness and skills, new services, and leadership. Challenges to be addressed include the need to redeploy existing operational resources toward new research support services, supporting a rapidly growing research profile across the university, the growing need for the use and support of IT in research programs, finding capacity to address the diverse research support needs across the disciplines, operationalising new research support services following their implementation in project mode, embedding new specialist staff roles, cross-skilling Liaison Librarians, and ensuring continued collaboration between stakeholders.

Relevance:

10.00%

Publisher:

Abstract:

Queensland University of Technology (QUT) is a multidisciplinary university in Brisbane, Queensland, Australia, with 40,000 students and 1,700 researchers. Notable eResearch infrastructure includes the QUT ePrints repository, the Microsoft QUT Research Centre, the OAK (Open Access to Knowledge) Law Project, Cambia and leading research institutes.

The Australian Government, via the Australian National Data Service (ANDS), is funding institutions to identify and describe their research datasets, to develop and populate data repositories and collaborative infrastructure, and to seed the Australian Research Data Commons. QUT is currently broadening its range of research support services, including those to support the management of research data, in recognition of the value of these datasets as products of the research process and in order to maximise the potential for reuse. QUT is integrating Library and High Performance Computing (HPC) services to achieve its research support goals.

The Library and HPC released an online survey, using Key Survey, to 1,700 researchers in September 2009. A comprehensive range of eResearch practices and skills was presented for response, grouped into the areas of scholarly communication and open access publishing, using collaborative technologies, data management, data collection and management, and computation and visualization tools. Researchers were asked to rate their skill level on each practice. 254 responses were received over two weeks. Eight focus groups were also held with 35 higher degree research (HDR) students and staff to provide additional qualitative feedback. A similar survey was released to 100 support staff, and 73 responses were received.

Preliminary results from the researcher survey and focus groups indicate a gap between current eResearch practices and the potential for researchers to engage in eResearch practices. Researchers are more likely to seek advice from their peers than from support staff. HDR students are more positive about eResearch practices and are more willing to learn new ways of conducting research. An account of the survey methodology, the results obtained, and proposed strategies to embed eResearch practices and skills across and within the research disciplines will be provided.