947 results for grid computing


Relevance:

60.00%

Publisher:

Abstract:

The evolution of commodity computing led to the possibility of efficiently using interconnected machines to solve computationally intensive tasks that were previously solvable only on expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication cost, heterogeneous environments and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential for effective scheduling. In this paper, we review the evolution of scheduling approaches, focusing on distributed environments. We also evaluate current approaches for process behavior extraction and prediction, aiming to select an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction that considers the chaotic properties of such behavior and automatically detects critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online predictions due to its low computational cost and good precision. (C) 2009 Elsevier B.V. All rights reserved.
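
A minimal sketch of this kind of prediction-driven scheduling, assuming a simple work/speed runtime estimate; the `Machine` class, `predict_runtime` function, and linear load model are illustrative placeholders, not the authors' prediction model.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    speed: float                 # relative processing speed
    queued_work: float = 0.0     # work already assigned (in normalized units)

def predict_runtime(process_size: float, machine: Machine) -> float:
    """Placeholder for a behavior-prediction model: predicted completion time
    as (queued work + new work) divided by machine speed."""
    return (machine.queued_work + process_size) / machine.speed

def schedule(process_size: float, machines: list[Machine]) -> Machine:
    """Assign the process to the machine with the lowest predicted completion time."""
    best = min(machines, key=lambda m: predict_runtime(process_size, m))
    best.queued_work += process_size
    return best

machines = [Machine("node-a", speed=1.0), Machine("node-b", speed=2.0)]
for size in [10.0, 4.0, 7.0]:
    chosen = schedule(size, machines)
    print(f"process of size {size} -> {chosen.name}")
```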

Relevance:

60.00%

Publisher:

Abstract:

In the present work, we investigate the effects of spatial constraints on the efficiency of task execution in systems underlain by geographical complex networks, in which the probability of connection decreases with the distance between nodes. The investigation considers several configurations of the parameters defining network connectivity, and the Barabási-Albert network model is also considered for comparison. The results show that the effect of connectivity is significant only for shorter tasks, that the locality of connection implied by the spatial constraints reduces efficiency, and that the addition of edges can improve execution efficiency, although the improvement becomes small as the locality of the connections increases.
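
A minimal sketch of a geographical network of the kind described, assuming an exponential decay of connection probability with Euclidean distance; the decay constant `lam`, the unit-square layout, and the use of NumPy are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def geographical_network(n: int, lam: float = 0.1, seed: int = 0):
    """Nodes at uniform 2D positions; edge (i, j) created with
    probability exp(-distance / lam), so longer links are rarer."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))                       # coordinates in the unit square
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(pos[i] - pos[j])
            if rng.random() < np.exp(-d / lam):    # connection less likely at larger distance
                edges.append((i, j))
    return pos, edges

pos, edges = geographical_network(200, lam=0.05)
print(f"{len(edges)} edges; mean degree = {2 * len(edges) / 200:.2f}")
```

A smaller `lam` produces a more local network; a Barabási-Albert graph for comparison could be generated with `networkx.barabasi_albert_graph`.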

Relevance:

60.00%

Publisher:

Abstract:

The InteGrade middleware intends to exploit the idle time of computing resources in computer laboratories. In this work we investigate the performance of running parallel applications with communication among processors on the InteGrade grid. As costly communication on a grid can be prohibitive, we explore the so-called systolic, or wavefront, paradigm to design parallel algorithms in which no global communication is used. To evaluate the InteGrade middleware we considered three parallel algorithms that solve the matrix chain product problem, the 0-1 Knapsack problem, and the local sequence alignment problem, respectively. We show that these three applications running under the InteGrade middleware and MPI take only slightly more time than the same applications running on a cluster with LAM-MPI support alone. The results can be considered promising, and the time difference between the two is not substantial. The overhead of the InteGrade middleware is acceptable in view of the benefits it provides to facilitate the use of grid computing, including job submission, checkpointing, security, and job migration. Copyright (C) 2009 John Wiley & Sons, Ltd.
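
A minimal sketch of the wavefront idea applied to one of the three problems: the Smith-Waterman local-alignment matrix filled one anti-diagonal at a time. All cells on an anti-diagonal depend only on the two previous anti-diagonals, so in a parallel setting they could be computed simultaneously with neighbor-only communication. The loop below is serial, and the scoring values are arbitrary illustrative choices, not the paper's parallel implementation.

```python
def smith_waterman_wavefront(a: str, b: str, match=2, mismatch=-1, gap=-1) -> int:
    """Fill the local-alignment DP matrix in wavefront (anti-diagonal) order."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    # Anti-diagonal d holds all cells (i, j) with i + j == d.
    for d in range(2, n + m + 1):
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # cell on anti-diagonal d-2
                          H[i - 1][j] + gap,     # cell on anti-diagonal d-1
                          H[i][j - 1] + gap)     # cell on anti-diagonal d-1
            best = max(best, H[i][j])
    return best

print(smith_waterman_wavefront("GATTACA", "GCATGCU"))
```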

Relevance:

60.00%

Publisher:

Abstract:

In e-Science experiments, it is vital to record the experimental process for later use, such as interpreting results, verifying that the correct process took place, or tracing where data came from. The process that led to some data is called the provenance of that data, and a provenance architecture is the software architecture of a system that provides the functionality needed to record, store and use process documentation. However, there has been little principled analysis of what is actually required of a provenance architecture, so it is impossible to determine the functionality such an architecture would ideally support. In this paper, we present use cases for a provenance architecture drawn from current experiments in biology, chemistry, physics and computer science, and analyse them to determine the technical requirements of a generic, technology- and application-independent architecture. We propose an architecture that meets these requirements and evaluate a preliminary implementation by attempting to realise two of the use cases.
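
A minimal sketch of what recording and using process documentation can look like, assuming a toy in-memory store; the `ProvenanceStore` class, its methods, and the file names are hypothetical illustrations, not the architecture proposed in the paper.

```python
class ProvenanceStore:
    """Toy process-documentation store: data item -> (producing process, inputs)."""
    def __init__(self):
        self.records = {}

    def record(self, output: str, process: str, inputs: list[str]) -> None:
        """Assert that `output` was produced by `process` from `inputs`."""
        self.records[output] = (process, list(inputs))

    def provenance(self, item: str, depth: int = 0) -> None:
        """Print the chain of processes and inputs that led to `item`."""
        if item not in self.records:
            print("  " * depth + f"{item} (primary input)")
            return
        process, inputs = self.records[item]
        print("  " * depth + f"{item} <- {process}")
        for src in inputs:
            self.provenance(src, depth + 1)

store = ProvenanceStore()
store.record("aligned.fasta", "align_sequences", ["raw_reads.fastq", "reference.fasta"])
store.record("variants.vcf", "call_variants", ["aligned.fasta"])
store.provenance("variants.vcf")   # traces where the data came from
```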

Relevance:

60.00%

Publisher:

Abstract:

At the beginning of this decade, the fields of Grid Computing and Mobile Computing have shifted from being topics of emerging interest to areas characterized by a real and qualified demand for products, services and research. This thesis starts from the observation that the problems currently addressed in isolation in research on grid, context-aware and mobile computing all arise when providing a software infrastructure for Pervasive Computing. As the central aspect of its contribution, it proposes an integrated solution to support Pervasive Computing, implemented as a middleware that creates and manages a pervasive environment and runs, within this environment, applications that express follow-me semantics. These applications are, by nature, distributed, mobile and adaptive to the context in which their processing takes place, being available from anywhere, at any time. The proposed middleware, called EXEHDA (Execution Environment for Highly Distributed Applications), is context-adaptive and service-based, and the environment it provides is called ISAMpe. EXEHDA is part of the research efforts of the ISAM Project (Infra-Estrutura de Suporte às Aplicações Móveis Distribuídas), in progress at UFRGS. To cope with the high fluctuation in resource availability inherent to Pervasive Computing, EXEHDA is structured as a minimal kernel plus services loaded on demand. Its main services are organized into subsystems that manage: (a) distributed execution; (b) communication; (c) context recognition; (d) adaptation; (e) pervasive access to resources and services; (f) discovery; and (g) resource management. In EXEHDA, context conditions are proactively monitored, and the execution support allows both the application and the middleware itself to use this information to manage the adaptation of their functional and non-functional aspects. The adaptation mechanism proposed for EXEHDA employs a collaborative strategy between the application and the execution environment, through which the programmer may define individual adaptation policies to govern the behavior of each component of the application software. Applications from both the Grid Computing and Pervasive Computing domains can be programmed and executed under the management of the proposed middleware.
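
A minimal sketch of the "minimal kernel plus services loaded on demand" structure, under the assumption of a simple factory registry; the class names, the `DiscoveryService` example and its methods are hypothetical illustrations, not EXEHDA's actual kernel or API.

```python
class MinimalKernel:
    """Toy kernel: services are registered as factories and created only on first use."""
    def __init__(self):
        self._factories = {}
        self._running = {}

    def register(self, name: str, factory) -> None:
        self._factories[name] = factory

    def service(self, name: str):
        if name not in self._running:                # load on demand
            print(f"loading service '{name}'")
            self._running[name] = self._factories[name]()
        return self._running[name]

class DiscoveryService:
    def find(self, resource: str) -> str:
        return f"best provider for {resource}"

kernel = MinimalKernel()
kernel.register("discovery", DiscoveryService)
print(kernel.service("discovery").find("cpu"))       # first call instantiates the service
print(kernel.service("discovery").find("storage"))   # later calls reuse the running instance
```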

Relevance:

60.00%

Publisher:

Abstract:

Computational grids allow users to share resources of distributed machines, even if those machines belong to different corporations. The scheduling of applications must be performed with performance goals in mind, focusing on choosing which processes gain access to which specific resources. In this article we discuss aspects of application scheduling in grid computing environments. We also present a tool for scheduling simulation, along with test scenarios and results.

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Computer Science - IBILCE

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Agronomy (Energy in Agriculture) - FCA

Relevance:

60.00%

Publisher:

Abstract:

The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. Also, they introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise, yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
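
A minimal sketch of differential encoding against a shared template, using Python's difflib; the template message, message contents, and the copy/literal wire format are illustrative assumptions, not any surveyed toolkit's implementation. Only the literal runs need to cross the wire, while copy runs reference bytes both sides already hold.

```python
import difflib

TEMPLATE = (
    '<soap:Envelope><soap:Body>'
    '<getQuote><symbol>XXXX</symbol><currency>USD</currency></getQuote>'
    '</soap:Body></soap:Envelope>'
)

def encode(template: str, message: str):
    """Represent `message` as copy/literal operations against the shared template."""
    ops = []
    matcher = difflib.SequenceMatcher(a=template, b=message, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))             # bytes both sides already share
        else:
            ops.append(("lit", message[j1:j2]))      # only the differing bytes are sent
    return ops

def decode(template: str, ops) -> str:
    """Rebuild the message from the template and the received operations."""
    return "".join(template[o[1]:o[2]] if o[0] == "copy" else o[1] for o in ops)

msg = TEMPLATE.replace("XXXX", "ACME").replace("USD", "EUR")
ops = encode(TEMPLATE, msg)
assert decode(TEMPLATE, ops) == msg
print("literal bytes sent:", sum(len(o[1]) for o in ops if o[0] == "lit"), "of", len(msg))
```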

Relevance:

60.00%

Publisher:

Abstract:

The IceCube neutrino telescope, located at the South Pole, detects high-energy neutrinos via the weak interaction of charged and neutral currents. The analysis is based on a comparison with Monte Carlo simulations, whose production is coordinated globally. In Mainz, such simulations were realised for the first time within the architecture of the Worldwide LHC Computing Grid (WLCG), which opens up the possibility of distributing Monte Carlo computations to other German computing farms (CEs) with IceCube authorization. Atmospheric muons are recorded at a rate of more than 1000 events per second. A correct interpretation of this dominant signal, which must be reduced by a factor of 10^6 in order to extract the actual neutrino signal, is therefore of great importance. Dedicated simulations with the CORSIKA software environment were carried out to determine the production height of atmospheric muons as a function of energy and angle of incidence. IceCube muon rates were compared with weather data from the European Centre for Medium-Range Weather Forecasts (ECMWF), and correlations between seasonal as well as short-term fluctuations of the atmospheric temperature and the muon rates were demonstrated. In addition, a search for periodic effects in the atmosphere, caused for example by meteorological gravity waves, was carried out on the IceCube data by means of a Fourier analysis. So far, no significant evidence for the existence of gravity waves at the South Pole has been obtained.
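
A minimal sketch of the Fourier-analysis step on a synthetic daily rate series with a seasonal modulation; the mean rate, modulation amplitude, noise level, and series length are arbitrary assumptions, and no IceCube data are used.

```python
import numpy as np

rng = np.random.default_rng(42)
days = np.arange(3 * 365)                                 # three years of daily rates
rate = (1000.0                                            # mean rate (arbitrary units)
        + 80.0 * np.sin(2 * np.pi * days / 365.0)         # seasonal modulation
        + 10.0 * rng.standard_normal(days.size))          # statistical noise

spectrum = np.abs(np.fft.rfft(rate - rate.mean()))        # remove the DC component first
freqs = np.fft.rfftfreq(days.size, d=1.0)                 # frequencies in cycles per day
dominant = freqs[np.argmax(spectrum)]
print(f"dominant period: {1.0 / dominant:.1f} days")      # expected close to 365
```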

Relevance:

60.00%

Publisher:

Abstract:

In particle physics, large computing and storage capacities are required in order to carry out data analysis. The LHC Computing Grid is a global-scale computing infrastructure and, at the same time, a set of services developed by a large community of physicists and computer scientists, distributed across computing centres around the world. This infrastructure proved its value in the analysis of the data collected during Run-1 of the LHC, playing a fundamental role in the discovery of the Higgs boson. Today Cloud computing is emerging as a new computing paradigm for accessing large amounts of resources shared by many scientific communities. Given the technical requirements of LHC Run-2 (and beyond), the scientific community is interested in contributing to the development of Cloud technologies and in verifying whether they can provide a complementary approach, or even a valid alternative, to the existing technological solutions. The aim of this thesis is to test a Cloud infrastructure and compare its performance with the LHC Computing Grid. Chapter 1 contains a general overview of the Standard Model. Chapter 2 describes the LHC accelerator and the experiments operating at it, with particular attention to the CMS experiment. Chapter 3 covers computing in high-energy physics and examines the Grid and Cloud paradigms. Chapter 4, the last of this work, reports the results of my work on the comparative analysis of Grid and Cloud performance.

Relevance:

60.00%

Publisher:

Abstract:

The goal of this thesis is to study the feasibility of an analysis of the associated production ttH of the Higgs boson with two top quarks in the CMS experiment, and to evaluate the functionality and characteristics of the next generation of distributed analysis toolkits at CMS (CRAB version 3) for performing that analysis. In the field of top-quark physics, ttH production is particularly interesting, above all because it represents the only opportunity to study the t-H vertex directly without having to make assumptions about possible contributions from physics beyond the Standard Model. Preparation for this analysis is crucial at this time, before the start of LHC Run-2 in 2015. To be prepared for such a study, the technical implications of carrying out a complete analysis in a distributed computing environment such as the Grid should not be underestimated. For this reason, an analysis of the CRAB3 tool itself (now available as a pre-production version) and a direct performance comparison with CRAB2 are presented and discussed. Suggestions and advice for an analysis team that may eventually be involved in this study are also collected and documented. Chapter 1 introduces high-energy physics at the LHC in the CMS experiment. Chapter 2 discusses the CMS computing model and the Grid distributed analysis system. Chapter 3 briefly presents the physics of the top quark and the Higgs boson. Chapter 4 is devoted to the preparation of the analysis from the point of view of the Grid tools (CRAB3 vs CRAB2). Chapter 5 presents and discusses a feasibility study for an analysis of the ttH channel in terms of selection efficiency.

Relevance:

60.00%

Publisher:

Abstract:

How do developers and designers of a new technology make sense of intended users? The critical groundwork for user-centred technology development begins not with actual users' exposure to the technological artefact but much earlier, with designers' and developers' vision of future users. Thus, anticipating intended users is critical to technology uptake. We conceptualise the anticipation of intended users as a form of prospective sensemaking in technology development. Employing a narrative analytical approach and drawing on four key communities in the development of Grid computing, we reconstruct how each community anticipated the intended Grid user. Based on our findings, we conceptualise user anticipation in terms of two key dimensions, namely the intended possibility to inscribe user needs into the technological artefact and the intended scope of the application domain. In turn, these dimensions allow us to develop an initial typology of intended user concepts that might provide a key building block towards a generic typology of intended users.

Relevance:

60.00%

Publisher:

Abstract:

The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on since global user-service interaction is still an open issue. This paper states one vision with regard to next-generation front-end Web 2.0 technology that will enable integrated access to services, contents and things in the future Internet. In this paper, we illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.