318 results for BLACS MPI


Relevance: 20.00%

Abstract:

The acquisition of a Myocardial Perfusion Imaging (MPI) study is of great importance for the diagnosis of coronary artery disease, since it allows evaluation of which areas of the heart are not being properly perfused, at rest and under stress. This exam is greatly influenced by photon attenuation, which creates image artifacts and affects quantification. Acquiring a Computed Tomography (CT) image makes it possible to obtain an anatomical image that can be used to perform high-quality attenuation correction of the radiopharmaceutical distribution in the MPI image. Studies show that using hybrid imaging to diagnose coronary artery disease increases specificity when evaluating perfusion in the territory of the right coronary artery (RCA). Using an iterative reconstruction algorithm with resolution recovery software, which balances image quality, administered activity and scanning time, we aim to evaluate the influence of attenuation correction on the MPI image and its effect on perfusion quantification and image quality.

Relevance: 20.00%

Abstract:

This note describes ParallelKnoppix, a bootable CD that allows a Linux cluster to be created in very little time. An experienced user can create a cluster ready to execute MPI programs in under 10 minutes. The computers used may be heterogeneous machines of the IA-32 architecture. When the cluster is shut down, all machines except one are left in their original state, and the remaining machine can be restored to its original state by deleting a directory. The system thus provides a means of creating a cluster from non-dedicated computers. An example session is documented.
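
For illustration only (not code from the note), the kind of MPI program such a cluster would be expected to run can be as simple as the following hello-world sketch, assuming a standard mpicc/mpirun toolchain on the nodes:

    /* Minimal MPI "hello world" sketch (illustrative, not from the note).
       Compile with: mpicc hello.c -o hello
       Run with:     mpirun -np 4 ./hello                                  */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }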

Relevance: 20.00%

Abstract:

Fault tolerance has become a major issue for computer and software engineers because the occurrence of faults increases the cost of using a parallel computer. RADIC is a fault tolerance architecture for message-passing systems that is transparent, decentralized, flexible and scalable. This master's thesis presents the methodology used to implement the RADIC architecture over Open MPI, a well-known and widely used message-passing library, while preserving the characteristics of the RADIC architecture. In order to validate the implementation we executed a synthetic ping program; to evaluate its performance we used the NAS Parallel Benchmarks. The results show that the performance of the RADIC architecture depends on the communication pattern of the parallel application being run. Furthermore, our implementation demonstrates that the RADIC architecture can be built on top of an existing message-passing library.
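
The synthetic ping program itself is not given in the abstract; a minimal MPI ping-pong sketch of the kind commonly used for such validation could look like this (the message size and iteration count are illustrative assumptions, and at least two ranks are assumed):

    /* Illustrative MPI ping-pong sketch (not the thesis' actual test program). */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    #define MSG_SIZE 1024   /* assumed message size */
    #define ITERS    100    /* assumed iteration count */

    int main(int argc, char **argv) {
        int rank;
        char buf[MSG_SIZE];
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        memset(buf, 0, MSG_SIZE);

        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {            /* rank 0 pings, rank 1 pongs; others idle */
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        if (rank == 0)
            printf("average round trip: %g s\n", (MPI_Wtime() - t0) / ITERS);
        MPI_Finalize();
        return 0;
    }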

Relevance: 20.00%

Abstract:

High-performance computing is currently used in many scientific fields, where the problems under study are solved by parallel/distributed applications. These applications demand large computing capacity, either because of the complexity of the problems or because of the need to handle real-time situations. The resources and high computational capacity of the parallel systems on which these applications run must therefore be exploited in order to obtain good performance. However, achieving this performance for an application running on a given system is a hard task that requires a high degree of expertise, especially for applications with dynamic behaviour or on heterogeneous systems. In such cases, automatic and dynamic performance improvement of the applications is currently proposed as the best approach to performance analysis. This research work falls within that field of study, and its main goal is to dynamically tune, using MATE (Monitoring, Analysis and Tuning Environment), an MPI application used in high-performance computing that follows a Master/Worker paradigm. The tuning techniques integrated into MATE were developed from the study of a performance model that captures the bottlenecks characteristic of Master/Worker applications: load balancing and the number of workers. Running the chosen application under the dynamic control of MATE and of the implemented tuning strategy made it possible to observe the application adapting its behaviour to the current conditions of the system on which it runs, thereby improving its performance.
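
As a rough illustration only (this is not MATE code or the tuned application), a Master/Worker MPI program of the kind described typically follows the skeleton below; the task content, task count and tags are placeholders, and at least two ranks with size - 1 <= NTASKS are assumed:

    /* Illustrative Master/Worker MPI skeleton (placeholder tasks; not MATE code).
       Assumes at least 2 ranks and size - 1 <= NTASKS. */
    #include <mpi.h>
    #include <stdio.h>

    #define NTASKS   100   /* assumed number of work units */
    #define TAG_WORK 1
    #define TAG_STOP 2

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                          /* master: hand out tasks on demand */
            int next = 0, done = 0, result;
            MPI_Status st;
            for (int w = 1; w < size; w++) {      /* seed every worker once */
                MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                next++;
            }
            while (done < NTASKS) {
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &st);
                done++;
                if (next < NTASKS) {              /* more work: send the next task */
                    MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
                    next++;
                } else {                          /* no work left: tell worker to stop */
                    MPI_Send(&next, 0, MPI_INT, st.MPI_SOURCE, TAG_STOP, MPI_COMM_WORLD);
                }
            }
        } else {                                  /* worker: process tasks until stopped */
            int task, result;
            MPI_Status st;
            while (1) {
                MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == TAG_STOP) break;
                result = task * task;             /* placeholder computation */
                MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }

The load-balancing and number-of-workers bottlenecks mentioned in the abstract show up directly in such a skeleton: the on-demand task handout and the number of seeded workers are exactly the knobs a dynamic tuner can adjust.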

Relevance: 20.00%

Abstract:

This project was started with the aim of extending and consolidating the application of the Integrative Practicum Model (MPI) to most of the practicums of the pedagogy, psychopedagogy and social education degrees of the Faculty of Education Sciences of our university. This model had been introduced in recent years and its effectiveness had already been demonstrated. The MPI is based on the conviction that the acquisition of professional competences is fundamental and that these can be developed during the practicum. In this new practicum model, students work in interdisciplinary project teams; the faculty tutors form a working team with the tutors of the host centres and design the reception and follow-up plans, and the follow-up tutorials and work at the centre are carried out jointly by all the parties involved. The objectives set for the two-year duration of the MQD2006 project were: 1) to remove the technical and administrative obstacles within the faculty that hinder the extension and consolidation of the MPI model; 2) to publicise and raise the profile of the MPI model and the network of centres of excellence among students, faculty staff and the centres themselves; 3) to promote collaboration, knowledge exchange and joint faculty-centre innovation and research projects by putting the faculty working groups and the centres in contact and showing their potential; 4) to structure the new model according to the ECTS model; 5) to analyse and deepen the application of competence-based work in the new model.

Relevance: 20.00%

Abstract:

Through this study, we measure how collective MPI operations behave in virtual and physical clusters, and their impact on application performance. As stated before, we use Weather Research and Forecasting (WRF) simulations as a test case.
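
A minimal sketch of how a single collective operation might be timed on such clusters is shown below; the buffer size, repetition count and the choice of MPI_Allreduce are illustrative assumptions, not details taken from the study:

    /* Illustrative timing of an MPI collective (not the study's benchmark code). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        const int n = 1 << 20;          /* assumed 1M doubles per rank */
        const int reps = 50;            /* assumed repetition count    */
        int rank;
        double *in  = malloc(n * sizeof(double));
        double *out = malloc(n * sizeof(double));

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        for (int i = 0; i < n; i++) in[i] = rank + i;

        MPI_Barrier(MPI_COMM_WORLD);    /* align the ranks before timing */
        double t0 = MPI_Wtime();
        for (int r = 0; r < reps; r++)
            MPI_Allreduce(in, out, n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        double t = (MPI_Wtime() - t0) / reps;

        if (rank == 0)
            printf("mean MPI_Allreduce time: %g s for %d doubles\n", t, n);

        free(in); free(out);
        MPI_Finalize();
        return 0;
    }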

Relevance: 20.00%

Abstract:

Pollard's Rho algorithm is one of the best-known methods for solving the discrete logarithm problem. This project is an implementation of a parallelization of it using MPI on a cluster. The reader will find in this project the parallelization algorithm used, as well as a set of tests and execution results, duly analysed.
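
For readers unfamiliar with the algorithm, the following is a minimal serial sketch of Pollard's Rho for the discrete logarithm with toy parameters; it is not the project's MPI implementation, and the prime, generator and target below are assumptions chosen only for illustration:

    /* Illustrative serial Pollard rho for the discrete log g^x = h (mod p),
       with toy parameters; the project's MPI parallelization is not shown here. */
    #include <stdio.h>
    #include <stdint.h>

    static const int64_t p = 1019, n = 1018;   /* toy prime and group order (assumptions) */
    static const int64_t g = 2, h = 5;         /* find x with g^x = h (mod p)             */

    /* one rho step, keeping y = g^a * h^b (mod p) consistent with (a, b) */
    static void step(int64_t *y, int64_t *a, int64_t *b) {
        switch (*y % 3) {
        case 0: *y = *y * *y % p; *a = *a * 2 % n; *b = *b * 2 % n; break;
        case 1: *y = *y * g % p;  *a = (*a + 1) % n;                break;
        default:*y = *y * h % p;  *b = (*b + 1) % n;                break;
        }
    }

    static int64_t pow_mod(int64_t b, int64_t e, int64_t m) {
        int64_t r = 1; b %= m;
        for (; e > 0; e >>= 1) { if (e & 1) r = r * b % m; b = b * b % m; }
        return r;
    }

    /* modular inverse by extended Euclid (valid only when gcd(a, m) == 1) */
    static int64_t inv_mod(int64_t a, int64_t m) {
        int64_t r0 = m, r1 = (a % m + m) % m, s0 = 0, s1 = 1;
        while (r1) {
            int64_t q = r0 / r1, t;
            t = r0 - q * r1; r0 = r1; r1 = t;
            t = s0 - q * s1; s0 = s1; s1 = t;
        }
        return (s0 % m + m) % m;
    }

    int main(void) {
        int64_t y = 1, a = 0, b = 0;   /* tortoise */
        int64_t Y = 1, A = 0, B = 0;   /* hare     */
        do {                           /* Floyd cycle detection */
            step(&y, &a, &b);
            step(&Y, &A, &B); step(&Y, &A, &B);
        } while (y != Y);
        /* g^a h^b = g^A h^B  =>  x * (B - b) = a - A (mod n) */
        int64_t db = ((B - b) % n + n) % n;
        int64_t da = ((a - A) % n + n) % n;
        if (db == 0) { printf("degenerate collision; restart from another seed\n"); return 1; }
        int64_t x = da * inv_mod(db, n) % n;
        if (pow_mod(g, x, p) == h) printf("x = %lld\n", (long long)x);
        else printf("gcd(B - b, n) > 1 for this run; restart from another seed\n");
        return 0;
    }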

Relevance: 20.00%

Abstract:

The aim of this project is to implement a parallel version of Shanks' algorithm in the MPI environment. Shanks' algorithm solves the discrete logarithm problem, the problem on which the security of the ElGamal public-key cipher is based.
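
As a point of reference only (not the project's parallel code), a serial baby-step giant-step sketch of Shanks' algorithm with toy parameters looks like this; the prime, generator and target are illustrative assumptions, and a linear table lookup is used to keep the sketch short:

    /* Illustrative serial baby-step giant-step (Shanks) for g^x = h (mod p)
       with toy parameters; the project's MPI-parallel version is not shown. */
    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    static int64_t pow_mod(int64_t b, int64_t e, int64_t m) {
        int64_t r = 1; b %= m;
        for (; e > 0; e >>= 1) { if (e & 1) r = r * b % m; b = b * b % m; }
        return r;
    }

    int main(void) {
        const int64_t p = 1019, g = 2, h = 5;     /* toy values (assumptions) */
        const int64_t n = p - 1;                  /* group order              */
        const int64_t m = (int64_t)ceil(sqrt((double)n));

        int64_t baby[64];                         /* g^j for j = 0..m-1 (m = 32 here) */
        int64_t v = 1;
        for (int64_t j = 0; j < m; j++) { baby[j] = v; v = v * g % p; }

        int64_t gm_inv = pow_mod(pow_mod(g, m, p), p - 2, p);  /* (g^m)^-1 via Fermat */
        int64_t y = h % p;
        for (int64_t i = 0; i < m; i++) {         /* giant steps */
            for (int64_t j = 0; j < m; j++)       /* linear lookup keeps the sketch short */
                if (baby[j] == y) {
                    printf("x = %lld\n", (long long)(i * m + j));
                    return 0;
                }
            y = y * gm_inv % p;
        }
        printf("no solution found\n");
        return 0;
    }

A parallel version typically distributes the giant steps (or slices of the baby-step table) across MPI ranks; the sketch above only fixes the serial baseline.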

Relevance: 20.00%

Abstract:

This project presents a brief introduction to cryptography. It explains fundamental principles, such as what cryptography and cryptanalysis are, and the most relevant methods in each case. This serves as the theoretical basis for studying how the ElGamal cryptosystem works, whose security rests on the difficulty of solving the discrete logarithm problem. Once the discrete logarithm problem is clear, an application that solves it is implemented using Pollard's Rho algorithm. This application relies on the NTL big-number library for its implementation. Finally, and as the main objective, the aim is to implement a parallel application that solves the discrete logarithm problem in a multicomputer environment using the proposal of van Oorschot and Wiener.
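
The van Oorschot-Wiener approach mentioned above has each processor run an independent random walk and report only "distinguished" points to a central collector, which waits for two walks to land on the same point. The MPI sketch below illustrates that structure with a toy prime and simplified bookkeeping; it is not the project's NTL-based implementation, and the distinguishing rule, storage bound and shutdown are assumptions:

    /* Illustrative MPI sketch of the distinguished-points idea for parallel
       Pollard rho (toy prime, simplified bookkeeping; not the project's code). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define P 1019LL          /* toy prime (assumption) */
    #define N 1018LL          /* group order            */
    #define G 2LL             /* generator (assumption) */
    #define H 5LL             /* target element         */
    #define DIST_MASK 0xFLL   /* "distinguished" = low 4 bits zero (assumption) */
    #define MAX_STORED 4096

    static void step(long long *y, long long *a, long long *b) {
        switch (*y % 3) {
        case 0: *y = *y * *y % P; *a = *a * 2 % N; *b = *b * 2 % N; break;
        case 1: *y = *y * G % P;  *a = (*a + 1) % N;                break;
        default:*y = *y * H % P;  *b = (*b + 1) % N;                break;
        }
    }

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                      /* collector: wait for a repeated point */
            long long stored[MAX_STORED][3];
            int count = 0;
            while (1) {
                long long triple[3];          /* (y, a, b) with y = G^a * H^b mod P */
                MPI_Recv(triple, 3, MPI_LONG_LONG, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                for (int i = 0; i < count; i++)
                    if (stored[i][0] == triple[0] &&
                        (stored[i][1] != triple[1] || stored[i][2] != triple[2])) {
                        /* collision: solve x*(b1 - b2) = a2 - a1 (mod N) from here */
                        printf("collision on y = %lld: (%lld,%lld) vs (%lld,%lld)\n",
                               triple[0], stored[i][1], stored[i][2],
                               triple[1], triple[2]);
                        MPI_Abort(MPI_COMM_WORLD, 0);  /* crude shutdown for the sketch */
                    }
                if (count < MAX_STORED) {
                    stored[count][0] = triple[0];
                    stored[count][1] = triple[1];
                    stored[count][2] = triple[2];
                    count++;
                }
            }
        } else {                              /* workers: independent walks, random seeds */
            srand(rank);
            long long a = rand() % N, b = rand() % N, y = 1;
            for (long long i = 0; i < a; i++) y = y * G % P;   /* y = G^a       */
            for (long long i = 0; i < b; i++) y = y * H % P;   /* ... * H^b     */
            while (1) {
                step(&y, &a, &b);
                if ((y & DIST_MASK) == 0) {   /* report distinguished points only */
                    long long triple[3] = { y, a, b };
                    MPI_Send(triple, 3, MPI_LONG_LONG, 0, 0, MPI_COMM_WORLD);
                }
            }
        }
        MPI_Finalize();
        return 0;
    }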

Relevance: 20.00%

Abstract:

We present new analytical data of major and trace elements for the geological MPI-DING glasses KL2-G, ML3B-G, StHs6/80-G, GOR128-G, GOR132-G, BM90/21-G, T1-G, and ATHO-G. Different analytical methods were used to obtain a large spectrum of major and trace element data, in particular, EPMA, SIMS, LA-ICPMS, and isotope dilution by TIMS and ICPMS. Altogether, more than 60 qualified geochemical laboratories worldwide contributed to the analyses, allowing us to present new reference and information values and their uncertainties (at the 95% confidence level) for up to 74 elements. We complied with the recommendations for the certification of geological reference materials by the International Association of Geoanalysts (IAG). The reference values were derived from the results of 16 independent techniques, including definitive (isotope dilution) and comparative bulk (e.g., INAA, ICPMS, SSMS) and microanalytical (e.g., LA-ICPMS, SIMS, EPMA) methods. Agreement between two or more independent methods and the use of definitive methods provided traceability to the fullest extent possible. We also present new and recently published data for the isotopic compositions of H, B, Li, O, Ca, Sr, Nd, Hf, and Pb. The results were mainly obtained by high-precision bulk techniques, such as TIMS and MC-ICPMS. In addition, LA-ICPMS and SIMS isotope data of B, Li, and Pb are presented.

Relevance: 20.00%

Abstract:

Recent research in multi-agent systems incorporates fault tolerance concepts, but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The feasibility of the approach is validated by implementing a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
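
The reduction algorithm itself is not described in the abstract; a conventional (non-agent-based) MPI parallel reduction, shown purely for orientation and with placeholder data, is as short as this:

    /* Illustrative MPI parallel reduction (not the paper's agent-based version). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)(rank + 1);   /* each rank's contribution (placeholder) */
        double total = 0.0;

        /* sum all local values onto rank 0 */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %g\n", size, total);

        MPI_Finalize();
        return 0;
    }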

Relevance: 20.00%

Abstract:

Synoptic activity over the Northern Hemisphere is evaluated in ensembles of ECHAM5/MPI-OM1 simulations for recent climate conditions (20C) and for three climate scenarios (following SRES A1B, A2, B1). A close agreement is found between the simulations for present-day climate and the respective results from reanalysis. Significant changes in the winter mid-tropospheric storm tracks are detected in all three scenario simulations. Ensemble mean climate signals are rather similar, with particularly large increases in activity downstream of the Atlantic storm track over Western Europe. The magnitude of this signal depends largely on the imposed change in forcing. However, differences between individual ensemble members may be large. With respect to surface cyclones, the scenario runs produce a reduction in cyclonic track density over the mid-latitudes, even in the areas with increasing mid-tropospheric activity. The largest decrease in track densities occurs at subtropical latitudes, e.g., over the Mediterranean Basin. An increase of cyclone intensities is detected for limited areas (e.g., near Great Britain and the Aleutian Islands) for the A1B and A2 experiments. The changes in synoptic activity are associated with alterations of the Northern Hemisphere circulation and background conditions (blocking frequencies, jet stream). The North Atlantic Oscillation index also shows increased values with enhanced forcing. With respect to the effects of changing synoptic activity, the regional change in cyclone intensities is accompanied by alterations of the extreme surface winds, with increasing values over Great Britain, the North and Baltic Seas, and the areas with vanishing sea ice, and decreases over much of the subtropics.

Relevance: 20.00%

Abstract:

A simple storm loss model is applied to an ensemble of ECHAM5/MPI-OM1 GCM simulations in order to estimate changes of insured loss potentials over Europe in the 21st century. Losses are computed from the daily maximum wind speed at each grid point. The loss model is calibrated using wind data from the ERA40 reanalysis and German loss data. The obtained annual losses for present climate conditions (20C, three realisations) reproduce the statistical features of the historical insurance loss data for Germany. The climate change experiments correspond to the SRES scenarios A1B and A2, and for each of them three realisations are considered. On average, insured loss potentials increase for all analysed European regions at the end of the 21st century. Changes are largest for Germany and France, and smallest for Portugal/Spain. Additionally, the spread between the individual realisations is large, ranging for Germany, for example, from −4% to +43% in terms of mean annual loss. Moreover, almost all simulations show an increasing interannual variability of storm damage. This effect is even more pronounced if no adaptation of building structures to climate change is assumed. The increased loss potentials are linked with enhanced values for the high percentiles of surface wind maxima over Western and Central Europe, which in turn are associated with an enhanced number and increased intensity of extreme cyclones over the British Isles and the North Sea.
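
The abstract does not state the loss formula. Purely as an illustration of how a simple grid-point loss model of this kind can be structured, the sketch below uses a cubic exceedance over a local high-percentile wind threshold; that functional form, the thresholds and the exposure weights are assumptions, not details taken from the abstract:

    /* Illustrative grid-point storm loss aggregation (functional form assumed,
       not taken from the abstract: cubic exceedance over a local wind threshold). */
    #include <stdio.h>

    #define NGRID 4   /* tiny example grid */

    int main(void) {
        /* daily maximum wind speed per grid point (m/s), placeholder values */
        double vmax[NGRID]    = { 22.0, 31.5, 28.0, 35.2 };
        /* local high-percentile threshold (e.g. 98th percentile), placeholder values */
        double vthresh[NGRID] = { 25.0, 25.0, 27.0, 30.0 };
        /* exposure weight per grid point (e.g. insured values), placeholder values */
        double weight[NGRID]  = { 1.0, 2.0, 0.5, 1.5 };

        double loss = 0.0;
        for (int i = 0; i < NGRID; i++) {
            if (vmax[i] > vthresh[i]) {
                double excess = vmax[i] / vthresh[i] - 1.0;   /* relative exceedance */
                loss += weight[i] * excess * excess * excess; /* assumed cubic form  */
            }
        }
        printf("raw loss index = %g (to be calibrated against observed losses)\n", loss);
        return 0;
    }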

Relevance: 20.00%

Abstract:

This paper details a strategy for modifying the source code of a complex model so that the model may be used in a data assimilation context, and gives the standards for implementing a data assimilation code to use such a model. The strategy relies on keeping the model separate from any data assimilation code and coupling the two through the use of Message Passing Interface (MPI) functionality. This strategy limits the changes necessary to the model and as such is rapid to program, at the expense of ultimate performance. The implementation technique is applied in different models with state dimension up to 2.7 × 10^8. The overheads added by using this implementation strategy in a coupled ocean-atmosphere climate model are shown to be an order of magnitude smaller than the addition of correlated stochastic random errors necessary for some nonlinear data assimilation techniques.
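
The paper's interface standard is not reproduced in the abstract; the sketch below only illustrates the general pattern it describes, keeping model and data assimilation code in separate processes that exchange the state through MPI. The even/odd split rule, the state size, the placeholder model step and analysis increment, and the requirement of an even number of ranks are all assumptions:

    /* Illustrative model / data-assimilation coupling through MPI (not the
       paper's interface standard; assumes an even number of ranks). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define STATE_DIM 1000   /* assumed state-vector size for the sketch */

    int main(int argc, char **argv) {
        int world_rank, world_size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);

        /* even world ranks act as "model" processes, odd ranks as "DA" processes */
        int is_model = (world_rank % 2 == 0);
        MPI_Comm role_comm;
        MPI_Comm_split(MPI_COMM_WORLD, is_model, world_rank, &role_comm);

        double *state = malloc(STATE_DIM * sizeof(double));
        for (int i = 0; i < STATE_DIM; i++) state[i] = 0.0;

        if (is_model) {
            /* model side: run a placeholder timestep, then hand the state to its DA partner */
            for (int i = 0; i < STATE_DIM; i++) state[i] += 1.0;
            MPI_Send(state, STATE_DIM, MPI_DOUBLE, world_rank + 1, 0, MPI_COMM_WORLD);
            MPI_Recv(state, STATE_DIM, MPI_DOUBLE, world_rank + 1, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            /* DA side: receive forecast state, apply a placeholder increment, return analysis */
            MPI_Recv(state, STATE_DIM, MPI_DOUBLE, world_rank - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            for (int i = 0; i < STATE_DIM; i++) state[i] += 0.1;
            MPI_Send(state, STATE_DIM, MPI_DOUBLE, world_rank - 1, 1, MPI_COMM_WORLD);
        }

        free(state);
        MPI_Comm_free(&role_comm);
        MPI_Finalize();
        return 0;
    }

Keeping the model processes unaware of anything beyond these send/receive points is what limits the source-code changes to the model, as the abstract notes, at the cost of the extra communication overhead.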