336 results for MPI
Abstract:
Recent research in multi-agent systems incorporates fault-tolerance concepts but does not explore their extension and implementation for large-scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents communicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The feasibility of the approach is validated by implementing a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
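The parallel reduction mentioned in this abstract is, at its core, a binary-tree combine across processor ranks. The following is a minimal sketch of that pattern, with the ranks simulated sequentially in plain Python rather than as actual MPI processes (the function name and the sequential simulation are illustrative assumptions, not the paper's code):

```python
# Sketch of a binary-tree parallel reduction, the pattern behind MPI_Reduce.
# Ranks are simulated sequentially here; on a real cluster each slot in
# `values` would live on a separate MPI process.

def tree_reduce(values, op):
    """Combine per-rank values pairwise, halving the active ranks each step."""
    vals = list(values)
    stride = 1
    while stride < len(vals):
        # Each active rank receives from its partner (rank + stride) and combines.
        for rank in range(0, len(vals) - stride, 2 * stride):
            vals[rank] = op(vals[rank], vals[rank + stride])
        stride *= 2
    return vals[0]  # rank 0 holds the final result

# Example: sum the partial results of 8 simulated ranks
partials = [1, 2, 3, 4, 5, 6, 7, 8]
total = tree_reduce(partials, lambda a, b: a + b)  # 36
```

With p ranks the combine completes in ceil(log2 p) steps, which is why a tree reduction scales better than collecting every partial result at a single rank.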
Abstract:
Synoptic activity over the Northern Hemisphere is evaluated in ensembles of ECHAM5/MPI-OM1 simulations for recent climate conditions (20C) and for three climate scenarios (following SRES A1B, A2, B1). A close agreement is found between the simulations for present day climate and the respective results from reanalysis. Significant changes in the winter mid-tropospheric storm tracks are detected in all three scenario simulations. Ensemble mean climate signals are rather similar, with particularly large activity increases downstream of the Atlantic storm track over Western Europe. The magnitude of this signal is largely dependent on the imposed change in forcing. However, differences between individual ensemble members may be large. With respect to the surface cyclones, the scenario runs produce a reduction in cyclonic track density over the mid-latitudes, even in the areas with increasing mid-tropospheric activity. The largest decrease in track densities occurs at subtropical latitudes, e.g., over the Mediterranean Basin. An increase of cyclone intensities is detected for limited areas (e.g., near Great Britain and Aleutian Isles) for the A1B and A2 experiments. The changes in synoptic activity are associated with alterations of the Northern Hemisphere circulation and background conditions (blocking frequencies, jet stream). The North Atlantic Oscillation index also shows increased values with enhanced forcing. With respect to the effects of changing synoptic activity, the regional change in cyclone intensities is accompanied by alterations of the extreme surface winds, with increasing values over Great Britain, North and Baltic Seas, as well as the areas with vanishing sea ice, and decreases over much of the subtropics.
Abstract:
A simple storm loss model is applied to an ensemble of ECHAM5/MPI-OM1 GCM simulations in order to estimate changes of insured loss potentials over Europe in the 21st century. Losses are computed based on the daily maximum wind speed for each grid point. The calibration of the loss model is performed using wind data from the ERA40-Reanalysis and German loss data. The obtained annual losses for the present climate conditions (20C, three realisations) reproduce the statistical features of the historical insurance loss data for Germany. The climate change experiments correspond to the SRES-Scenarios A1B and A2, and for each of them three realisations are considered. On average, insured loss potentials increase for all analysed European regions at the end of the 21st century. Changes are largest for Germany and France, and lowest for Portugal/Spain. Additionally, the spread between the single realisations is large, ranging e.g. for Germany from −4% to +43% in terms of mean annual loss. Moreover, almost all simulations show an increasing interannual variability of storm damage. This assessment is even more pronounced if no adaptation of building structure to climate change is considered. The increased loss potentials are linked with enhanced values for the high percentiles of surface wind maxima over Western and Central Europe, which in turn are associated with an enhanced number and increased intensity of extreme cyclones over the British Isles and the North Sea.
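Storm loss models of the kind described above typically make the grid-point loss scale with the exceedance of the daily maximum wind over a local high percentile, raised to the third power. The sketch below uses that common cubic-excess formulation; the percentile choice (98th), the weighting, and the calibration constant are illustrative assumptions, not the exact calibrated model of the paper:

```python
# Hedged sketch of a simple storm loss model: daily loss at a grid point
# scales with the cubed relative exceedance of the daily maximum wind over
# a local high percentile (a common formulation in storm-loss modelling).
# The threshold, weight, and calibration constant are illustrative.

def daily_loss(v_max, v98, weight=1.0, calib=1.0):
    """Loss contribution of one grid point for one day."""
    if v_max <= v98:
        return 0.0  # winds below the local threshold cause no modelled loss
    return calib * weight * (v_max / v98 - 1.0) ** 3

def domain_loss(winds, thresholds, weights):
    """Sum grid-point losses over a domain for one day."""
    return sum(daily_loss(v, t, w) for v, t, w in zip(winds, thresholds, weights))

# Example: three grid points; only the first two exceed their threshold
loss = domain_loss([30.0, 28.0, 15.0], [25.0, 25.0, 25.0], [1.0, 1.0, 1.0])
```

Because the loss depends on the wind relative to the *local* climatological percentile, the model can be calibrated against national insurance data (as done here with ERA40 winds and German loss records) without absolute wind speeds having to be bias-free.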
Abstract:
This paper details a strategy for modifying the source code of a complex model so that the model may be used in a data assimilation context, and gives the standards for implementing a data assimilation code to use such a model. The strategy relies on keeping the model separate from any data assimilation code, and coupling the two through the use of Message Passing Interface (MPI) functionality. This strategy limits the changes necessary to the model and as such is rapid to program, at the expense of ultimate performance. The implementation technique is applied in different models with state dimension up to $2.7 \times 10^8$. The overheads added by using this implementation strategy in a coupled ocean-atmosphere climate model are shown to be an order of magnitude smaller than the addition of correlated stochastic random errors necessary for some nonlinear data assimilation techniques.
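The coupling strategy described here keeps the model and the data assimilation (DA) code as separate programs that exchange the state vector only through message passing. The sketch below mimics that separation with two threads and queues standing in for the MPI send/receive calls; the trivial "model step" and "analysis" are placeholders, purely to illustrate the structure, not the paper's interface:

```python
# Sketch of model/DA separation via message passing: the model time-steps
# and, at each assimilation point, sends its state and receives an analysis
# state back. Queues stand in for MPI_Send/MPI_Recv between processes.
import threading
import queue

def model(to_da, from_da, state, n_steps):
    """Model loop: step forward, then exchange state with the DA side."""
    for _ in range(n_steps):
        state = [x + 1.0 for x in state]  # stand-in for one model time step
        to_da.put(state)                  # analogous to MPI_Send of the state
        state = from_da.get()             # analogous to MPI_Recv of the analysis
    to_da.put(None)                       # signal completion to the DA side
    return state

def assimilate(to_da, from_da):
    """DA loop: receive a forecast state, send back an analysis state."""
    while True:
        state = to_da.get()
        if state is None:
            break
        from_da.put([0.9 * x for x in state])  # stand-in analysis update

to_da, from_da = queue.Queue(), queue.Queue()
da = threading.Thread(target=assimilate, args=(to_da, from_da))
da.start()
final = model(to_da, from_da, [0.0, 0.0], n_steps=3)
da.join()
```

The point of the structure is that the model's source needs only the send/receive hooks added, while all assimilation logic lives in the separate DA code, which matches the paper's trade-off of rapid programming over ultimate performance.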
Abstract:
The new Max-Planck-Institute Earth System Model (MPI-ESM) is used in the Coupled Model Intercomparison Project phase 5 (CMIP5) in a series of climate change experiments for either idealized CO2-only forcing or forcings based on observations and the Representative Concentration Pathway (RCP) scenarios. The paper gives an overview of the model configurations, experiments, related forcings, and initialization procedures, and presents results for the simulated changes in climate and carbon cycle. It is found that the climate feedback depends on the global warming and possibly the forcing history. The global warming from climatological 1850 conditions to 2080–2100 ranges from 1.5°C under the RCP2.6 scenario to 4.4°C under the RCP8.5 scenario. Over this range, the patterns of temperature and precipitation change are nearly independent of the global warming. The model shows a tendency to reduce the ocean heat uptake efficiency toward a warmer climate, and hence an acceleration of warming in the later years. The precipitation sensitivity can be as high as 2.5% K−1 if the CO2 concentration is constant, or as small as 1.6% K−1 if the CO2 concentration is increasing. The oceanic uptake of anthropogenic carbon increases over time in all scenarios, being smallest in the experiment forced by RCP2.6 and largest in that for RCP8.5. The land also serves as a net carbon sink in all scenarios, predominantly in boreal regions. The strong tropical carbon sources found in the RCP2.6 and RCP8.5 experiments are almost absent in the RCP4.5 experiment, which can be explained by reforestation in the RCP4.5 scenario.
Abstract:
Recent years have seen increasing acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A point in common between distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of the programming languages used to build parallel applications, such as C, C++, and Fortran. A fundamental aspect of developing parallel applications is the analysis of their performance. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the number of processors or to the size of the problem instance. Establishing models or mechanisms for this analysis can be quite complicated, given the parameters and degrees of freedom involved in implementing a parallel application. A common alternative has been the use of tools for collecting and visualizing performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualization requires identifying and collecting data on the execution of the application, a stage called instrumentation.
This work initially presents a study of the main techniques used to collect performance data, followed by a detailed analysis of the main available tools that can be used on Beowulf-type cluster architectures running Linux on the x86 platform, with MPI (Message Passing Interface) communication libraries such as LAM and MPICH. This analysis is validated on parallel applications that deal with the training of perceptron-type neural networks using backpropagation. The conclusions obtained show the potential and convenience of the analyzed tools.
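The performance metrics this abstract lists (execution time, efficiency, scalability) reduce to simple functions of measured run times. A minimal sketch of the standard definitions (the function names are illustrative):

```python
# Standard parallel performance metrics computed from measured run times.

def speedup(t_serial, t_parallel):
    """Ratio of serial to parallel execution time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Fraction of the ideal (linear) speedup achieved per processor."""
    return speedup(t_serial, t_parallel) / n_procs

# Example: a job taking 100 s serially and 30 s on 4 processors
s = speedup(100.0, 30.0)        # ~3.33x faster
e = efficiency(100.0, 30.0, 4)  # ~0.83, i.e. 83% of ideal
```

Scalability is then assessed by tracking how these values evolve as the processor count or the problem size grows, which is exactly what the instrumentation and visualization tools discussed above help measure.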
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This work presents a study of standards and guidelines for parallel programming in distributed systems, using the MPI standard and the PETSc toolkit, and analyzes their performance on certain mathematical operations involving matrices. These concepts are used to develop applications that solve problems involving Principal Component Analysis (PCA), which are executed on a Beowulf cluster. The results are compared to those of an analogous application with sequential execution, and it is then analyzed whether the parallel application achieved any performance gain.
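The core computation behind a PCA application like the one described is: center the data, form the covariance matrix, and extract the leading principal component. The sketch below does this in plain sequential Python with power iteration; it is a stand-in for the kind of sequential baseline the cluster results are compared against, not the MPI/PETSc implementation itself:

```python
# Sequential PCA core: covariance matrix plus power iteration for the
# leading principal component. A parallel MPI/PETSc version would
# distribute the matrix rows; this is the plain sequential baseline.

def covariance(data):
    """Sample covariance matrix of row-vector observations."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for row in data:
        c = [row[j] - means[j] for j in range(d)]
        for i in range(d):
            for j in range(d):
                cov[i][j] += c[i] * c[j] / (n - 1)
    return cov

def leading_component(cov, iters=100):
    """Leading eigenvector (first principal axis) via power iteration."""
    d = len(cov)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Example: points spread mainly along the x-axis, so PC1 should point along x
pc1 = leading_component(covariance([[0, 0], [1, 0.1], [2, 0.2], [3, 0.1]]))
```

The covariance accumulation and the matrix-vector product inside the power iteration are the natural places to parallelize with MPI, since each process can handle a block of rows and combine partial sums with a reduction.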
Abstract:
Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development with the emergence of multi-core processor technology, bringing a higher level of processing. The use of many-core technology has boosted the computing power provided by clusters of workstations or SMPs, delivering large computational power at an affordable cost using solely commodity components. Different implementations of message-passing libraries and system software (including operating systems) are installed in such cluster and multi-cluster computing systems. To guarantee correct execution of a message-passing parallel application in a computing environment other than the one in which it was originally developed, a review of the application code is needed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application on a group of computing nodes belonging to different clusters or multi-clusters (computing systems that may be running different operating systems and MPI implementations), interconnected with public or private IP addresses, and responding interchangeably to user execution requests. Experimental results demonstrate the feasibility of this proposed strategy and its effectiveness through the execution of benchmark parallel applications.
Abstract:
Cardiovascular diseases are among the leading causes of premature death worldwide. Pathophysiologically, they involve a thickening of the vessel wall through the deposition of arteriosclerotic plaques (arteriosclerosis). Molecular imaging with the nuclear medicine techniques SPECT and PET aims to visualize underperfused myocardial areas so that the course of the disease can be mitigated by early therapy. The SPECT perfusion tracers [99mTc]Sestamibi and [99mTc]Tetrofosmin are used routinely. However, the PET tracers [13N]NH3 and [15O]H2O are considered the gold standard for quantifying myocardial perfusion, since they allow an absolute determination of blood flow in mL/min/g both at rest and under stress. In 2007, [18F]Flurpiridaz was introduced as a new myocardial tracer; its binding to MC I showed selective myocardial uptake in rats, rabbits, and primates as well as in first clinical human studies. To ensure availability of the radionuclide from a radionuclide generator, macrocyclic 68Ga myocardial perfusion tracers based on pyridaben were to be synthesized and evaluated. The new tracer class consisted of a macrocyclic chelator, a linker, and the insecticide pyridaben as targeting vector. Structure-affinity relationships could be established by varying the linker (length and polarity), the complex charge (neutral and singly positively charged), and the chelator (DOTA, NODAGA, DO2A), as well as through a multivalency approach (monomer and dimer). In total, 16 new compounds were synthesized. Their 68Ga labeling was optimized with respect to pH, temperature, precursor amount, and reaction time. The DOTA/NODAGA pyridaben derivatives could be labeled with low amounts of substance (6 - 25 nmol) in 0.1 M HEPES buffer (pH 3.4) at 95°C within 15 min with yields > 95%.
For the DO2A-based compounds, microwave-assisted labeling (300 W, 1 min, 150°C) was required to achieve comparable yields. In vitro stability tests of all compounds were carried out in EtOH, NaCl, and human serum; no instabilities were detected within 80 min at 37°C. Using the "shake flask" method, the lipophilicities (log D = -1.90 - 1.91) were determined from the partition coefficient in octanol/PBS buffer. The cold reference substances were prepared with GaCl3 and tested in vitro for their affinity to MC I in order to determine IC50 values (34.1 µM - 1 µM). In vivo evaluations were performed with the two most potent compounds, [68Ga]VN160.MZ and [68Ga]VN167.MZ, by µ-PET imaging (n=3) in healthy rats over 60 min. To determine the organ distribution, ex vivo biodistribution studies (n=3) were carried out. Both the µ-PET examinations and the biodistribution studies showed that [68Ga]VN167.MZ did exhibit cardiac uptake, but that this uptake was rather perfusion-dependent. Retention of the tracer in the myocardium was observed only to a small extent.