949 results for Parallel programming (computer)
Abstract:
Increased accessibility to high-performance computing resources has created a demand for user support through performance-evaluation tools such as iSPD (iconic Simulator for Parallel and Distributed systems), a simulator based on iconic modelling for distributed environments such as computing grids. It was developed to make it easier for general users to create their grid models, including allocation and scheduling algorithms. This paper describes how schedulers are managed by iSPD and how users can easily adopt the scheduling policy that improves the system being simulated. A thorough description of iSPD is given, detailing its scheduler manager. Comparisons between iSPD and SimGrid simulations, including runs of the simulated environment on a real cluster, are also presented.
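As a hedged illustration of the kind of pluggable scheduling policy such a simulator's scheduler manager works with, here is a generic sketch, not iSPD's actual API; Machine and schedule_min_completion are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    speed: float               # relative processing speed
    finish_time: float = 0.0   # when the machine becomes free again

def schedule_min_completion(tasks, machines):
    """Greedy policy: assign each task to the machine that would
    complete it earliest, given the machine's speed and current load."""
    log = []
    for size in tasks:
        m = min(machines, key=lambda m: m.finish_time + size / m.speed)
        m.finish_time += size / m.speed
        log.append((m.name, size, m.finish_time))
    return log

# Toy grid: one slow and one fast node.
grid = [Machine("node-a", speed=1.0), Machine("node-b", speed=2.0)]
for entry in schedule_min_completion([4.0, 2.0, 6.0, 1.0], grid):
    print(entry)
```

Swapping the policy means replacing only this one function, which is the kind of isolation between model and scheduling algorithm the abstract describes.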
Abstract:
Simulating large and complex systems, such as computing grids, is a difficult task. Current simulators, despite providing accurate results, are significantly hard to use. They usually demand strong programming knowledge, which is not typical of today's users of grids and high-performance computing. This need for computing expertise prevents these users from simulating how the environment will respond to their applications, which may imply a large loss of efficiency, wasting precious computational resources. In this paper we introduce iSPD, the iconic Simulator of Parallel and Distributed Systems, in which grid models are produced through an iconic interface. We describe the simulator and its intermediate model languages. The results presented here provide insight into its ease of use and accuracy.
Abstract:
Research on the micro-structural characterization of metal-matrix composites uses X-ray computed tomography to collect information about the interior features of the samples, in order to elucidate their exhibited properties. The raw tomographic data needs several steps of computational processing to eliminate noise and interference. Our experience with a program (Tritom) that handles these questions has shown that in some cases the processing steps take a very long time, and that it is not easy for a Materials Science specialist to interact with Tritom to define the most adequate parameter values and the proper sequence of the available processing steps. To ease the use of Tritom, a system was built that addresses the aspects described above and is based on the OpenDX visualization system. The OpenDX visualization facilities are a great benefit to Tritom. The visual programming environment of OpenDX allows easy definition of a sequence of processing steps, thus fulfilling the requirement of ease of use by non-specialists in Computer Science. The possibility of incorporating external modules in a visual OpenDX program also allows researchers to tackle the long execution time of some processing steps. The most time-consuming processing steps of Tritom have been parallelized on two different types of hardware architecture (message-passing and shared-memory); the corresponding parallel programs can easily be incorporated in a sequence of processing steps defined in an OpenDX program. The benefits of our system are illustrated through an example in which the tool is applied in the study of the sensitivity to crushing, and the implications thereof, of the reinforcements used in a functionally graded syntactic metallic foam.
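As a hedged sketch of the shared-memory flavor of such a parallelization, here is a toy noise-reduction step applied to tomographic slices in parallel; it does not reflect Tritom's actual code, and all names are illustrative.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def denoise_slice(slc):
    """Toy noise reduction: 3x3 mean filter on one tomographic slice."""
    padded = np.pad(slc, 1, mode="edge")
    out = np.zeros_like(slc, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + slc.shape[0],
                          1 + dx : 1 + dx + slc.shape[1]]
    return out / 9.0

def denoise_volume(volume, workers=4):
    """Process the slices of a 3D volume in parallel worker processes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return np.stack(list(pool.map(denoise_slice, volume)))

if __name__ == "__main__":
    # Toy usage: a random 8-slice volume standing in for real CT data.
    vol = np.random.default_rng(0).random((8, 64, 64))
    clean = denoise_volume(vol)
```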
Abstract:
Consider the NP-hard problem of, given a simple graph G, finding a series-parallel subgraph of G with the maximum number of edges. The algorithm that, given a connected graph G, outputs a spanning tree of G is a 1/2-approximation. Indeed, if n is the number of vertices of G, any spanning tree of G has n-1 edges and any series-parallel graph on n vertices has at most 2n-3 edges. We present a 7/12-approximation for this problem and results showing the limits of our approach.
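The counting argument behind the 1/2 ratio can be made explicit (this only restates the abstract's reasoning, not the paper's 7/12 analysis):

```latex
\[
\frac{|E(\text{spanning tree})|}{|E(\text{OPT})|}
\;\ge\; \frac{n-1}{2n-3} \;\ge\; \frac{1}{2}
\qquad (n \ge 2),
\]
\text{since } 2(n-1) = 2n-2 \ge 2n-3.
```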
Abstract:
Parallel kinematic structures are considered very adequate architectures for positioning and orienting the tools of robotic mechanisms. However, developing dynamic models for this kind of system is sometimes a difficult task. In fact, the direct application of traditional methods of robotics for modelling and analysing such systems usually does not lead to efficient and systematic algorithms. This work addresses this issue: it presents a modular approach to generating the dynamic model and shows how, through some convenient modifications, these methods can be made more applicable to parallel structures as well. Kane's formulation for obtaining the dynamic equations is shown to be one of the easiest ways to deal with redundant coordinates and kinematic constraints, so that a suitable choice of a set of coordinates allows the remainder of the modelling procedure to be computer-aided. The advantages of this approach are discussed in the modelling of a 3-DOF parallel asymmetric mechanism.
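For reference, a hedged sketch of the formulation the abstract invokes: in Kane's method, for each independent generalized speed u_r the generalized active and inertia forces balance (particle form shown for brevity; rigid bodies add analogous torque terms):

```latex
\[
F_r + F_r^{*} = 0, \qquad
F_r = \sum_i \frac{\partial \mathbf{v}_i}{\partial u_r} \cdot \mathbf{R}_i, \qquad
F_r^{*} = -\sum_i m_i \, \frac{\partial \mathbf{v}_i}{\partial u_r} \cdot \mathbf{a}_i ,
\]
```

where v_i, a_i, R_i and m_i are the velocity, acceleration, resultant applied force, and mass of particle i; the partial velocities ∂v_i/∂u_r are what make redundant coordinates and constraints tractable.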
Abstract:
A new parallel algorithm for simultaneous untangling and smoothing of tetrahedral meshes is proposed in this paper. We provide a detailed analysis of its performance on shared-memory many-core computer architectures. This performance analysis includes the evaluation of execution time, parallel scalability, load balancing, and parallelism bottlenecks. Additionally, we compare the impact of three previously published graph coloring procedures on the performance of our parallel algorithm. We use six benchmark meshes with a wide range of sizes. Using these experimental data sets, we describe the behavior of the parallel algorithm for different data sizes. We demonstrate that this algorithm is highly scalable when it runs on two different high-performance many-core computers with up to 128 processors...
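The coloring-based parallel pattern the abstract evaluates can be sketched as follows: color the vertex-adjacency graph so that same-colored vertices share no edge, then update each color class concurrently. This toy (greedy coloring plus centroid smoothing) only illustrates the pattern, not the authors' untangling algorithm; all names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def greedy_coloring(adjacency):
    """Give each vertex the smallest color not used by its neighbors."""
    colors = {}
    for v in adjacency:
        taken = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in taken:
            c += 1
        colors[v] = c
    return colors

def smooth_vertices(vertices, coords, adjacency):
    """Move each listed vertex to the centroid of its neighbors.

    All vertices passed in share one color, hence are pairwise
    non-adjacent: they write distinct entries and only read vertices
    of other colors, so concurrent execution is race-free.
    """
    for v in vertices:
        if adjacency[v]:
            coords[v] = np.mean([coords[u] for u in adjacency[v]], axis=0)

def parallel_smooth(coords, adjacency, sweeps=5, workers=4):
    colors = greedy_coloring(adjacency)
    classes = {}
    for v, c in colors.items():
        classes.setdefault(c, []).append(v)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(sweeps):
            # Color classes run one after another; vertices inside a
            # class are split into chunks that run concurrently.
            for verts in classes.values():
                chunks = [verts[i::workers] for i in range(workers)]
                done = [pool.submit(smooth_vertices, ch, coords, adjacency)
                        for ch in chunks]
                for f in done:
                    f.result()
    return coords
```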
Abstract:
In this work the phase transitions of a single polymer chain were investigated using the Monte Carlo method. The bond fluctuation model was used for the simulation, with an attractive square-well potential acting between all monomers of the polymer chain. Three kinds of moves were introduced to relax the polymer chain properly: the hop move, the reptation move, and the pivot move. A hierarchical search algorithm was introduced to check the excluded-volume interaction and to determine the number of neighbors of each monomer. The density of states of the model was determined by means of the Wang-Landau algorithm, and from it thermodynamic quantities were computed in order to study the phase transitions of the single polymer chain. We first investigated a free polymer chain. The coil-globule transition appears as a continuous transition, in which the coil collapses to a globule. The globule-globule transition at lower temperatures is a first-order phase transition, with coexistence of the liquid globule and the solid globule, the latter having a crystalline structure. In the thermodynamic limit the transition temperatures are identical, which corresponds to a vanishing of the liquid phase. In two dimensions the model shows a continuous coil-globule transition with a locally ordered structure. We further investigated a polymer mushroom, i.e. a grafted polymer chain, between two repulsive walls at distance D. The phase behavior of the polymer chain shows a dimensional crossover. Both the grafting and the confinement promote the coil-globule transition, with a symmetry breaking, since the extension of the polymer chain parallel to the walls shrinks faster than the extension perpendicular to the walls. The confinement hinders the globule-globule transition, whereas the grafting seems to have no influence. The transition temperatures in the thermodynamic limit are again identical within the error bars. The specific heat of the same model, but with a repulsive square-well potential, shows a Schottky anomaly, typical of a two-level system.
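The Wang-Landau step mentioned above estimates the density of states g(E) by penalizing revisited energies until the visit histogram is flat. Below is a minimal generic sketch, assuming a user-supplied energy function and move proposal (both hypothetical here, standing in for the bond fluctuation model and its moves).

```python
import math
import random

def wang_landau(energy, propose, x0, e_levels, flatness=0.8, ln_f_final=1e-6):
    """Estimate ln g(E) with the Wang-Landau algorithm.

    energy(x)  -> energy of state x (must land in e_levels)
    propose(x) -> candidate next state
    """
    ln_g = {e: 0.0 for e in e_levels}   # running estimate of ln g(E)
    hist = {e: 0 for e in e_levels}     # visit histogram
    ln_f = 1.0                          # modification factor, shrinks to 0
    x, e = x0, energy(x0)
    while ln_f > ln_f_final:
        y = propose(x)
        e_new = energy(y)
        # Accept with min(1, g(E_old) / g(E_new)) to flatten the histogram.
        if math.log(random.random()) < ln_g[e] - ln_g[e_new]:
            x, e = y, e_new
        ln_g[e] += ln_f                 # penalize the current energy level
        hist[e] += 1
        mean = sum(hist.values()) / len(hist)
        if min(hist.values()) > flatness * mean:   # histogram "flat enough"
            hist = {k: 0 for k in hist}
            ln_f /= 2.0                            # refine and continue
    return ln_g
```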
Abstract:
The term "Brain Imaging" identies a set of techniques to analyze the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are largely used in the study of brain activity. In addition to clinical usage, analysis of brain activity is gaining popularity in others recent fields, i.e. Brain Computer Interfaces (BCI) and the study of cognitive processes. In this context, usage of classical solutions (e.g. f MRI, PET-CT) could be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons alternative low cost techniques are object of research, typically based on simple recording hardware and on intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where electric potential at the patient's scalp is recorded by high impedance electrodes. In EEG potentials are directly generated from neuronal activity, while in EIT by the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from measurements, EIT and EEG relies on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeoff between physical accuracy and technical feasibility, which currently severely limits the capabilities of these techniques. Moreover elaboration of data recorded requires usage of regularization techniques computationally intensive, which influences the application with heavy temporal constraints (such as BCI). This work focuses on the parallel implementation of a work-flow for EEG and EIT data processing. The resulting software is accelerated using multi-core GPUs, in order to provide solution in reasonable times and address requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
Abstract:
Interactive theorem provers are tools designed for the certification of formal proofs developed by means of man-machine collaboration. Formal proofs obtained in this way cover a large variety of logical theories, ranging from the branches of mainstream mathematics to the field of software verification. The border between these two worlds is marked by results in theoretical computer science and proofs related to the metatheory of programming languages. This last field, which is an obvious application of interactive theorem proving, nonetheless poses a serious challenge to the users of such tools, due both to the particularly structured way in which these proofs are constructed, and to difficulties related to the management of notions typical of programming languages, like variable binding. This thesis is composed of two parts, discussing our experience in the development of the Matita interactive theorem prover and its use in the mechanization of the metatheory of programming languages. More specifically, part I covers:
- the results of our effort to provide a better framework for the development of tactics for Matita, in order to make their implementation and debugging easier, also resulting in much clearer code;
- a discussion of the implementation of two tactics, providing infrastructure for the unification of constructor forms and the inversion of inductive predicates; we point out interactions between induction and inversion and provide an advancement over the state of the art.
In the second part of the thesis, we focus on aspects related to the formalization of programming languages. We describe two works of ours:
- a discussion of basic issues we encountered in our formalizations of part 1A of the POPLmark challenge, where we apply the extended inversion principles we implemented for Matita;
- a formalization of an algebraic logical framework, posing more complex challenges, including multiple binding and a form of hereditary substitution; for the encoding of binding, this work adopts an extension of Masahiko Sato's canonical locally named representation, which we designed during our visit to the Laboratory for Foundations of Computer Science at the University of Edinburgh, under the supervision of Randy Pollack.
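Inversion of inductive predicates, one of the two tactics discussed, can be conveyed with a tiny example, shown here in Lean rather than Matita, purely as a hedged sketch of the idea:

```lean
inductive Even : Nat → Prop where
  | zero : Even 0
  | step (n : Nat) : Even n → Even (n + 2)

-- Inversion: a proof of `Even (n + 2)` can only end with `step`,
-- so the premise `Even n` can be extracted from it; `cases`
-- discharges the impossible `zero` alternative automatically.
example (n : Nat) (h : Even (n + 2)) : Even n := by
  cases h with
  | step _ h' => exact h'
```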
Abstract:
Monte Carlo simulations are used to study the effect of confinement on a crystal of point particles interacting with an inverse power law potential in d=2 dimensions. This system can describe colloidal particles at the air-water interface, a model system for the experimental study of two-dimensional melting. It is shown that the state of the system (a strip of width D) depends very sensitively on the precise boundary conditions at the two "walls" providing the confinement. If one uses a corrugated boundary commensurate with the order of the bulk triangular crystalline structure, both orientational order and positional order are enhanced, and such surface-induced order persists near the boundaries also at temperatures where the bulk of the system is in its fluid state. However, using smooth repulsive boundaries as the confining walls, only the orientational order is enhanced, while positional (quasi-) long-range order is destroyed: the mean-square displacement of two particles n lattice parameters apart in the y-direction along the walls then crosses over from a logarithmic increase (characteristic of d=2) to a linear increase (characteristic of d=1). The strip then exhibits a vanishing shear modulus. These results are interpreted in terms of a phenomenological harmonic theory. The effect of incommensurability of the strip width D with the triangular lattice structure is also discussed, and a comparison is made with surface effects on phase transitions in simple Ising and XY models.
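In the phenomenological harmonic picture invoked above, the stated crossover can be summarized as follows (a paraphrase of the abstract's claim, with C_2 and C_1 unspecified constants):

```latex
\[
\big\langle [\, u_y(n) - u_y(0) \,]^2 \big\rangle \;\sim\;
\begin{cases}
C_2 \ln n, & \text{commensurate, crystal-like strip ($d=2$ behavior)},\\
C_1\, n,   & \text{smooth repulsive walls ($d=1$ behavior)},
\end{cases}
\]
```

and the linear growth is what destroys positional quasi-long-range order and the shear modulus with it.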
Abstract:
The growing availability of mechanical and, above all, electronic devices whose performance increases while their cost decreases has allowed the field of robotics to make remarkable progress. Such progress has occurred not only in robotics for industrial use, for example in assembly lines, but also in that branch of robotics comprising autonomous domestic robots. For these reasons such autonomous systems are becoming ever more pervasive, i.e. they are immersed in the same environment in which human beings live, and they interact with them proactively. They are thus following the same path that personal computers travelled about 30 years ago, going from expensive and bulky mainframes available only to research institutions and universities to being present in every home, used not only professionally but also to assist in daily activities or for entertainment. For these reasons robotics is a field of Information Technology of growing interest to all kinds of software programmers. This thesis first analyzes the salient aspects of programming controllers for autonomous robots (i.e. robots not guided by a user), and then shows how an agent-based approach is appropriate for programming these systems. In particular, it is shown how an agent approach, using the Jason programming language and hence the BDI architecture, is a significant choice, since the model underlying this kind of language is based on human practical reasoning and is therefore suited to the implementation of systems that act autonomously. Since the possibilities of using a real autonomous system to test the controllers are limited, for practical, economic and time reasons, we show how easy and effective it is to arrive quickly at a first prototype of the robot by using the commercial simulator Webots. The contribution of this thesis includes the possibility of programming a robot in a modular and rapid way with a few lines of code, in such a way that extending its functionality does not become a bottleneck, as happens when programming these systems with classical imperative programming languages. The thesis is organized as follows: a background chapter presents the basics of robotics, its programming, and the tools suited to the purpose; a further chapter introduces the notions of agent programming with the Jason language and the BDI architecture, and explains why this approach is suited to programming control systems for robotics. Next, the complete structure of our software working environment, comprising the agent environment and the simulator, is presented; the following chapter presents the explorations carried out using Jason and a classical approach (by means of classical languages), through several case studies of increasing complexity; after that, the two approaches are evaluated by analyzing the problems and advantages each entails. Finally, the thesis ends with a chapter of conclusions and reflections on possible extensions and future work.
Abstract:
Complex network analysis is a very popular topic in computer science. Unfortunately these networks, extracted from different contexts, are usually very large, and their analysis may be very complicated: the computation of metrics on these structures can be very complex. Among all metrics, we analyse the extraction of subnetworks called communities: groups of nodes that probably play the same role within the whole structure. Community extraction is an interesting operation in many different fields (biology, economics, ...). In this work we present a parallel community detection algorithm that can operate on networks with a huge number of nodes and edges. After an introduction to graph theory and high-performance computing, we explain our design strategies and our implementation. Then we show a performance evaluation made on a distributed-memory architecture, the IBM BlueGene/Q supercomputer "Fermi" at the CINECA supercomputing center in Italy, and we comment on our results.
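The abstract does not spell out the detection algorithm, so as a hedged illustration here is a toy label-propagation community detector; the thesis' actual parallel algorithm on BlueGene/Q would distribute sweeps like these over MPI ranks rather than run them serially.

```python
import random
from collections import Counter

def label_propagation(adjacency, max_sweeps=50, seed=0):
    """Toy community detection: each node repeatedly adopts the label
    most common among its neighbors until labels stabilize."""
    rng = random.Random(seed)
    labels = {v: v for v in adjacency}   # start: every node is alone
    nodes = list(adjacency)
    for _ in range(max_sweeps):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adjacency[v]:
                continue
            counts = Counter(labels[u] for u in adjacency[v])
            best = max(counts.values())
            # Break ties randomly among the most frequent neighbor labels.
            choice = rng.choice([l for l, c in counts.items() if c == best])
            if choice != labels[v]:
                labels[v] = choice
                changed = True
        if not changed:
            break
    return labels

# Two triangles joined by one edge typically split into two communities.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(g))
```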
Abstract:
Computer simulations of colloidal fluids in confined geometries. Colloidal suspensions that exhibit a phase transition show a variety of interesting effects as soon as they are confined to a particular geometry, such as cylindrical pores, spherical cavities, or a slit with planar walls. The influence of these different types of geometry on both the phase behavior and the dynamics of colloid-polymer mixtures is investigated by computer simulations using the Asakura-Oosawa model, which exhibits a phase transition due to depletion forces. In the case of cylindrical pores one observes an interesting phase behavior brought about by the one-dimensional character of the system. In a short pore, in the region of the phase diagram where the system typically demixes, either a polymer-rich or a colloid-rich phase is found. However, as soon as the length of the cylindrical pore exceeds the typical correlation length along the cylinder axis, several quasi-one-dimensional domains of the polymer-rich and the colloid-rich phase form and coexist from then on. These investigations help to explain the behavior of adsorption hysteresis curves in corresponding experiments. When the colloid-polymer model system is confined to a spherical cavity, the point of the phase transition shifts from the polymer-rich to the colloid-rich phase. It is shown that this shift depends directly on the wetting properties of the system, which allows the observation of two different morphologies at phase coexistence: shell structures and Janus-type structures. In the context of the study of heterogeneous nucleation of crystals within a liquid, a new simulation method for calculating the free energies of the interface between the crystal or liquid phase and a wall is presented. The results for a system of hard spheres and for a system of a colloid-polymer mixture are then used to determine contact angles of crystal nuclei at walls. The dynamics of the phase separation of a quasi-two-dimensional system, which develops after a quench of the system from the homogeneous state into the demixed state, is investigated with a mesoscale simulation method (Multi Particle Collision Dynamics) that is well suited for a detailed study of the influence of hydrodynamic interactions. The exponents of the universal power laws describing the growth of the mean domain size, which are known for purely two- and three-dimensional systems, can be confirmed for certain parameter ranges. The differing dynamics perpendicular and parallel to the walls, as well as the influence of the boundary conditions for the solvent, are investigated. It is shown that the resulting screening of the range of the hydrodynamic interactions has strong effects on the growth of the mean domain size.
Abstract:
In many areas of industrial manufacturing, for example the automotive industry, digital mock-ups are used so that the development of complex machines can be supported as well as possible by computer systems. Motion planning algorithms play an important role here in guaranteeing that these digital prototypes can be assembled without collisions. Over the last decades, sampling-based methods have proven particularly successful for this task. They generate a large number of random poses for the object being installed or removed and use a collision detection mechanism to check the individual poses for validity. Collision detection therefore plays an essential role in the design of efficient motion planning algorithms. One difficulty for this class of planners are so-called narrow passages, which occur wherever the freedom of movement of the objects being planned for is severely restricted. In such places it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may then be necessary to achieve good performance.

This thesis consists of two parts. In the first part we investigate parallel collision detection algorithms. Since we target an application in sampling-based motion planners, we choose a problem setting in which the same two objects are always tested for collision, but in a large number of different poses. We implement and compare several methods that rely on bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All described methods were parallelized across multiple CPU cores. Beyond that, we compare several CUDA kernels for performing BVH-based collision tests on the GPU. Besides different distributions of the work among the parallel GPU threads, we investigate the effect of different memory access patterns on the performance of the resulting algorithms. We further present a number of approximate collision tests based on the described methods; when a lower accuracy of the tests is tolerable, a further performance improvement can be achieved.

In the second part of the thesis we describe a parallel, sampling-based motion planner of our own design for handling highly complex problems with multiple narrow passages. The method works in two phases. The basic idea is to conceptually allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner employed in phase I is based on so-called Expansive Space Trees. Additionally, we equipped the planner with an operation for pushing objects free of contact, which allows minor collisions to be resolved and thus increases efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests; this further reduces the accuracy of the first planning phase, but also leads to a further performance gain. The motion paths resulting from phase I may then not be completely collision-free. To repair these paths, we designed a novel planning algorithm that plans a new, collision-free motion path locally, restricted to a small neighborhood of the existing path. We tested the described algorithm on a class of new, difficult metal puzzles, some of which exhibit multiple narrow passages. To our knowledge, a collection of comparably complex benchmarks is not publicly available, nor did we find a description of comparably complex benchmarks in the motion planning literature.
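The workload of the first part, one fixed object pair tested at many sampled poses, parallelizes naturally over the poses. Here is a hedged Python sketch using plain axis-aligned bounding boxes in place of full BVHs; all names are illustrative, and a real implementation would recurse into a hierarchy after the root-box test.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def aabb(points):
    """Axis-aligned bounding box of a point cloud: (min corner, max corner)."""
    return points.min(axis=0), points.max(axis=0)

def aabb_overlap(box_a, box_b):
    (amin, amax), (bmin, bmax) = box_a, box_b
    return bool(np.all(amin <= bmax) and np.all(bmin <= amax))

def collides_at_pose(task):
    """Test one sampled pose: place the moving object, then box-test it.

    A BVH-based test would recurse into ever tighter boxes after this
    root check; the single box here stands in for that whole hierarchy.
    """
    moving, static_box, rotation, translation = task
    placed = moving @ rotation.T + translation
    return aabb_overlap(aabb(placed), static_box)

def batch_collision_tests(moving, static, poses, workers=4):
    """Check one fixed object pair at many poses, with poses in parallel."""
    static_box = aabb(static)
    tasks = [(moving, static_box, rot, tr) for rot, tr in poses]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(collides_at_pose, tasks))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    part = rng.random((8, 3))               # moving object, 8 vertices
    fixture = rng.random((8, 3)) + 2.0      # static obstacle, offset away
    poses = [(np.eye(3), rng.uniform(-3.0, 3.0, size=3)) for _ in range(100)]
    print(sum(batch_collision_tests(part, fixture, poses)), "colliding poses")
```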
Abstract:
Software evolution research has focused mostly on analyzing the evolution of single software systems. However, it is rarely the case that a project exists standalone, independent of others. Rather, projects exist in parallel within larger contexts in companies, research groups, or even the open-source communities. We call these contexts software ecosystems, and in this paper we present The Small Project Observatory, a prototype tool which aims to support the analysis of project ecosystems through interactive visualization and exploration. We present a case study of exploring an ecosystem using our tool, we describe the architecture of the tool, and we distill the lessons learned during the tool-building experience.