894 results for HPC parallel computer architecture queues fault tolerance programmability ADAM


Relevance:

30.00%

Publisher:

Abstract:

The increasing precision of current and future experiments in high-energy physics requires a likewise increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to a higher accuracy directly translates into including higher order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process in higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
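
A small example of what embedding the symbolic engine in C++ looks like in practice; the expressions are illustrative, not taken from xloops, and building assumes a GiNaC installation:

```cpp
// Minimal GiNaC sketch: symbolic algebra as ordinary C++ objects.
// Build (assuming GiNaC is installed): g++ demo.cpp -lginac -lcln
#include <iostream>
#include <ginac/ginac.h>
using namespace GiNaC;

int main() {
    symbol x("x"), eps("eps");

    // Expressions are C++ objects; arithmetic operators are overloaded.
    ex e = pow(x + 1, 3);
    std::cout << e.expand() << std::endl;    // -> x^3+3*x^2+3*x+1 (term order may vary)
    std::cout << e.diff(x) << std::endl;     // -> 3*(x+1)^2

    // Series expansion, of the kind needed when expanding loop integrals
    // around a small regularization parameter.
    ex s = ex(1 / (1 - eps)).series(eps == 0, 4);
    std::cout << s << std::endl;             // -> 1+eps+eps^2+eps^3+Order(eps^4)
    return 0;
}
```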

Relevance:

30.00%

Publisher:

Abstract:

In this work, the phase transitions of a single polymer chain were investigated using the Monte Carlo method. The bond fluctuation model was used for the simulation, with an attractive square-well potential acting between all monomers of the polymer chain. Three kinds of moves were introduced to properly relax the polymer chain: the local hop move, the reptation move, and the pivot move. A hierarchical search algorithm was introduced to check the excluded-volume interaction and to determine the number of neighbors of each monomer. The density of states of the model was determined by means of the Wang-Landau algorithm, and from it thermodynamic quantities were computed in order to study the phase transitions of the single polymer chain. We first investigated a free polymer chain. The coil-globule transition turns out to be a continuous transition, in which the coil collapses into a globule. The globule-globule transition at lower temperatures is a first-order phase transition, with coexistence of the liquid globule and the solid globule, which has a crystalline structure. In the thermodynamic limit the two transition temperatures are identical, which corresponds to a vanishing of the liquid phase. In two dimensions, the model shows a continuous coil-globule transition with a locally ordered structure. We further investigated a polymer mushroom, i.e. a grafted polymer chain, between two repulsive walls separated by a distance D. The phase behavior of the polymer chain shows a dimensional crossover. Both grafting and confinement promote the coil-globule transition, and a symmetry breaking occurs, since the extension of the polymer chain parallel to the walls shrinks faster than that perpendicular to the walls. Confinement hinders the globule-globule transition, whereas grafting appears to have no influence. The transition temperatures in the thermodynamic limit are again identical within error bars. The specific heat of the same model, but with a repulsive square-well potential, shows a Schottky anomaly, typical of a two-level system.
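
Since the Wang-Landau procedure is central here, a minimal C++ sketch of its skeleton may help. It is applied to a toy model (N independent two-level units, whose density of states is exactly the binomial coefficient, so the output can be checked), not to the bond fluctuation model, and the histogram-flatness test is deliberately simplified:

```cpp
// Wang-Landau skeleton on a toy model: E = number of excited units among N.
// The exact density of states is binomial, so ln g(E) - ln g(0) should
// converge to ln C(N,E). The thesis applies the same scheme to the bond
// fluctuation model; polymer moves and neighbor search are omitted here.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int N = 20;
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, N - 1);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    std::vector<int> unit(N, 0);            // toy microstate
    int E = 0;                              // current energy = number of 1s
    std::vector<double> lng(N + 1, 0.0);    // running estimate of ln g(E)
    std::vector<long> hist(N + 1, 0);

    double lnf = 1.0;                       // modification factor ln f
    while (lnf > 1e-6) {
        for (long step = 0; step < 200000; ++step) {
            int i = pick(rng);
            int Enew = E + (unit[i] ? -1 : +1);
            // Accept with min(1, g(E)/g(Enew)) to flatten the histogram.
            if (uni(rng) < std::exp(lng[E] - lng[Enew])) {
                unit[i] ^= 1;
                E = Enew;
            }
            lng[E] += lnf;                  // update density-of-states estimate
            hist[E] += 1;
        }
        // Crude flatness handling: reset histogram, halve ln f.
        // (Production code checks histogram flatness before reducing f.)
        std::fill(hist.begin(), hist.end(), 0L);
        lnf *= 0.5;
    }
    for (int e = 0; e <= N; ++e)
        std::printf("E=%2d  ln g = %8.3f  exact = %8.3f\n", e, lng[e] - lng[0],
                    std::lgamma(N + 1.0) - std::lgamma(e + 1.0) - std::lgamma(N - e + 1.0));
    return 0;
}
```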

Relevance:

30.00%

Publisher:

Abstract:

Hybrid technologies, thanks to the convergence of integrated microelectronic devices and a new class of microfluidic structures, could open new perspectives on the way nanoscale events are discovered, monitored and controlled. The key point of this thesis is to evaluate the impact of such an approach on applications of ion-channel High Throughput Screening (HTS) platforms. This approach offers promising opportunities for the development of new classes of sensitive, reliable and cheap sensors. There are numerous advantages to embedding microelectronic readout structures tightly coupled to the sensing elements. On the one hand, the signal-to-noise ratio is increased as a result of scaling. On the other, the readout miniaturization allows the organization of sensors into arrays, increasing the capability of the platform in terms of the amount of acquired data, as required in the HTS approach, to improve sensing accuracy and reliability. However, accurate interface design is required to establish efficient communication between ionic and electronic signals. The work presented in this thesis shows a first example of a complete parallel readout system with single-ion-channel resolution, using a compact and scalable hybrid architecture suitable for interfacing to large arrays of sensors, ensuring simultaneous signal recording and smart control of the signal-to-noise ratio versus bandwidth trade-off. More specifically, an array of microfluidic polymer structures, hosting artificial lipid bilayer blocks in which single ion channel pores are embedded, is coupled with an array of ultra-low-noise current amplifiers for signal amplification and data processing. As a working demonstration, the platform was used to acquire the ultra-small currents produced by single non-covalent molecular binding events between alpha-hemolysin pores and beta-cyclodextrin molecules in artificial lipid membranes.

Relevance:

30.00%

Publisher:

Abstract:

The term "Brain Imaging" identi�es a set of techniques to analyze the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are largely used in the study of brain activity. In addition to clinical usage, analysis of brain activity is gaining popularity in others recent �fields, i.e. Brain Computer Interfaces (BCI) and the study of cognitive processes. In this context, usage of classical solutions (e.g. f MRI, PET-CT) could be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons alternative low cost techniques are object of research, typically based on simple recording hardware and on intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where electric potential at the patient's scalp is recorded by high impedance electrodes. In EEG potentials are directly generated from neuronal activity, while in EIT by the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from measurements, EIT and EEG relies on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric �field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeo�ff between physical accuracy and technical feasibility, which currently severely limits the capabilities of these techniques. Moreover elaboration of data recorded requires usage of regularization techniques computationally intensive, which influences the application with heavy temporal constraints (such as BCI). This work focuses on the parallel implementation of a work-flow for EEG and EIT data processing. The resulting software is accelerated using multi-core GPUs, in order to provide solution in reasonable times and address requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.

Relevance:

30.00%

Publisher:

Abstract:

Monte Carlo simulations are used to study the effect of confinement on a crystal of point particles interacting with an inverse power law potential in d=2 dimensions. This system can describe colloidal particles at the air-water interface, a model system for the experimental study of two-dimensional melting. It is shown that the state of the system (a strip of width D) depends very sensitively on the precise boundary conditions at the two "walls" providing the confinement. If one uses a corrugated boundary commensurate with the order of the bulk triangular crystalline structure, both orientational order and positional order are enhanced, and such surface-induced order persists near the boundaries also at temperatures where the bulk of the system is in its fluid state. However, when smooth repulsive boundaries are used as the confining walls, only the orientational order is enhanced, while positional (quasi-) long range order is destroyed: the mean-square displacement of two particles n lattice parameters apart in the y-direction along the walls then crosses over from a logarithmic increase (characteristic for d=2) to a linear increase (characteristic for d=1). The strip then exhibits a vanishing shear modulus. These results are interpreted in terms of a phenomenological harmonic theory. Also the effect of incommensurability of the strip width D with the triangular lattice structure is discussed, and a comparison with surface effects on phase transitions in simple Ising and XY models is made.
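
A schematic Metropolis step for such a confined inverse-power-law system is sketched below; the exponent n = 12, the strip dimensions and all other parameters are illustrative choices, not those of the study, and the commensurate corrugated-wall variant is omitted:

```cpp
// Schematic Metropolis Monte Carlo for 2D point particles interacting via
// an inverse power law U(r) = eps * (sigma/r)^12, confined to a strip
// 0 < y < D by smooth repulsive walls (modeled crudely here by rejecting
// moves into the walls); periodic boundary along x.
#include <cmath>
#include <random>
#include <vector>

struct P { double x, y; };

double pairU(const P& a, const P& b) {
    double dx = a.x - b.x, dy = a.y - b.y;
    return std::pow(1.0 / (dx * dx + dy * dy), 6.0);  // eps = sigma = 1, n = 12
}

int main() {
    const int N = 64;
    const double L = 16.0, D = 4.0, T = 1.0, delta = 0.1;
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> u(0, 1);

    std::vector<P> p(N);
    for (int i = 0; i < N; ++i) p[i] = {u(rng) * L, u(rng) * D};

    for (long step = 0; step < 100000; ++step) {
        int i = static_cast<int>(rng() % N);
        P trial = {p[i].x + delta * (2 * u(rng) - 1),
                   p[i].y + delta * (2 * u(rng) - 1)};
        trial.x = std::fmod(trial.x + L, L);            // periodic along the strip
        if (trial.y <= 0.0 || trial.y >= D) continue;   // smooth repulsive walls
        double dU = 0.0;
        for (int j = 0; j < N; ++j)
            if (j != i) dU += pairU(trial, p[j]) - pairU(p[i], p[j]);
        if (dU <= 0.0 || u(rng) < std::exp(-dU / T)) p[i] = trial;  // Metropolis rule
    }
    return 0;
}
```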

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work was to show that refined analyses of background, low-magnitude seismicity make it possible to delineate the main active faults and to accurately estimate the directions of the regional tectonic stress that characterize the Southern Apennines (Italy), a structurally complex area with high seismic potential. Thanks to the presence in the area of an integrated, dense and wide-dynamic-range network, it was possible to analyze a high-quality microearthquake data set consisting of 1312 events that occurred from August 2005 to April 2011, by integrating the data recorded at 42 seismic stations of various networks. The refined seismicity locations and focal mechanisms clearly delineate a system of NW-SE striking normal faults along the Apenninic chain and an approximately E-W oriented strike-slip fault transversely cutting the belt. The seismicity along the chain does not occur on a single fault but in a volume, delimited by the faults activated during the 1980 Irpinia M 6.9 earthquake, on sub-parallel, predominantly normal faults. The results show that the recent low-magnitude earthquakes belong to the background seismicity and are likely generated along the major fault segments activated during the most recent earthquakes, suggesting that these segments are still active today, thirty years after the mainshock occurrence. In this sense, this study gives a new perspective to the application of high-quality records of low-magnitude background seismicity to the identification and characterization of active fault systems. The stress tensor inversion provides two equivalent models to explain the microearthquake generation along both the NW-SE striking normal faults and the E-W oriented fault with a dominant dextral strike-slip motion, but with different geological interpretations. We suggest that the NW-SE-striking Africa-Eurasia convergence acts in the background of all these structures, playing a primary and unifying role in the seismotectonics of the whole region.
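
Stress inversion from fault-slip data typically rests on the Wallace-Bott assumption that slip occurs along the resolved shear traction on the fault plane; the sketch below evaluates that misfit for a single fault plane under a trial stress tensor (all numerical values are toy inputs, not data from this study):

```cpp
// Wallace-Bott misfit used in fault-slip stress inversion: for a trial
// stress tensor, the predicted slip direction on a fault plane is the
// resolved shear traction; the misfit is its angle to the observed striae.
#include <array>
#include <cmath>
#include <cstdio>

using V3 = std::array<double, 3>;
const double PI = 3.141592653589793;

double dot(const V3& a, const V3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
double norm(const V3& a) { return std::sqrt(dot(a, a)); }

int main() {
    // Trial reduced stress tensor (principal axes along coordinates; toy values).
    double S[3][3] = {{1.0, 0, 0}, {0, 0.5, 0}, {0, 0, 0.0}};
    V3 n = {0.0, std::sin(PI / 3), std::cos(PI / 3)};            // fault normal (60 deg dip, toy)
    V3 slipObs = {0.0, -std::cos(PI / 3), std::sin(PI / 3)};     // observed striae (toy, in-plane)

    // Traction on the plane, then its shear (in-plane) component.
    V3 t, shear;
    for (int i = 0; i < 3; ++i)
        t[i] = S[i][0] * n[0] + S[i][1] * n[1] + S[i][2] * n[2];
    double tn = dot(t, n);
    for (int i = 0; i < 3; ++i) shear[i] = t[i] - tn * n[i];

    // An inversion would search for the tensor minimizing this angle over all data.
    double c = dot(shear, slipObs) / (norm(shear) * norm(slipObs));
    std::printf("misfit angle = %.1f deg\n", std::acos(c) * 180.0 / PI);
    return 0;
}
```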

Relevance:

30.00%

Publisher:

Abstract:

The growing availability of mechanical and, above all, electronic devices whose performance increases while their cost decreases has allowed the field of robotics to make remarkable progress. Such progress has been made not only in robotics for industrial use, for example in assembly lines, but also in the branch of robotics that comprises autonomous domestic robots. For these reasons, such autonomous systems are becoming more and more pervasive, i.e. they are immersed in the same environment in which human beings live, and they interact with them proactively. They are thus following the same path that personal computers travelled about 30 years ago, going from expensive and bulky mainframes available only to research institutions and universities to being present in every home, used not only professionally but also to assist daily activities or for entertainment. For these reasons, robotics is a field of Information Technology of increasing interest to all kinds of software programmers. This thesis first analyzes the salient aspects of programming controllers for autonomous robots (i.e. robots not guided by a user), and then shows how the agent-based approach is appropriate for programming such systems. In particular, it is shown that an agent approach, using the Jason programming language and hence the BDI architecture, is a significant choice, since the model underlying this type of language is based on Human Practical Reasoning and is therefore suited to implementing systems that act autonomously. Since the possibilities of using a real autonomous system to test the controllers are limited, for practical, economic and time reasons, we show how easily and effectively a first prototype of the robot can be reached through the commercial simulator Webots. The contribution of this thesis includes the possibility of programming a robot modularly and rapidly with a few lines of code, in such a way that extending its functionality does not become a bottleneck, as happens when programming these systems with classical imperative programming languages. The thesis is organized as follows: a background chapter presents the basics of robotics, its programming, and the tools suited to this purpose; a second chapter introduces the notions of agent programming with the Jason language, and hence the BDI architecture, and explains why this approach is suitable for programming control systems for robotics. Next, the complete structure of our software working environment, comprising the agent environment and the simulator, is presented; in the following chapter, the explorations carried out using Jason and a classical approach (by means of classical languages) are shown, through several case studies of increasing complexity; after that, the two approaches are compared, analyzing the problems and advantages each entails. Finally, the thesis ends with a chapter of conclusions and reflections on possible extensions and future work.

Relevance:

30.00%

Publisher:

Abstract:

Complex network analysis is a very popular topic in computer science. Unfortunately, these networks, extracted from different contexts, are usually very large, and their analysis may be very complicated: the computation of metrics on these structures can be very costly. Among all metrics, we analyse the extraction of subnetworks called communities: groups of nodes that probably play the same role within the whole structure. Community extraction is an interesting operation in many different fields (biology, economics, ...). In this work we present a parallel community detection algorithm that can operate on networks with huge numbers of nodes and edges. After an introduction to graph theory and high performance computing, we explain our design strategies and our implementation. Then, we show a performance evaluation carried out on a distributed-memory architecture, the IBM BlueGene/Q supercomputer "Fermi" at the CINECA supercomputing center, Italy, and we comment on our results.
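
The abstract does not name the detection algorithm; as one illustration of how community detection parallelizes over nodes, here is a minimal label-propagation sketch with OpenMP (a stand-in, not necessarily the method used in the thesis):

```cpp
// Parallel label propagation (schematic): every node repeatedly adopts the
// most frequent label among its neighbors; connected groups typically
// converge to a shared label, read off as the community assignment.
#include <cstdio>
#include <map>
#include <vector>

int main() {
    // Tiny graph: two triangles joined by the edge 2-3 (adjacency lists).
    std::vector<std::vector<int>> adj = {
        {1, 2}, {0, 2}, {0, 1, 3}, {2, 4, 5}, {3, 5}, {3, 4}};
    int n = static_cast<int>(adj.size());
    std::vector<int> label(n);
    for (int v = 0; v < n; ++v) label[v] = v;   // each node starts alone

    for (int iter = 0; iter < 10; ++iter) {
        std::vector<int> next(label);
        #pragma omp parallel for                 // nodes update independently
        for (int v = 0; v < n; ++v) {
            std::map<int, int> count;
            for (int w : adj[v]) count[label[w]]++;
            int best = label[v], bestc = 0;
            for (auto& kv : count)
                if (kv.second > bestc) { best = kv.first; bestc = kv.second; }
            next[v] = best;                      // synchronous update
        }
        label.swap(next);
    }
    for (int v = 0; v < n; ++v)
        std::printf("node %d -> community %d\n", v, label[v]);
    return 0;
}
```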

Relevance:

30.00%

Publisher:

Abstract:

Mainstream hardware is becoming parallel, heterogeneous, and distributed on every desk, in every home and in every pocket. As a consequence, in recent years software has been taking an epochal turn toward concurrency, distribution and interaction, pushed by the evolution of hardware architectures and the growth of network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers have to face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, in the dissertation we first construct the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focus on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, we shift the perspective from the development of intelligent software systems toward general-purpose software development. Using the expertise gained during the background construction phase, we introduce a general-purpose programming language named simpAL, which has its roots in general principles and practices of software development, and at the same time provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
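
For readers unfamiliar with the actor paradigm mentioned above, a minimal C++ rendition is sketched below: an object owning a private mailbox and a thread, reachable only by asynchronous messages. This illustrates actors in general, not simpAL, whose syntax is not reproduced here:

```cpp
// Minimal actor sketch: state is private to one thread; other code
// interacts with the actor only by sending asynchronous messages.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class Actor {
    std::queue<std::string> mailbox;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;                       // must be initialized before the thread starts
    std::thread worker;
public:
    Actor() : worker([this] { run(); }) {}
    void send(std::string msg) {             // asynchronous: returns immediately
        { std::lock_guard<std::mutex> lk(m); mailbox.push(std::move(msg)); }
        cv.notify_one();
    }
    void stop() { send("quit"); worker.join(); }
private:
    void run() {                             // the actor's event loop
        while (!done) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return !mailbox.empty(); });
            std::string msg = mailbox.front(); mailbox.pop();
            lk.unlock();
            if (msg == "quit") done = true;
            else std::printf("actor got: %s\n", msg.c_str());   // the behavior
        }
    }
};

int main() {
    Actor a;
    a.send("hello");
    a.send("world");
    a.stop();
    return 0;
}
```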

Relevance:

30.00%

Publisher:

Abstract:

Network Theory is a prolific and lively field, especially where it meets Biology. New concepts from this theory find application in areas where extensive datasets are already available for analysis, without the need to invest money to collect them. The only tools necessary to carry out such an analysis are easily accessible: a computing machine and a good algorithm. As these two tools progress, thanks to technological advancement and human effort, wider and wider datasets can be analysed. The aim of this paper is twofold. The first is to provide an overview of one of these concepts, which originates at the meeting point between Network Theory and Statistical Mechanics: the entropy of a network ensemble. This quantity has been described from different angles in the literature; our approach tries to be a synthesis of the different points of view. The second part of the work is devoted to presenting a parallel algorithm that can evaluate this quantity over an extensive dataset. Finally, the algorithm is also used to analyse high-throughput data coming from biology.
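
For the simplest case, the ensemble of graphs with N nodes and exactly L links, the entropy discussed above is just the logarithm of the number of graphs in the ensemble; the snippet below computes this microcanonical value (the paper's algorithm targets richer, constraint-based ensembles, which this does not attempt):

```cpp
// Entropy of the microcanonical G(N, L) graph ensemble:
// S = ln(number of graphs) = ln C(P, L), with P = N(N-1)/2 possible edges.
// Computed via lgamma to stay in floating point for large N.
#include <cmath>
#include <cstdio>

double log_binom(double n, double k) {
    return std::lgamma(n + 1) - std::lgamma(k + 1) - std::lgamma(n - k + 1);
}

int main() {
    const double N = 1000;                // nodes
    const double P = N * (N - 1) / 2;     // possible edges
    const double L = 5000;                // realized edges
    std::printf("S = ln C(%.0f, %.0f) = %.2f nats\n", P, L, log_binom(P, L));
    return 0;
}
```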

Relevance:

30.00%

Publisher:

Abstract:

Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmers' responsibility to explicitly manage the memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages have been proposed which work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm on modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de-facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude gains in speed and energy efficiency compared to the "pure software" version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with it.
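
The "irregular parallelism" mentioned above is expressed in OpenMP with tasks rather than static loop partitioning; the sketch below shows the pattern such a runtime must support (plain standard OpenMP, not the extended embedded runtime of the thesis):

```cpp
// Irregular parallelism with OpenMP tasks: a recursive computation whose
// work per branch is not known in advance, so static loop scheduling does
// not fit; each recursive call becomes a task the runtime can schedule.
// Compile with: g++ -fopenmp tasks.cpp
#include <cstdio>

long work(int n) {
    if (n < 2) return n;                   // tiny leaves computed serially
    long a, b;
    #pragma omp task shared(a)
    a = work(n - 1);
    #pragma omp task shared(b)
    b = work(n - 2);
    #pragma omp taskwait                   // join both child tasks
    return a + b;
}

int main() {
    long r = 0;
    #pragma omp parallel
    #pragma omp single                     // one thread spawns the task tree
    r = work(20);
    std::printf("result = %ld\n", r);
    return 0;
}
```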

Relevance:

30.00%

Publisher:

Abstract:

Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and to validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation, and it mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work, a novel hybrid memory architecture is devised to overcome the reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture. By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work was to investigate the fault distribution and fault kinematics associated with the uplift of the rift shoulders of the Rwenzori Mountains.

The Rwenzori Mountains lie within the NNE-SSW to N-S trending Albertine Rift, the northernmost segment of the western branch of the East African Rift System. The Albertine Rift consists of basins at different elevations that contain Lake Albert, Lake Edward, Lake George and Lake Kivu. The Rwenzori horst separates the basins of Lake Albert and Lake Edward. It extends 120 km in N-S direction and 40-50 km in E-W direction, and its highest point lies at 5111 m above sea level. This study examines a section of the rift between about 1°N and 0°30'S latitude and between 29°30' and 30°30' eastern longitude; the field work also concentrated on this area.

The main purpose of the study was to test the following hypothesis: 'If the fault kinematics indeed changed substantially over time, then the strong uplift of the rift flanks in the Rwenzori area cannot be explained simply by movement along the main rift faults. Rather, it is the result of the interplay of several tectonic processes that influence the stress field and thereby cause changes in the kinematics.' The study therefore concentrated primarily on fault analysis.

Knowledge of regional changes in the extension direction is crucial for understanding complex rift systems such as the East African Rift. The core of the investigation therefore consisted of mapping faults and analyzing fault kinematics. The acquisition of structural field data concentrated on the Ugandan side of the rift, and paleostresses were reconstructed from fault-slip data by stress inversion.

The different orientations of brittle structures in the field, the geometric analysis of the geological structures, and the results from microstructures in thin sections (chapter 4) point to different stress fields and thus to possible changes of the extension direction. The results of the stress inversion indicate normal, reverse and strike-slip faulting as well as oblique thrusting (chapter 5). Two different extension directions emerge from the orientation of the normal faults: essentially NW-SE extension in almost all areas, and NNE-SSW extension in the eastern central part.

The analysis of the strike-slip faults yielded three different stress states. First, NNW-SSE to N-S compression combined with ENE-WSW to E-W extension was identified for the northern and central Rwenzoris. A second stress state, with WNW-ESE compression and NNE-SSW extension, affected the central Rwenzoris. A third stress state, with NNW-SSE extension, affected the eastern central part of the Rwenzoris. Oblique thrusts are characterized by oblique axes indicating N-S to NNW-SSE compression and occur exclusively in the eastern central section. Thrusts, which occur mainly in the central and eastern Rwenzoris, indicate NE-SW oriented σ2 axes and NW-SE extension.

Three different stress influences could be identified: the collision-related formation of a thrust system was followed by intra-cratonic compression and finally by extension-controlled rifting. The transition between the latter two stress states occurred gradually and presumably produced locally confined transpression and transtension. At present, the fault kinematics of the region is governed by a tensile stress regime oriented NW-SE to N-S.

Local stress variations are mainly caused by the interference of the regional stress field with major local faults. Further factors that can lead to local changes of the stress field are differing uplift rates, block rotation, or the interaction of rift segments. To determine the influence of pre-existing structures and other boundary conditions on the uplift of the Rwenzoris, the rifting process was reconstructed with an analogue 'sandbox' model (chapter 6). Since the Moho discontinuity in the study area lies at a depth of 25 km, while active faults can only be observed down to a depth of about 20 km (Koehn et al. 2008), only the upper 25 km were reproduced in the model. Both the order in which rift segments form and the patterns that develop during the nucleation and growth of these segments were examined and compared with field observations. The main focus was placed on the development of the two sub-segments hosting Lake Albert and Lake Edward/Lake George, respectively, and on the Rwenzori Mountains in between. The goal was to find out how the southward-propagating Lake Albert sub-segment interacts with the sinistrally offset, northward-propagating Lake Edward/Lake George sub-segment.

Of particular interest was how the structures inside and outside the Rwenzoris were influenced by the interaction of these rift segments.

Three experimental series with different boundary conditions were compared. Depending on the dominant deformation type in the transfer zone, the series were characterized as 'shear-dominated', 'extension-dominated' and 'rotation-dominated'. Combining top views of the models with cross sections made it possible to follow the three-dimensional structural evolution of the rift segments. Of the three series, the 'rotation-dominated' one developed a rhomb-shaped block in the transfer zone between the two rift segments, which rotated clockwise by 5-20°. This range includes the presumed rotation angle of the Rwenzori block (5°). In summary, the sandbox experiments examine the influence of pre-existing structures and of the overlap or intersection of two interacting rift segments on the development of the rift system; they also address the question of how block formation and block rotation affect the local stress field.

Relevance:

30.00%

Publisher:

Abstract:

The aim of Tissue Engineering is to develop biological substitutes that restore lost morphological and functional features of diseased or damaged portions of organs. Recently, computer-aided technology has received considerable attention in the area of tissue engineering, and the advance of additive manufacturing (AM) techniques has significantly improved control over the pore network architecture of tissue engineering scaffolds. To regenerate tissues more efficiently, an ideal scaffold should have appropriate porosity and pore structure. More sophisticated porous configurations, with pore network architectures and scaffolding structures that mimic the intricate architecture and complexity of native organs and tissues, are therefore required. This study adopts a macro-structural shape design approach to the production of open porous materials (titanium foams), which utilizes spatial periodicity as a simple way to generate the models. From among the various pore architectures that have been studied, this work simulated the pore structure by triply-periodic minimal surfaces (TPMS) for the construction of tissue engineering scaffolds. TPMS are shown to be a versatile source of biomorphic scaffold design, and a set of tissue scaffolds using TPMS-based unit cell libraries was designed. The TPMS-based titanium foams were meant to be 3D-printed with the predicted geometry and microstructure, and consequently with predictable mechanical properties. Through a finite element analysis (FEA), the mechanical properties of the designed scaffolds were determined in compression and analyzed in terms of their porosity and assemblies of unit cells. The purpose of this work was to investigate the mechanical performance of TPMS models, trying to find the best compromise between the mechanical and geometrical requirements of the scaffolds. The intention was to predict the structural modulus of open porous materials via the structural design of interconnected three-dimensional lattices, hence optimising their geometrical properties. With the aid of the FEA results, it is expected that the effective mechanical properties of the TPMS-based scaffold units can be used to design optimized scaffolds for tissue engineering applications. Regardless of the influence of the fabrication method, it is desirable to calculate scaffold properties so that the effect of these properties on tissue regeneration may be better understood.
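
As a concrete example of a TPMS, the gyroid has the standard implicit form f(x,y,z) = sin x cos y + sin y cos z + sin z cos x; the sketch below samples one unit cell to estimate the solid volume fraction of a sheet gyroid with thickness parameter t (an illustrative value, not one from the thesis):

```cpp
// Sheet gyroid TPMS: solid where |f(x,y,z)| <= t, with
// f = sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x).
// Monte Carlo sampling of one periodic unit cell estimates the solid
// volume fraction (and hence the porosity) as a function of t.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double PI = 3.141592653589793;
    const double t = 0.5;                 // sheet half-thickness (illustrative)
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> u(0.0, 2.0 * PI);

    long inside = 0;
    const long samples = 1000000;
    for (long s = 0; s < samples; ++s) {
        double x = u(rng), y = u(rng), z = u(rng);
        double f = std::sin(x) * std::cos(y) + std::sin(y) * std::cos(z)
                 + std::sin(z) * std::cos(x);
        if (std::fabs(f) <= t) ++inside;
    }
    double solid = static_cast<double>(inside) / samples;
    std::printf("solid fraction = %.3f, porosity = %.3f\n", solid, 1.0 - solid);
    return 0;
}
```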

Relevance:

30.00%

Publisher:

Abstract:

Computer Simulations of Colloidal Fluids in Confined Geometries. Colloidal suspensions that exhibit a phase transition show a variety of interesting effects when they are confined to a particular geometry, such as cylindrical pores, spherical cavities, or a slit between planar walls. The influence of these different types of geometry on both the phase behavior and the dynamics of colloid-polymer mixtures is investigated by computer simulations using the Asakura-Oosawa model, for which a phase transition exists due to the depletion forces. In the case of cylindrical pores, an interesting phase behavior arises from the one-dimensional character of the system. In a short pore, in the region of the phase diagram where the system typically demixes, either a polymer-rich or a colloid-rich phase is found. However, as soon as the length of the cylindrical pore exceeds the typical correlation length along the cylinder axis, several quasi-one-dimensional domains of the polymer-rich and the colloid-rich phase form and coexist. These investigations help to explain the behavior of adsorption hysteresis curves in corresponding experiments. When the colloid-polymer model system is confined to a spherical cavity, the location of the phase transition from the polymer-rich to the colloid-rich phase shifts. It is shown that this shift depends directly on the wetting properties of the system, which allows the observation of two different morphologies at phase coexistence: shell structures and Janus-type structures. In the context of the study of heterogeneous nucleation of crystals within a liquid, a new simulation method for calculating the free energies of the interface between the crystal or liquid phase and a wall is presented. The results for a system of hard spheres and for a colloid-polymer mixture are then used to determine the contact angles of crystal nuclei at walls. The dynamics of the phase separation of a quasi-two-dimensional system, which develops after a quench from the homogeneous state into the demixed state, is investigated with a mesoscale simulation method (multi-particle collision dynamics), which is well suited for a detailed study of the influence of hydrodynamic interactions. The exponents of the universal power laws describing the growth of the mean domain size, which are known for purely two- and three-dimensional systems, can be confirmed for certain parameter ranges. The different dynamics perpendicular and parallel to the walls, as well as the influence of the boundary conditions for the solvent, are investigated. It is shown that the resulting screening of the range of the hydrodynamic interactions has strong effects on the growth of the mean domain size.