905 results for Distributed systems, modeling, composites, finite elements


Relevance:

100.00%

Publisher:

Abstract:

[EN] The dynamic through-soil interaction between nearby pile-supported structures in a viscoelastic half-space, under incident S and Rayleigh waves, is studied numerically. To this end, a three-dimensional viscoelastic BEM-FEM formulation for the dynamic analysis of piles and pile groups in the frequency domain is used, in which the soil is modelled by the BEM and the piles are represented by one-dimensional finite elements as Bernoulli beams.
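
For orientation, a minimal sketch (not taken from the paper) of the frequency-domain Euler-Bernoulli beam equation that such one-dimensional pile elements discretize, with material damping omitted; EI is the bending stiffness, ρA the mass per unit length, ŵ the lateral pile deflection and p̂ the distributed soil-pile interaction load supplied by the BEM soil model:

    EI \, \frac{\mathrm{d}^4 \hat{w}(z,\omega)}{\mathrm{d}z^4}
      - \rho A \, \omega^{2} \, \hat{w}(z,\omega) \;=\; \hat{p}(z,\omega)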

Relevance:

100.00%

Publisher:

Abstract:

[EN] We present advances in the meccano method [1,2] for tetrahedral mesh generation and volumetric parameterization of solids. The method combines several earlier procedures: a mapping from the meccano boundary to the solid surface, a 3-D local refinement algorithm, and a simultaneous mesh untangling and smoothing. The key to the method lies in defining a one-to-one volumetric transformation between the parametric and physical domains. Results with adaptive finite elements will be shown for several engineering problems. In addition, the application of the method to T-spline modelling and isogeometric analysis [3,4] of complex geometries will be introduced…
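
As a hedged illustration of the untangling-and-smoothing step (one common formulation of simultaneous untangling and smoothing, not necessarily the exact functional used by the authors), the position x of each free node is obtained by minimizing an objective of the form

    K(\mathbf{x}) \;=\; \sum_{k=1}^{M} \frac{\lVert S_k \rVert^{2}}{3\, h(\sigma_k)^{2/3}},
    \qquad
    h(\sigma) \;=\; \tfrac{1}{2}\bigl(\sigma + \sqrt{\sigma^{2} + 4\delta^{2}}\bigr),

where S_k is the weighted Jacobian matrix of the k-th tetrahedron incident to the node, σ_k = det S_k, and the regularization δ > 0 keeps the functional finite for inverted elements, so that untangling and smoothing can be carried out in a single minimization.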

Relevance:

100.00%

Publisher:

Abstract:

Stress recovery techniques have been an active research topic since Zienkiewicz and Zhu proposed a procedure called Superconvergent Patch Recovery (SPR). This procedure is a least-squares fit of stresses at superconvergent points over patches of elements, and it leads to enhanced stress fields that can be used for evaluating finite element discretization errors. In subsequent years, numerous improved forms of this procedure were proposed, attempting to add equilibrium constraints to improve its performance. Later, another superconvergent technique, called Recovery by Equilibrium in Patches (REP), was proposed; in this case the idea is to impose equilibrium in a weak form over patches and solve the resulting equations by a least-squares scheme. More recently, a further procedure based on the minimization of complementary energy, called Recovery by Compatibility in Patches (RCP), has been proposed. In many ways this procedure can be seen as the dual form of REP, as it essentially imposes compatibility in a weak form among a set of self-equilibrated stress fields.

In this thesis a new insight into RCP is presented, and the procedure is improved with the aim of obtaining convergent second-order derivatives of the stress resultants. To achieve this, two different strategies and their combination have been tested: the first is to consider larger patches, in the spirit of what is proposed in [4]; the second is to perform a second recovery on the recovered stresses. Numerical tests in plane stress conditions are presented, showing the effectiveness of these procedures. Afterwards, a new recovery technique called Least Square Displacements (LSD) is introduced. This procedure is based on a least-squares interpolation of the nodal displacements resulting from the finite element solution. It has in fact been observed that the major part of the error affecting the stress resultants is introduced when the shape functions are differentiated in order to obtain strain components from displacements. The procedure proves to be ultraconvergent and extremely cost effective, as it needs as input only the nodal displacements coming directly from the finite element solution, avoiding any other post-processing otherwise required to obtain stress resultants by the traditional method. Numerical tests in plane stress conditions are then presented, showing that the procedure is ultraconvergent and leads to convergent first- and second-order derivatives of the stress resultants.

Finally, the reconstruction of transverse stress profiles using First-order Shear Deformation Theory for laminated plates and the three-dimensional equilibrium equations is presented. The accuracy of this reconstruction depends on the accuracy of the first and second derivatives of the stress resultants, which is not guaranteed by most of the available low-order plate finite elements. The RCP and LSD procedures are then used to compute convergent first- and second-order derivatives of the stress resultants, ensuring convergence of the reconstructed transverse shear and normal stress profiles, respectively. Numerical tests are presented and discussed, showing the effectiveness of both procedures.
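
As a rough, hypothetical illustration of the least-squares patch fit on which SPR-type recovery is based (a simplified 2-D sketch with a linear polynomial basis, not the RCP or LSD formulations developed in the thesis):

    import numpy as np

    def recover_stress_on_patch(sample_xy, sample_stress, eval_xy):
        """Least-squares fit of a linear polynomial a0 + a1*x + a2*y to stress
        values sampled at (super)convergent points of a patch, then evaluation
        of the smoothed field at arbitrary points (e.g. the patch nodes)."""
        x, y = sample_xy[:, 0], sample_xy[:, 1]
        P = np.column_stack([np.ones_like(x), x, y])      # polynomial basis
        coeffs, *_ = np.linalg.lstsq(P, sample_stress, rcond=None)
        xe, ye = eval_xy[:, 0], eval_xy[:, 1]
        Pe = np.column_stack([np.ones_like(xe), xe, ye])
        return Pe @ coeffs                                # recovered stresses

    # Hypothetical usage: four Gauss-point samples of sigma_xx on one patch,
    # recovered at two corner nodes of the patch.
    gauss_xy = np.array([[0.25, 0.25], [0.75, 0.25], [0.25, 0.75], [0.75, 0.75]])
    sigma_xx = np.array([10.1, 12.0, 9.9, 11.8])
    nodes_xy = np.array([[0.0, 0.0], [1.0, 1.0]])
    print(recover_stress_on_patch(gauss_xy, sigma_xx, nodes_xy))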

Relevance:

100.00%

Publisher:

Abstract:

Higher-order process calculi are formalisms for concurrency in which processes can be passed around in communications. Higher-order (or process-passing) concurrency is often presented as an alternative paradigm to the first-order (or name-passing) concurrency of the pi-calculus for the description of mobile systems. These calculi are inspired by, and formally close to, the lambda-calculus, whose basic computational step, beta-reduction, involves term instantiation. The theory of higher-order process calculi is more complex than that of first-order process calculi. This shows up, for instance, in the definition of behavioral equivalences. A long-standing approach to overcoming this burden is to define encodings of higher-order processes into a first-order setting, so as to transfer the theory of the first-order paradigm to the higher-order one. While satisfactory in the case of calculi with basic (higher-order) primitives, this indirect approach falls short for higher-order process calculi featuring constructs for phenomena such as localities and dynamic system reconfiguration, which are frequent in modern distributed systems. Indeed, for higher-order process calculi involving little more than traditional process communication, encodings into some first-order language are difficult to handle or do not exist. We therefore observe that foundational studies for higher-order process calculi must be carried out directly on them and exploit their peculiarities. This dissertation contributes to such foundational studies for higher-order process calculi. We concentrate on two closely interwoven issues in process calculi: expressiveness and decidability. Surprisingly, these issues have been little explored in the higher-order setting. Our research is centered around a core calculus for higher-order concurrency in which only the operators strictly necessary to obtain higher-order communication are retained. We develop the basic theory of this core calculus and rely on it to study the expressive power of features universally accepted as basic in process calculi, namely synchrony, forwarding, and polyadic communication.
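
To make process-passing concrete, here is a toy, hypothetical interpreter sketch (loosely in the spirit of a minimal higher-order calculus, not the dissertation's core calculus), in which a communication step substitutes a whole process for a variable, much as beta-reduction instantiates a term:

    from dataclasses import dataclass

    # Toy higher-order processes: an output sends a process on a channel,
    # an input receives a process and binds it to a variable in its body.
    @dataclass
    class Nil: pass
    @dataclass
    class Var: name: str
    @dataclass
    class Send: chan: str; payload: object           # a<P>
    @dataclass
    class Recv: chan: str; var: str; body: object    # a(x). Q
    @dataclass
    class Par: left: object; right: object           # P | Q

    def subst(proc, var, payload):
        """Capture-naive substitution of a process for a variable (toy only)."""
        if isinstance(proc, Var):
            return payload if proc.name == var else proc
        if isinstance(proc, Send):
            return Send(proc.chan, subst(proc.payload, var, payload))
        if isinstance(proc, Recv):
            return proc if proc.var == var else Recv(proc.chan, proc.var, subst(proc.body, var, payload))
        if isinstance(proc, Par):
            return Par(subst(proc.left, var, payload), subst(proc.right, var, payload))
        return proc

    def step(proc):
        """One communication step: a<P> | a(x).Q  -->  Q{P/x}."""
        if isinstance(proc, Par):
            l, r = proc.left, proc.right
            if isinstance(l, Send) and isinstance(r, Recv) and l.chan == r.chan:
                return subst(r.body, r.var, l.payload)
        return proc

    # a<b<Nil>> | a(x).x  reduces to  b<Nil>: the received process is then run.
    print(step(Par(Send("a", Send("b", Nil())), Recv("a", "x", Var("x")))))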

Relevance:

100.00%

Publisher:

Abstract:

We present a nonlinear technique to invert strong motion records with the aim of obtaining the final slip and rupture velocity distributions on the fault plane. In this thesis, the ground motion simulation is obtained by evaluating the representation integral in the frequency domain. The Green's tractions are computed using the discrete wavenumber integration technique, which provides the full wave field in a 1D layered propagation medium. The representation integral is computed through a finite element technique based on a Delaunay triangulation of the fault plane. The rupture velocity is defined on a coarser regular grid, and rupture times are computed by integration of the eikonal equation. For the inversion, the slip distribution is parameterized by 2D overlapping Gaussian functions, which can easily relate the spectrum of the possible solutions to the minimum resolvable wavelength, itself determined by the source-station distribution and the data processing. The inverse problem is solved by a two-step procedure aimed at separating the computation of the rupture velocity from the evaluation of the slip distribution, the latter being a linear problem when the rupture velocity is fixed. The nonlinear step is solved by optimization of an L2 misfit function between synthetic and real seismograms, with the solution searched for using the Neighbourhood Algorithm; the conjugate gradient method is used to solve the linear step. The developed methodology has been applied to the M7.2 Iwate-Miyagi Nairiku, Japan, earthquake. The estimated seismic moment is 2.63 × 10^26 dyne·cm, which corresponds to a moment magnitude Mw 6.9, while the mean rupture velocity is 2.0 km/s. A large slip patch extends from the hypocenter to the southern shallow part of the fault plane, and a second relatively large slip patch is found in the northern shallow part. Finally, we give a quantitative estimation of the errors associated with the parameters.
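
For reference, the quoted values are related by the standard Hanks-Kanamori moment-magnitude formula (with the seismic moment M0 expressed in dyne·cm):

    M_W \;=\; \tfrac{2}{3}\log_{10} M_0 \;-\; 10.7,
    \qquad
    M_0 = 2.63\times10^{26}\ \mathrm{dyne\cdot cm}
    \;\Rightarrow\;
    M_W \approx \tfrac{2}{3}(26.42) - 10.7 \approx 6.9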

Relevance:

100.00%

Publisher:

Abstract:

In computer systems, and specifically in multithreaded, parallel and distributed systems, a deadlock is both a very subtle problem, because it is difficult to prevent while coding the system, and a very dangerous one: a deadlocked system can easily become completely stuck, with consequences ranging from simple annoyances to life-threatening circumstances, with the far-from-negligible scenario of economic losses in between. How, then, can this problem be avoided? Many possible solutions have been studied, proposed and implemented. In this thesis we focus on the detection of deadlocks with a static program analysis technique, i.e. an analysis performed without actually executing the program. To begin, we briefly present the static Deadlock Analysis Model developed for coreABS−− in chapter 1, and then detail the class-based coreABS−− language in chapter 2. In chapter 3 we lay the foundation for further discussion by analyzing the differences between coreABS−− and ASP, an untyped object-based calculus, so as to show how the Deadlock Analysis could be extended to object-based languages in general. In this regard, we make some hypotheses explicit in chapter 4, first by presenting a possible, unproven type system for ASP, modeled after the Deadlock Analysis Model developed for coreABS−−. We then conclude the discussion by presenting a simpler hypothesis, which may allow us to circumvent the difficulties that arise from the definition of the "ad-hoc" type system discussed in the preceding chapter.
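
As a minimal, generic illustration of the kind of bug being targeted (unrelated to the coreABS−− analysis itself), two threads acquiring the same pair of locks in opposite orders create a circular wait; the timeout below only makes the symptom observable instead of hanging the program:

    import threading, time

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def worker(first, second, name):
        with first:
            time.sleep(0.1)              # widen the window for the race
            # Acquiring the second lock in the opposite order of the other
            # thread creates a circular wait: the classic deadlock condition.
            if second.acquire(timeout=1.0):
                second.release()
                print(f"{name}: finished normally")
            else:
                print(f"{name}: potential deadlock detected (gave up waiting)")

    t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
    t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
    t1.start(); t2.start(); t1.join(); t2.join()

A static analysis of the kind studied in the thesis aims to flag such circular dependencies without ever running the program.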

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents several techniques designed to drive a swarm of robots through an a priori unknown environment in order to move the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm are exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps keep the environmental information detected by each single agent up to date across the swarm. Swarm Intelligence is applied through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory is applied by exploiting Consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach preserves the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
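
A minimal sketch (with hypothetical gains and a ring communication topology) of how a PSO velocity update can be combined with a consensus step over the communication graph, in the spirit of the approach described above:

    import numpy as np

    rng = np.random.default_rng(0)
    n, dim = 5, 2
    pos = rng.uniform(-5, 5, (n, dim))             # robot positions
    vel = np.zeros((n, dim))
    pbest, gbest = pos.copy(), pos[0].copy()
    goal = np.array([10.0, 10.0])
    # Ring communication topology: robot i talks to robots i-1 and i+1.
    neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    w, c1, c2, eps = 0.7, 1.5, 1.5, 0.2            # hypothetical gains

    def cost(x):                                   # distance to the target area
        return np.linalg.norm(x - goal)

    for _ in range(100):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # PSO step: navigate toward personal and swarm best positions.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        # Consensus step: each robot moves toward its neighbors to keep formation.
        pos = pos + eps * np.array(
            [sum(pos[j] - pos[i] for j in neighbors[i]) for i in range(n)])
        for i in range(n):                         # update memories
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=cost).copy()

    print("final positions:\n", pos.round(2))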

Relevance:

100.00%

Publisher:

Abstract:

In distributed systems such as clouds or service-oriented frameworks, applications are typically assembled by deploying and connecting a large number of heterogeneous software components, spanning from fine-grained packages to coarse-grained complex services. The complexity of such systems requires a rich set of techniques and tools to support the automation of their deployment process. By relying on a formal model of components, a technique is devised for computing the sequence of actions that allows the deployment of a desired configuration. An efficient algorithm, working in polynomial time, is described and proven to be sound and complete. Finally, a prototype tool implementing the proposed algorithm has been developed; experimental results support the adoption of this novel approach in real-life scenarios.
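
A deliberately simplified sketch of the flavor of such deployment planning: if each component must merely be deployed after the components it requires (ignoring the richer component life-cycles and conflicts handled by the formal model), a valid action sequence is a topological order of the dependency graph, computable in polynomial time:

    from collections import deque

    def deployment_order(requires):
        """Kahn's algorithm: return a deployment sequence in which every
        component appears after all of the components it requires."""
        components = set(requires) | {d for deps in requires.values() for d in deps}
        indegree = {c: 0 for c in components}
        dependents = {c: [] for c in components}
        for comp, deps in requires.items():
            for dep in deps:
                indegree[comp] += 1
                dependents[dep].append(comp)
        ready = deque(c for c in components if indegree[c] == 0)
        order = []
        while ready:
            c = ready.popleft()
            order.append(c)
            for nxt in dependents[c]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    ready.append(nxt)
        if len(order) != len(components):
            raise ValueError("cyclic dependencies: no deployment plan exists")
        return order

    # Hypothetical configuration: a web app needs a database and a cache,
    # and the cache needs the database to exist first.
    print(deployment_order({"webapp": ["db", "cache"], "cache": ["db"], "db": []}))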

Relevance:

100.00%

Publisher:

Abstract:

[DE] This work describes experiments with an apparatus called MikroSISAK, which is capable of performing liquid-liquid extraction on the microliter scale: two immiscible liquids are emulsified in a microstructure and subsequently separated again through a Teflon membrane. In initial experiments, different extraction systems for elements of groups 4 and 7 of the periodic table were investigated and the results compared with those from shaking (batch) experiments. Since the extraction yields achieved at first were not sufficient, several measures were taken to improve them. First, the temperature at which the extraction takes place was raised by means of a heating element attached to the MikroSISAK apparatus; as hoped, this led to a higher extraction yield. Furthermore, MikroSISAK was modified with an extension by the Institut für Mikrotechnik Mainz, the developer and manufacturer of the apparatus, in order to prolong the contact of the two phases between the mixer and the separation unit; this, too, improved the extraction yield. The results then appeared sufficient to couple the apparatus to the TRIGA reactor Mainz for online experiments. For this purpose, technetium fission products produced by nuclear reactions were fed into MikroSISAK in order to separate them there and subsequently detect them via their decay at a detector. In addition to successful results, these experiments also provided evidence for the functionality of a new degasser and for the possibility of connecting both it and an adequate detector system to the MikroSISAK apparatus. This establishes the prerequisite for the actual application idea behind the development of MikroSISAK: the investigation of the chemical properties of short-lived superheavy elements (SHE) at a heavy-ion accelerator. It is natural to envisage such experiments for the heavier homologue of technetium, element 107, bohrium.

Relevance:

100.00%

Publisher:

Abstract:

This thesis work encompasses activities carried out at the Laser Center of the Polytechnic University of Madrid and in the laboratories of the University of Bologna in Forlì. It focuses on a surface mechanical treatment for metallic materials called Laser Shock Peening (LSP). This process is a surface enhancement treatment that induces a significant layer of beneficial compressive residual stresses underneath the surface of metal components in order to mitigate the detrimental effects of crack growth. The innovative aspect of this work is the application of LSP to specimens with extremely low thickness. After a bibliographic study and a comparison with the main treatments used for the same purposes, this work analyzes the physics of the operation of a laser, its interaction with the surface of the material, and the generation of the surface residual stresses that are fundamental to obtaining the benefits of LSP. In particular, the treatment is applied to Al2024-T351 specimens of low thickness. Among the improvements that can be obtained with this operation, the most important in the aeronautical field is the fatigue life improvement of the treated components: as demonstrated in this work, a well-executed LSP treatment can slow down the growth of the defects in the material that could otherwise lead to sudden failure of the structure. Part of this thesis is the simulation of this phenomenon using the program AFGROW, with which different geometric configurations of the treatment have been analyzed in order to verify which is best for large panels of typical aeronautical interest. The core of the LSP process is the residual stress field induced in the material by the interaction with the laser light; this can be simulated with finite elements, but it is essential to verify and measure it experimentally. The thesis introduces the main methods for measuring those stresses, which can be mechanical or diffraction-based. In particular, the principles and detailed procedure of the Hole Drilling measurement are described, together with an introduction to X-ray Diffraction, and the results obtained with both techniques are presented; the Neutron Diffraction method is also introduced. The last part covers the experimental fatigue-life tests of the specimens, with a detailed description of the apparatus and of the procedure used, from the initial specimen preparation to the fatigue test on the press; the results obtained are then presented and discussed.
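
For context, a schematic and simplified sketch of how compressive residual stresses typically enter crack-growth predictions of the kind performed with AFGROW: by superposition they shift the stress-intensity factors in a Paris-type law, with C and m material constants and K_res the contribution of the LSP-induced residual stress field:

    \frac{\mathrm{d}a}{\mathrm{d}N} \;=\; C\,\bigl(\Delta K_{\mathrm{eff}}\bigr)^{m},
    \qquad
    \Delta K_{\mathrm{eff}} \;=\; \bigl(K_{\max} + K_{\mathrm{res}}\bigr)
        - \max\bigl(K_{\min} + K_{\mathrm{res}},\,0\bigr),

so that a compressive (negative) K_res lowers ΔK_eff and slows crack growth.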

Relevance:

100.00%

Publisher:

Abstract:

Liquids and gases form a vital part of nature, and many of them are complex fluids with non-Newtonian behaviour. We introduce a mathematical model describing the unsteady motion of an incompressible polymeric fluid. Each polymer molecule is treated as two beads connected by a spring. For a nonlinear spring force it is not possible to obtain a closed system of equations unless the force law is approximated; the Peterlin approximation replaces the length of the spring by the length of the average spring. Consequently, the macroscopic dumbbell-based model for dilute polymer solutions is obtained. The model consists of the conservation of mass and momentum and the time evolution of the symmetric positive definite conformation tensor, where diffusive effects are taken into account. In two space dimensions we prove global-in-time existence of weak solutions. Assuming more regular data, we show higher regularity and, consequently, uniqueness of the weak solution. For the Oseen-type Peterlin model we propose a linear pressure-stabilized characteristics finite element scheme. We derive the corresponding error estimates and prove, for linear finite elements, optimal first-order accuracy. The theoretical error estimates of the pressure-stabilized characteristics finite element scheme are confirmed by a series of numerical experiments.
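
Schematically, and with the relaxation terms written only generically as R(tr C), such a system couples the incompressible Navier-Stokes equations to a diffusive evolution equation for the conformation tensor C, the polymeric extra stress being given by the Peterlin closure (a sketch of the structure, not the exact scaling used in the thesis):

    \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} + \nabla p
        \;=\; \nu\,\Delta\mathbf{u} + \operatorname{div}\boldsymbol{\tau}(\mathbf{C}),
    \qquad \operatorname{div}\mathbf{u} = 0,

    \partial_t \mathbf{C} + (\mathbf{u}\cdot\nabla)\mathbf{C}
        - (\nabla\mathbf{u})\,\mathbf{C} - \mathbf{C}\,(\nabla\mathbf{u})^{\top}
        \;=\; \varepsilon\,\Delta\mathbf{C} - R(\operatorname{tr}\mathbf{C}),
    \qquad \boldsymbol{\tau}(\mathbf{C}) \;\propto\; (\operatorname{tr}\mathbf{C})\,\mathbf{C}.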

Relevance:

100.00%

Publisher:

Abstract:

[IT] The rapid development we have been witnessing in recent years in mobile technologies and wearable computing opens the door to innovative and interesting scenarios for collaborative distributed systems. Thanks to these new technologies, people can cooperate and exchange information ever more easily, and one can envisage systems that enable very advanced forms of collaboration by leveraging the hands-free aspect, that is, the possibility of using devices that leave the hands free, such as modern smart-glasses. Developing such systems, however, requires studying new techniques and architectures, since the tools currently available to support augmented reality do not seem entirely adequate for this purpose. Indeed, platforms such as Wikitude or Layar, although they offer powerful marker and image recognition and rendering techniques, do not offer the dynamism that is fundamental for a collaborative distributed system. This work aims to explore these aspects through the conception, analysis, design and prototyping of a simple case study inspired by collaborative distributed systems based on augmented reality. In particular, attention is focused on the communication layer and the network infrastructure.
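
As a minimal sketch of the communication layer being explored (entirely hypothetical message format, topic names and API, not a design taken from the thesis), a tiny in-process publish/subscribe hub through which hands-free devices could share augmented-reality annotations:

    import json
    from collections import defaultdict

    class Broker:
        """Tiny in-process publish/subscribe hub for AR annotations."""
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, message):
            payload = json.dumps(message)          # what would go on the wire
            for callback in self.subscribers[topic]:
                callback(json.loads(payload))

    broker = Broker()
    broker.subscribe("ar/annotations",
                     lambda m: print("smart-glasses #2 renders:", m))
    # Device #1 shares an annotation anchored to a recognized marker.
    broker.publish("ar/annotations", {
        "device": "glasses-1",
        "marker_id": "room-42-door",
        "text": "Maintenance due here",
        "position": [0.2, 1.5, 0.0],
    })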

Relevance:

100.00%

Publisher:

Abstract:

The primary goal of this work is the extension of an analytic electro-optical model. The model is used to describe single-junction crystalline silicon solar cells and a silicon/perovskite tandem solar cell in the presence of light trapping, in order to calculate efficiency limits for such devices. In particular, our tandem system is composed of crystalline silicon and a perovskite-structured material, methylammonium lead triiodide (MALI). Perovskites are among the most attractive materials for photovoltaics thanks to their reduced cost and increasing efficiencies: solar cell efficiencies of devices using these materials increased from 3.8% in 2009 to a certified 20.1% in 2014, making this the fastest-advancing solar technology to date. Moreover, texturization increases the amount of light that can be absorbed in the active layer. Using Green's formalism it is possible to calculate analytically the photogeneration rate of a single-layer structure with Lambertian light trapping. In this work we go further: we study the optical coupling between the two cells in our tandem system in order to calculate the photogeneration rate of the whole structure. We also model the electronic part of the device by treating the perovskite top cell as an ideal diode and solving the drift-diffusion equations with appropriate boundary conditions for the silicon bottom cell. Since we consider a four-terminal structure, our tandem system is totally unconstrained. We then calculate the efficiency limits of the tandem, including several recombination mechanisms such as Auger, SRH and surface recombination. We also focus on the dependence of the results on the band gap of the perovskite and calculate the optimal band gap that maximizes the tandem efficiency. The whole work has been continuously supported by numerical validation of our analytic model against Silvaco ATLAS, which solves the drift-diffusion equations using a finite element method. Our goal is to develop a simpler and cheaper, yet accurate, model to study such devices.
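
A minimal sketch of the ideal-diode treatment of the top cell mentioned above (illustrative, non-measured parameter values only): the current density J(V) = Jsc - J0(exp(qV/kT) - 1) is swept to locate the maximum power point, and in a four-terminal tandem the sub-cell output powers simply add:

    import numpy as np

    q_over_kT = 1.0 / 0.02585            # 1/(thermal voltage) at ~300 K [1/V]

    def max_power(jsc, j0, v_max=1.5):
        """Maximum power density [W/cm^2] of an ideal diode under illumination."""
        v = np.linspace(0.0, v_max, 2000)
        j = jsc - j0 * (np.exp(q_over_kT * v) - 1.0)    # ideal-diode J-V [A/cm^2]
        return np.max(j * v)

    p_in = 0.100                         # AM1.5G irradiance, W/cm^2
    # Illustrative (not measured) parameters for perovskite top / silicon bottom
    # sub-cells of a four-terminal tandem; the bottom cell sees filtered light.
    p_top = max_power(jsc=0.020, j0=1e-21)
    p_bottom = max_power(jsc=0.019, j0=1e-13, v_max=0.9)
    print(f"tandem efficiency ~ {100 * (p_top + p_bottom) / p_in:.1f} %")

In the actual model the bottom-cell current is, of course, obtained from the drift-diffusion solution rather than from a second ideal diode.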

Relevance:

100.00%

Publisher:

Abstract:

Self-stabilization is a property of a distributed system whereby, regardless of the legitimacy of its current state, the system behavior eventually reaches a legitimate state and remains legitimate thereafter. The elegance of self-stabilization stems from the fact that it distinguishes distributed systems by a strong fault tolerance property against arbitrary state perturbations. The difficulty of designing and reasoning about self-stabilization has been witnessed by many researchers; most existing techniques for the verification and design of self-stabilization are either brute force or adopt manual approaches that are not amenable to automation. In this dissertation, we first investigate the possibility of automatically designing self-stabilization through global state space exploration. In particular, we develop a set of heuristics for automating the addition of recovery actions to distributed protocols on various network topologies. Our heuristics exploit both the computational power of a single workstation and the parallelism available on computer clusters. We obtain existing and new stabilizing solutions for classical protocols such as maximal matching, ring coloring, mutual exclusion, leader election and agreement. Second, we consider a foundation for local reasoning about self-stabilization, i.e., studying the global behavior of the distributed system by exploring the state space of just one of its components. It turns out that local reasoning about deadlocks and livelocks is possible for an interesting class of protocols whose proof of stabilization is otherwise complex. In particular, we provide necessary and sufficient conditions, verifiable in the local state space of every process, for global deadlock- and livelock-freedom of protocols on ring topologies. Local reasoning potentially circumvents two fundamental problems that complicate the automated design and verification of distributed protocols: (1) state explosion and (2) partial state information. Moreover, local proofs of convergence are independent of the number of processes in the network, thereby enabling our assertions about deadlocks and livelocks to apply to rings of arbitrary size without worrying about state explosion.
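
As a classic textbook illustration of self-stabilizing mutual exclusion on a ring (Dijkstra's K-state protocol, shown here only to fix ideas; it is not one of the solutions synthesized in the dissertation): starting from an arbitrary, possibly illegitimate state, the ring converges to configurations in which exactly one process holds the privilege:

    import random

    N, K = 5, 6                     # K > N suffices for convergence

    def privileged(x):
        """Indices of processes that currently hold a privilege (the 'token')."""
        return [i for i in range(N)
                if (i == 0 and x[0] == x[N - 1]) or (i > 0 and x[i] != x[i - 1])]

    def move(x, i):
        """Process i executes its move when privileged."""
        if i == 0:
            x[0] = (x[0] + 1) % K   # the bottom machine increments its value
        else:
            x[i] = x[i - 1]         # other machines copy their left neighbour

    random.seed(1)
    x = [random.randrange(K) for _ in range(N)]   # arbitrary initial state
    for step in range(50):
        p = privileged(x)
        print(f"step {step:2d}  state={x}  privileged={p}")
        if len(p) == 1:
            print("converged: exactly one privilege circulates from here on")
            break
        move(x, random.choice(p))   # an arbitrary privileged process moves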