906 results for boundary element method
Abstract:
Heat treatment of steels is a process of fundamental importance in tailoring the properties of a material to the desired application; a model able to describe such a process would make it possible to predict the microstructure obtained from the treatment and the resulting mechanical properties of the material. During a heat treatment, a steel can undergo two different kinds of phase transitions (p.t.): diffusive (second-order p.t.) and displacive (first-order p.t.). In this thesis an attempt is made to describe both in a thermodynamically consistent framework: a phase-field, diffuse-interface model accounting for the coupling between thermal, chemical and mechanical effects is developed, and a way to overcome the difficulties arising from the treatment of the non-local effects (gradient terms) is proposed. The governing equations are the balance of linear momentum, the Cahn-Hilliard equation and the balance of internal energy. The model is completed with a suitable description of the free energy, from which the constitutive relations are drawn. The equations are then cast in variational form, and different numerical techniques are used to deal with the principal features of the model: time dependency, non-linearity and the presence of high-order spatial derivatives. Simulations are performed using DOLFIN, a C++ library for the automated solution of partial differential equations by means of the finite element method; results are shown for different test cases. The analysis is restricted to a two-dimensional setting, which is simpler than a three-dimensional one but still meaningful.
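A minimal sketch of the Cahn-Hilliard dynamics at the heart of such a model may help fix ideas. The thesis solves the coupled problem with DOLFIN/FEM; the toy below instead uses an explicit finite-difference scheme on a periodic 2-D grid, and every parameter (mobility, gradient coefficient, double-well potential) is an illustrative assumption, not a value from the thesis:

```python
import numpy as np

def laplacian(u, dx=1.0):
    # 5-point Laplacian with periodic boundaries
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2

def cahn_hilliard_step(c, dt=0.01, M=1.0, kappa=1.0):
    # chemical potential mu = f'(c) - kappa*lap(c), double well f(c) = (c^2 - 1)^2 / 4
    mu = c**3 - c - kappa * laplacian(c)
    return c + dt * M * laplacian(mu)

rng = np.random.default_rng(0)
c = 0.02 * rng.standard_normal((64, 64))   # small fluctuations around c = 0
mass0 = c.mean()
for _ in range(500):
    c = cahn_hilliard_step(c)
print(abs(c.mean() - mass0))   # the order parameter is conserved
```

Because the update is the discrete divergence of a flux, the mean of c is conserved to rounding error, one of the structural properties a variational formulation of the same equation must also preserve.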
Abstract:
Metallic objects with dimensions on the order of the optical wavelength exhibit resonances in the optical spectral range. Using a combination of colloid lithography, metal film evaporation and reactive ion beam etching, gold and silver nanocrescents with identical shape and orientation were fabricated with sizes from 60 nm to 400 nm. The opening angle of the nanocrescents can be tuned continuously. Owing to the uniform orientation, ensemble measurements can be transferred directly to the behavior of a single object, as shown by a comparison of the extinction spectra of an ensemble measurement in a UV/Vis/NIR spectrometer with a single-particle measurement in a confocal microscope. The optical response of the nanocrescents was calculated as a two-dimensional model with a finite element method. The result is a set of polarization-dependent resonances in the optical spectrum, which can be shifted by varying the opening angle and the size of the nanocrescent. Illumination excites plasmonic oscillations that generate a strongly localized near field at the tips and in the opening of the nanocrescents. The near field of the particle resonance was verified with a photoresist method. The UV/Vis/NIR measurements show several polarization-dependent resonances in the spectral range from 300 nm to 3200 nm. The resonances of the nanocrescents can be shifted via the opening angle and the diameter by amounts on the order of the full width at half maximum in the optical spectrum. Applied as chemo- or biosensors, gold nanocrescents show a sensitivity similar to that of comparable sensors based on thin metal structures. The near field is strongly localized and penetrates, depending on the multipole order, between 14 nm and 70 nm into the surroundings. Quantum dots were coupled to the near field of the nanocrescents; the emission of the quantum dots at a wavelength of 860 nm is enhanced by the resonance of the nanocrescents. The nanocrescents were also used as optical tweezers: under excitation with a laser at a wavelength of 1064 nm, polystyrene colloids with a diameter of 40 nm were trapped by the resonant nanocrescents. The nanocrescents thus exhibit exceptional optical properties that can be tuned over a wide range via their geometric parameters, and the first applications point to uses in sensing, fluorescence spectroscopy and optical trapping.
Abstract:
This research has focused on the behavior and collapse of masonry arch bridges. Recent decades have seen an increasing interest in this structural type, which is still present and in use despite the passage of time and the evolution of transport means. Several strategies have been developed over time to simulate the response of this type of structure, although even today there is no generally accepted standard for the assessment of masonry arch bridges. The aim of this thesis is to compare the principal analytical and numerical methods existing in the literature on case studies, trying to highlight strengths and weaknesses. The methods examined are mainly three: i) the Thrust Line Analysis Method; ii) the Mechanism Method; iii) the Finite Element Method. The Thrust Line Analysis Method and the Mechanism Method are analytical methods derived from two of the fundamental theorems of plastic analysis, while the Finite Element Method is a numerical method that uses different discretization strategies to analyze the structure. Every method is applied to the case studies through computer-based implementations that allow a user-friendly application of the principles explained. A particular closed-form approach, based on an elasto-plastic material model and developed by Belgian researchers, is also studied. To compare the three methods, two different case studies have been analyzed: i) a generic single-span masonry arch bridge; ii) a real masonry arch bridge, the Clemente Bridge, built on the Savio River in Cesena. In the analyses performed, all the models are two-dimensional, so that the results of the different methods remain comparable. The methods have been compared with each other in terms of collapse load and hinge positions.
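The thrust-line concept reduces to a short calculation in the simplest case: for a parabolic arch under a uniform vertical load, the funicular polygon coincides with the centerline, so by the safe theorem the arch is stable for any finite thickness. A sketch with made-up geometry and load (not the thesis case studies):

```python
import numpy as np

L, f, t, w = 10.0, 2.0, 0.5, 20.0      # span, rise, thickness, load (assumed units)
x = np.linspace(0.0, L, 201)
M = w * x * (L - x) / 2.0              # simply-supported bending moment
H = w * L**2 / (8.0 * f)               # horizontal thrust from a crown hinge
thrust_line = M / H                    # funicular shape for this thrust
centerline = 4.0 * f * x * (L - x) / L**2
deviation = float(np.max(np.abs(thrust_line - centerline)))
print(deviation)   # effectively zero: the parabola is funicular for uniform load
```

Real assessments iterate on the thrust and its intercepts so the line stays within the arch ring under the worst load case; here the containment check deviation <= t/2 is trivially satisfied.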
Abstract:
This master's thesis describes the research done at the Medical Technology Laboratory (LTM) of the Rizzoli Orthopedic Institute (IOR, Bologna, Italy) from October 2012 to the present, focused on the characterization of the elastic properties of trabecular bone tissue. The approach uses computed microtomography to characterize the architecture of trabecular bone specimens. With the information obtained from the scanner, specimen-specific models of trabecular bone are generated and solved with the Finite Element Method (FEM). Along with the FEM modelling, mechanical tests are performed on the same reconstructed bone portions. From the linear-elastic stage of the mechanical tests, it is possible to estimate the mechanical properties of the trabecular bone tissue. After a brief introduction on the biomechanics of trabecular bone (chapter 1) and on the characterization of the mechanics of its tissue using FEM models (chapter 2), the reliability analysis of an experimental procedure is explained (chapter 3), based on the highly scalable numerical solver ParFE. In chapter 4, sensitivity analyses on two different parameters of the micro-FEM model reconstruction are presented. Once the reliability of the modeling strategy has been shown, a recent experimental test layout developed at the LTM is presented (chapter 5), and the results of its application are discussed, with emphasis on the difficulties observed during the tests. Finally, a prototype experimental layout for measuring deformations in trabecular bone specimens is presented (chapter 6). This procedure is based on the Digital Image Correlation method and is currently under development at the LTM.
Abstract:
We have developed a method for locating sources of volcanic tremor and applied it to a dataset recorded on Stromboli volcano before and after the onset of the February 27th, 2007 effusive eruption. Volcanic tremor has attracted considerable attention from seismologists because of its potential value as a tool for forecasting eruptions and for better understanding the physical processes that occur inside active volcanoes. Commonly used methods to locate volcanic tremor sources are: 1) array techniques; 2) semblance-based methods; 3) analysis of wavefield amplitudes. We have chosen the third approach, using quantitative modeling of the seismic wavefield. For this purpose, we have calculated the Green's functions (GF) in the frequency domain with the Finite Element Method (FEM), which is well suited to elliptic problems such as elastodynamics in the Fourier domain. The volcanic tremor source is located by determining the source function over a regular grid of points; the best-fit point is chosen as the tremor source location. The source inversion is performed in the frequency domain, using only the wavefield amplitudes. We illustrate the method and its validation on a synthetic dataset, and show preliminary results on the Stromboli dataset, evidencing temporal variations of the volcanic tremor sources.
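The amplitude-based location reduces to a grid search. In the sketch below, the FEM Green's functions are replaced by a bare 1/r geometrical-spreading model, and all station positions and amplitudes are invented; the point is only the inversion logic — fit a source strength per grid node by least squares and keep the node with the smallest residual:

```python
import numpy as np

stations = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0], [2.0, 5.0]])
true_src, S_true = np.array([1.5, 2.5]), 10.0
obs = S_true / np.linalg.norm(stations - true_src, axis=1)   # synthetic amplitudes

xs = ys = np.linspace(0.0, 4.0, 81)
best, best_misfit = None, np.inf
for gx in xs:
    for gy in ys:
        r = np.linalg.norm(stations - np.array([gx, gy]), axis=1)
        if r.min() < 1e-6:
            continue                      # skip nodes that coincide with a station
        g = 1.0 / r                       # stand-in for |GF| at each station
        S = (g @ obs) / (g @ g)           # least-squares source strength
        misfit = float(np.sum((obs - S * g) ** 2))
        if misfit < best_misfit:
            best, best_misfit = (float(gx), float(gy)), misfit
print(best)   # the grid node nearest the true source
```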
Abstract:
English: The assessment of safety in existing bridges and viaducts led the Ministry of Public Works of the Netherlands to finance a specific campaign aimed at studying the response of the elements of these infrastructures. This activity therefore focuses on the behaviour of reinforced concrete slabs under concentrated loads, adopting finite element modeling and comparison with experimental results. These elements are characterized by shear behaviour and shear failure, whose modeling is computationally challenging because of the brittle behavior combined with three-dimensional effects. The numerical modeling of the failure is studied through Sequentially Linear Analysis (SLA), an alternative finite element approach to the traditional incremental and iterative ones. The comparison between the two numerical techniques, carried out on one of the experimental tests executed on reinforced concrete slabs, represents one of the first such comparisons in a three-dimensional setting. The advantage of SLA is that it avoids the well-known convergence problems of typical non-linear analyses by directly specifying a damage increment, in terms of a reduction of stiffness and strength in a particular finite element, instead of increasing the load or displacement on the whole structure. For the first time, particular attention has been paid to specific aspects of the slabs, such as accurate modeling of the constraints and the sensitivity of the solution to the mesh density. This detailed analysis showed a strong influence of the tensile fracture energy, mesh density and chosen model on the solution in terms of the force-displacement diagram, the distribution of the crack patterns and the shear failure mode.
SLA showed great potential, but it requires further development in two aspects of modeling: load conditions (constant and proportional loads) and the softening behaviour of brittle materials (such as concrete) in the three-dimensional setting, in order to widen its horizons in these new contexts of study.
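The saw-tooth logic of SLA can be illustrated on a toy problem: two parallel bars loaded by a common displacement, where each linear step halves the stiffness and strength of the critical bar instead of iterating a non-linear solve. All numbers are assumptions, not the slab models of the thesis:

```python
import numpy as np

E = np.array([30e9, 30e9])          # Young's moduli [Pa] (assumed)
A = np.array([1e-4, 1e-4])          # cross-section areas [m^2]
ft = np.array([3.0e6, 2.4e6])       # tensile strengths [Pa]; bar 2 is weaker
peaks = []
for _ in range(5):
    eps_crit = float(np.min(ft / E))                # strain at which the weakest bar cracks
    peaks.append(float(np.sum(E * A) * eps_crit))   # peak load of this saw-tooth
    i = int(np.argmin(ft / E))
    E[i] /= 2.0                     # saw-tooth damage increment: halve stiffness
    ft[i] /= 2.0                    # ...and strength of the critical bar
print(peaks)   # decreasing peaks: softening captured by a sequence of linear steps
```

Each entry of `peaks` comes from a purely linear analysis, which is why the approach sidesteps the convergence issues of incremental-iterative solvers.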
Abstract:
The growing traffic volumes on road pavements cause stress states of considerable magnitude that produce permanent damage to the superstructure. Such damage reduces its service life and entails high maintenance costs. Asphalt concrete is a multiphase material composed of aggregate, bitumen and air voids. The physical properties and performance of the mixture depend on the characteristics of the aggregate, of the binder, and on their interaction. The approach traditionally used for the numerical modeling of asphalt concrete is based on a macroscopic study of its mechanical response through continuum constitutive models which, by their nature, do not account for the mutual interaction between the heterogeneous phases and rely on equivalent homogeneous schematizations. For these methodologies to evolve, this simplification must be overcome by considering the discrete character of the system and adopting a microscopic approach capable of representing the actual physical-mechanical processes on which the overall macroscopic response depends. In this work, after a general review of the main numerical methods traditionally employed for the study of asphalt concrete, the theory of the Particle Discrete Element Method (DEM-P) is examined in depth; it schematizes the granular material as a set of independent particles that interact with one another at their mutual contact points according to appropriate constitutive laws. The influence of the shape and size of the aggregate on the macroscopic (maximum deviatoric stress) and microscopic (normal and tangential contact forces, number of contacts, void ratio, porosity, packing, internal friction angle) characteristics of the mixture is evaluated.
This is made possible by comparing numerical and experimental results of triaxial tests conducted on specimens made of three different mixtures composed of spheres and elements of generic shape.
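The particle-level contact law at the core of DEM-P can be sketched in its simplest form: two spheres on a line interacting through a linear spring-dashpot normal contact. All parameters are illustrative, not calibrated to an asphalt mixture:

```python
# two equal spheres approaching head-on; linear spring-dashpot normal contact
R, m = 0.01, 1e-3          # radius [m], mass [kg] (assumed)
kn, cn = 1e4, 0.05         # normal stiffness [N/m] and damping [N*s/m] (assumed)
x1, x2 = -0.03, 0.03       # center positions on a line
v1, v2 = 1.0, -1.0         # approaching velocities [m/s]
dt = 1e-6
for _ in range(100_000):
    overlap = 2 * R - (x2 - x1)
    if overlap > 0.0:                               # particles in contact
        F = max(kn * overlap + cn * (v1 - v2), 0.0) # repulsive-only contact force
        a = F / m
    else:
        a = 0.0
    v1, v2 = v1 - a * dt, v2 + a * dt               # equal and opposite forces
    x1, x2 = x1 + v1 * dt, x2 + v2 * dt
print(v1, v2)   # rebound: directions reversed, speeds reduced by damping
```

The dashpot term dissipates energy during the collision, which is how such contact laws reproduce an effective coefficient of restitution below one.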
Abstract:
Finite element techniques for solving the problem of fluid-structure interaction of an elastic solid in a laminar incompressible viscous flow are described. The mathematical problem consists of the Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian formulation coupled with a non-linear structure model, treating the problem as one continuum. The coupling between the structure and the fluid is enforced inside a monolithic framework which solves simultaneously for the fluid and structure unknowns within a unique solver. We used the well-known Crouzeix-Raviart finite element pair for discretization in space and the method of lines for discretization in time. A stability result has been proved for the Backward Euler time-stepping scheme applied to both the fluid and solid parts with finite element discretization in space. The resulting linear systems are solved by multilevel domain decomposition techniques: our strategy is to solve several local subproblems over subdomain patches using the Schur complement or GMRES smoother within a multigrid iterative solver. For validation and evaluation of the accuracy of the proposed methodology, we present results for a set of two FSI benchmark configurations which describe the self-induced elastic deformation of a beam attached to a cylinder in laminar channel flow, allowing stationary as well as periodically oscillating deformations, and for a benchmark proposed by COMSOL Multiphysics in which a narrow vertical structure attached to the bottom wall of a channel bends under the force due to both viscous drag and pressure. Then, as an example of fluid-structure interaction in biomedical problems, we consider an academic numerical test which consists in simulating pressure wave propagation through a straight compliant vessel. All the tests show the applicability and the numerical efficiency of our approach in both two-dimensional and three-dimensional problems.
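The monolithic idea, advancing fluid and structure unknowns together so that each time step is one implicit solve over the coupled state, can be caricatured with a linear toy: a spring-mass "structure" with a fluid-like drag term, stepped with Backward Euler. This is an analogy only, not the Navier-Stokes/ALE system of the work:

```python
import numpy as np

k, m, c = 100.0, 1.0, 5.0          # stiffness, mass, drag (assumed)
A = np.array([[0.0, 1.0],          # state z = [displacement, velocity]
              [-k / m, -c / m]])   # dz/dt = A z
dt, z = 0.05, np.array([1.0, 0.0])
I = np.eye(2)
for _ in range(200):
    z = np.linalg.solve(I - dt * A, z)   # one coupled linear solve per step
print(np.linalg.norm(z))   # Backward Euler damps the solution unconditionally
```

The single `solve` per step is the miniature analogue of the monolithic system; partitioned schemes would instead alternate between the two state components.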
Abstract:
Laser Shock Peening (LSP) is a surface enhancement treatment which induces a significant layer of beneficial compressive residual stresses, up to several mm underneath the surface of metal components, in order to slow the rate of crack growth within them. The aim of this thesis is to predict the crack growth behavior in metallic specimens with one or more stripes defining the compressive residual stress area induced by the Laser Shock Peening treatment. The process was applied as crack retardation stripes perpendicular to the crack propagation direction, with the objective of slowing down the crack as it approaches the peened stripes. The finite element method has been applied to simulate the redistribution of stresses in a cracked model subjected to a tension load and a compressive residual stress field, and to evaluate the Stress Intensity Factor (SIF) in this condition. Finally, the Afgrow software is used to predict the crack growth behavior of the component following the Laser Shock Peening treatment and to quantify the improvement in fatigue life relative to the baseline specimen. An educational internship at the “Research & Technologies Germany – Hamburg” department of AIRBUS provided the knowledge and experience needed to write this thesis.
The main tasks of the thesis are the following:
• To carry out an up-to-date literature survey on “Laser Shock Peening in Metallic Structures”
• To validate the FE model developed against experimental measurements at coupon level
• To design the crack growth slowdown in Centered Cracked Tension specimens based on a residual stress engineering approach, using a laser peened stripe transversal to the crack path
• To evaluate the Stress Intensity Factor values for Centered Cracked Tension specimens after the Laser Shock Peening treatment via Finite Element Analysis
• To predict the crack growth behavior in Centered Cracked Tension specimens using as input the SIF values evaluated with the FE simulations
• To validate the results by means of experimental tests
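The prediction step can be sketched as a Paris-law integration in which the SIF range is reduced inside an assumed peened stripe. Every number below (Paris constants, stress range, stripe location, SIF reduction) is a placeholder; the thesis itself uses Afgrow fed with FE-computed SIF values:

```python
import numpy as np

C, m_exp = 1e-11, 3.0                  # Paris constants (assumed units: m, MPa*sqrt(m))
dsigma = 100.0                         # applied stress range [MPa]
stripe = (0.012, 0.016)                # peened stripe along the crack path [m]

def dK(a, peened):
    k = dsigma * np.sqrt(np.pi * a)    # center-crack SIF range, infinite plate
    if peened and stripe[0] <= a <= stripe[1]:
        k = max(k - 10.0, 1.0)         # residual compression lowers the effective dK
    return k

def life(a0, af, peened, da=1e-5):
    n, a = 0.0, a0
    while a < af:
        n += da / (C * dK(a, peened) ** m_exp)   # invert da/dN = C * dK^m
        a += da
    return n

n_base = life(0.005, 0.025, peened=False)
n_lsp = life(0.005, 0.025, peened=True)
print(n_lsp / n_base)   # > 1: the stripe retards the crack
```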
Abstract:
Capillary forces arise from the formation of a meniscus between two solids. In this doctoral thesis, the effects of elastic deformation and liquid adsorption on the capillary force were investigated both theoretically and experimentally. Using an atomic force microscope, the capillary force between a silicon oxide colloid of 2 µm radius and a soft surface, such as polydimethylsiloxane or polyisoprene, was measured under ambient conditions as well as at varying ethanol vapor pressures. These results were compared with capillary forces measured on a hard substrate (silicon wafer) under the same conditions. We observed a monotonic decrease of the capillary force with increasing ethanol vapor pressure (P) for P/Psat > 0.2, where Psat is the saturation vapor pressure. To explain the experimental results, a previously developed analytical model (Soft Matter 2010, 6, 3930) was extended to account for ethanol adsorption. This new analytical model revealed two different dependences of the capillary force on P/Psat on hard and soft surfaces. For the hard surface of the silicon wafer, the dependence of the capillary force on vapor pressure is governed by the ratio of the thickness of the adsorbed ethanol layer to the meniscus radius. On soft polymer surfaces, by contrast, the capillary force depends on the surface deformation and the Laplace pressure within the meniscus; the decrease of the capillary force with increasing ethanol vapor pressure thus reflects the decrease of the Laplace pressure with increasing meniscus radius. The analytical calculations, which assumed a Hertzian contact deformation, were compared with finite element method simulations that explicitly account for the real deformation of the elastic substrate in the vicinity of the meniscus.
This additional upward surface deformation in the region of the meniscus leads to a further increase of the capillary force, in particular for soft surfaces with elastic moduli below 100 MPa.
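The vapor-pressure dependence enters through the Kelvin equation, which sets the meniscus radius. A numeric sketch with textbook constants for ethanol at room temperature — this is the standard rigid-surface picture, not the extended model of the thesis:

```python
import numpy as np

gamma, Vm = 0.022, 5.87e-5     # ethanol surface tension [N/m], molar volume [m^3/mol]
Rg, T = 8.314, 298.0
R_colloid = 2e-6               # 2 um silica colloid, as in the experiment

def kelvin_radius(p_rel):
    # meniscus radius grows as the vapor pressure approaches saturation
    return gamma * Vm / (Rg * T * np.log(1.0 / p_rel))

ps = (0.3, 0.5, 0.7, 0.9)
rks = [kelvin_radius(p) for p in ps]
print([f"{r:.2e}" for r in rks])          # sub-nm to a few nm

# rigid sphere-plane limit: F ~ 4*pi*R*gamma*cos(theta), independent of p_rel
F = 4.0 * np.pi * R_colloid * gamma      # cos(theta) ~ 1 for a wetting liquid
print(F)
```

The measured decrease of the force with P/Psat is exactly what this rigid-surface limit cannot reproduce, which is why the extended model adds adsorption and substrate deformation.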
Abstract:
For the past sixty years, waveguide slot radiator arrays have played a critical role in microwave radar and communication systems. They feature a well-characterized antenna element capable of direct integration into a low-loss feed structure with highly developed and inexpensive manufacturing processes. Waveguide slot radiators comprise some of the highest-performance antenna arrays ever constructed, in terms of side-lobe level, efficiency, and related metrics. A wealth of information is available in the open literature regarding design procedures for linearly polarized waveguide slots. By contrast, despite their presence in some of the earliest published reports, little has been presented to date on array designs for circularly polarized (CP) waveguide slots; moreover, what has been presented features a classic traveling-wave, efficiency-reducing beam tilt. This work proposes a unique CP waveguide slot architecture which mitigates these problems, together with a thorough design procedure employing widely available, modern computational tools. The proposed array topology features simultaneous dual-CP operation with grating-lobe-free, broadside radiation, high aperture efficiency, and good return loss. A traditional X-slot CP element is employed with the inclusion of a slow-wave-structure passive phase shifter to ensure broadside radiation without the need for performance-limiting dielectric loading. It is anticipated that this technology will be advantageous for upcoming polarimetric radar and Ka-band SatCom systems. The presented design methodology represents a philosophical shift away from traditional waveguide slot radiator design practices: rather than providing design curves and/or analytical expressions for equivalent circuit models, simple first-order design rules, generated via parametric studies, are presented with the understanding that device optimization and design will be carried out computationally.
A unit-cell, S-parameter based approach provides a sufficient reduction of complexity to permit efficient, accurate device design with attention to realistic, application-specific mechanical tolerances. A transparent, start-to-finish example of the design procedure for a linear sub-array at X-band is presented. Both unit-cell and array performance are calculated via finite element method simulations, and the results are confirmed by good agreement with finite-difference time-domain calculations. Array performance exhibiting grating-lobe-free, broadside-scanned, dual-CP radiation with better than 20 dB return loss and over 75% aperture efficiency is presented.
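The effect of the per-element phase shifters can be seen in an idealized array-factor calculation: a traveling-wave feed imposes a progressive phase that tilts the beam, while equalized element phases restore broadside radiation. The element count, spacing and guide phase below are assumptions, not the designed X-band hardware:

```python
import numpy as np

N, d = 8, 0.7                      # elements and spacing in wavelengths (assumed)
k = 2.0 * np.pi                    # free-space wavenumber for unit wavelength
theta = np.radians(np.linspace(-90.0, 90.0, 1801))

def peak_angle(elem_phase):
    n = np.arange(N)[:, None]
    af = np.abs(np.exp(1j * (k * d * n * np.sin(theta) + elem_phase[:, None])).sum(axis=0))
    return float(np.degrees(theta[int(np.argmax(af))]))

beta_d = 0.6 * 2.0 * np.pi                       # guide phase per spacing (assumed)
tilt = peak_angle(-beta_d * np.arange(N))        # uncompensated traveling-wave feed
broadside = peak_angle(np.zeros(N))              # phase shifters equalize the elements
print(tilt, broadside)   # tilted beam vs. a beam at (nearly) 0 degrees
```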
Abstract:
Direction-of-arrival (DOA) estimation is susceptible to errors introduced by the presence of real ground and resonant-size scatterers in the vicinity of the antenna array. To compensate for these errors, pre-calibration and auto-calibration techniques are presented. The effects of real-ground constituent parameters on the mutual coupling (MC) of wire-type antenna arrays for DOA estimation are investigated. This is accomplished by pre-calibration of the antenna array over the real ground using the finite element method (FEM). The mutual impedance matrix is pre-estimated and used to remove the perturbations in the received terminal voltage; the unperturbed terminal voltage is incorporated in the MUSIC algorithm to estimate DOAs. First, the MC of quarter-wave monopole antenna arrays is investigated. This work augments an existing MC compensation technique for ground-based antennas and finds reduced MC for antennas over finite ground as compared to a perfect ground: a factor of 4 decrease in both the real and imaginary parts of the MC is observed when considering a poor ground versus a perfectly conducting one for quarter-wave monopoles in the receiving mode. A simulated result showing the compensation of errors in direction-of-arrival (DOA) estimation with an actual realization of the environment is also presented. Secondly, the effects on received MC of λ/2 dipole arrays placed near real earth are investigated. As a rule of thumb, the estimation of mutual coupling can be divided into two regions of antenna height, that is, very near ground 0
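The MUSIC step itself is compact. The sketch below runs it on an ideal uniform linear array with one source and no mutual coupling, so the FEM-based impedance compensation developed in this work is deliberately left out:

```python
import numpy as np

rng = np.random.default_rng(1)
M, d, snaps = 8, 0.5, 200                     # sensors, spacing (wavelengths), snapshots
true_deg = 20.0

def steer(deg):
    return np.exp(1j * 2 * np.pi * d * np.arange(M) * np.sin(np.radians(deg)))

s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = 0.1 * (rng.standard_normal((M, snaps)) + 1j * rng.standard_normal((M, snaps)))
X = np.outer(steer(true_deg), s) + noise
Rxx = X @ X.conj().T / snaps                  # sample covariance
w, V = np.linalg.eigh(Rxx)                    # eigenvalues in ascending order
En = V[:, :-1]                                # noise subspace for one source
grid = np.linspace(-90.0, 90.0, 1801)
P = [1.0 / np.linalg.norm(En.conj().T @ steer(g)) ** 2 for g in grid]
est = float(grid[int(np.argmax(P))])
print(est)   # close to the true 20 degrees
```

In the thesis's setting, the perturbed terminal voltages would be corrected with the pre-estimated impedance matrix before forming the covariance.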
Abstract:
For half a century the integrated circuits (ICs) that make up the heart of electronic devices have been steadily improving by shrinking at an exponential rate. However, as the current crop of ICs get smaller and the insulating layers involved become thinner, electrons leak through due to quantum mechanical tunneling. This is one of several issues which will bring an end to this incredible streak of exponential improvement of this type of transistor device, after which future improvements will have to come from fundamentally different transistor architectures rather than fine-tuning and miniaturizing the metal-oxide-semiconductor field effect transistors (MOSFETs) in use today. Several new transistor designs, some designed and built here at Michigan Tech, involve electrons tunneling their way through arrays of nanoparticles. We use a multi-scale approach to model these devices and study their behavior. For investigating the tunneling characteristics of the individual junctions, we use a first-principles approach to model conduction between sub-nanometer gold particles. To estimate the change in energy due to the movement of individual electrons, we use the finite element method to calculate electrostatic capacitances. The kinetic Monte Carlo method allows us to use our knowledge of these details to simulate the dynamics of an entire device, sometimes consisting of hundreds of individual particles, and watch as a device ‘turns on’ and starts conducting an electric current. Scanning tunneling microscopy (STM) and the closely related scanning tunneling spectroscopy (STS) are a family of powerful experimental techniques that allow for the probing and imaging of surfaces and molecules at atomic resolution. However, interpretation of the results often requires comparison with theoretical and computational models. We have developed a new method for calculating STM topographs and STS spectra.
This method combines an established method for approximating the geometric variation of the electronic density of states, with a modern method for calculating spin-dependent tunneling currents, offering a unique balance between accuracy and accessibility.
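The kinetic Monte Carlo engine reduces to: draw an exponential waiting time from the current total rate, execute an event, repeat. The two-rate toy below (rates are arbitrary) recovers the stationary occupation of a single island, the kind of statistic a device-level simulation accumulates over hundreds of particles:

```python
import numpy as np

rng = np.random.default_rng(2)
k_on, k_off = 3.0, 1.0                 # electron hop-on / hop-off rates (assumed)
occupied, t, t_occ = False, 0.0, 0.0
while t < 10_000.0:
    rate = k_off if occupied else k_on
    dt = rng.exponential(1.0 / rate)   # waiting time until the next event
    if occupied:
        t_occ += dt
    t += dt
    occupied = not occupied            # execute the event
frac = t_occ / t
print(frac)   # near k_on / (k_on + k_off) = 0.75
```

In the real device model the rates would come from the first-principles junction calculations and the FEM capacitances, and the event list would span every junction in the particle array.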
Abstract:
Many methodologies dealing with the prediction or simulation of soft tissue deformations on medical image data require preprocessing of the data in order to produce a different shape representation that complies with standard methodologies, such as mass-spring networks or the finite element method (FEM). On the other hand, methodologies working directly in the image space normally do not take into account the mechanical behavior of tissues and tend to lack the physical foundations driving soft tissue deformations. This chapter presents a method to simulate soft tissue deformations based on coupled concepts from image analysis and mechanics theory. The proposed methodology is based on a robust stochastic approach that takes into account material properties retrieved directly from the image, concepts from continuum mechanics and FEM. The optimization framework is solved within a hierarchical Markov random field (HMRF), which is implemented on the graphics processing unit (GPU).
Abstract:
The attainable conveying speed of vibratory conveyors depends largely on the motion function of the conveying organ. For targeted simulation of such machines with the discrete element method (DEM), the meshed models of the conveying organ must be driven with motion functions relevant to practice. This article describes the integration of such motion functions into the open-source DEM software LIGGGHTS: during the simulation, the motion of meshed CAD models is prescribed through trigonometric series.
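Translated into code, a drive motion prescribed as a truncated trigonometric series, as described for the LIGGGHTS mesh motion, might be evaluated like this (base frequency and coefficients are assumptions, not values from the article):

```python
import numpy as np

f0 = 50.0                                      # base frequency [Hz] (assumed)
# (amplitude [m], harmonic, phase) terms of the series (assumed values)
terms = [(1.0e-3, 1, 0.0), (0.2e-3, 2, np.pi / 2), (0.05e-3, 3, 0.0)]

def displacement(t):
    w = 2.0 * np.pi * f0
    return sum(A * np.sin(n * w * t + phi) for A, n, phi in terms)

t = np.linspace(0.0, 1.0 / f0, 1001)           # one base period
x = displacement(t)
print(x.min(), x.max())
```

The asymmetry introduced by the higher harmonics and phases is what lets a vibratory conveyor transport material in a preferred direction.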