899 results for Modeling. Simulation. Finite Differences Method
Simulation of the double-wishbone (double A-arm) suspension of an off-road vehicle using the ground excitation history
Abstract:
The search for mechanical component validation methods employed in the product development sector increasingly demands less expensive solutions. As a result, programs that can simulate the forces acting on a given part through the finite element method are gaining space in the market, since this process consumes less capital than the empirical validation currently employed. This article presents the simulation of an off-road prototype suspension using this technique, driven by a ground excitation history obtained from field measurements and using a specific tool to extract dynamic loads from the model in question. The results shown at the end are key for future enhancements, such as mass reduction, that may be carried out on the prototype suspension system discussed here.
Abstract:
Purpose - The purpose of this paper is to develop an efficient numerical algorithm for the self-consistent solution of the Schrödinger and Poisson equations in one-dimensional systems. The goal is to compute the charge-control and capacitance-voltage characteristics of quantum wire transistors. Design/methodology/approach - The paper presents a numerical formulation employing a non-uniform finite difference discretization scheme, in which the wavefunctions and electronic energy levels are obtained by solving the Schrödinger equation through the split-operator method, while a relaxation method in the FTCS ("Forward Time Centered Space") scheme is used to solve the two-dimensional Poisson equation. Findings - The numerical model is validated by taking previously published results as a benchmark and is then applied to yield the charge-control characteristics and the capacitance-voltage relationship for a split-gate quantum wire device. Originality/value - The paper helps to fulfill the need for C-V models of quantum wire devices. To do so, the authors implemented a straightforward calculation method for the two-dimensional electronic carrier density n(x,y). The formulation reduces the computational procedure to a much simpler problem, similar to the one-dimensional quantization case, significantly diminishing running time.
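As an illustration of the relaxation idea used for the Poisson part, the sketch below solves a 2D Poisson equation by FTCS-style pseudo-time relaxation on a uniform grid with zero Dirichlet boundaries. This is a simplified sketch only: the paper itself uses a non-uniform mesh and couples the result self-consistently with the split-operator Schrödinger solver.

    import numpy as np

    def relax_poisson_2d(rho, h, alpha=0.25, tol=1e-6, max_iter=20000):
        """Pseudo-time (FTCS-style) relaxation for laplacian(V) = -rho on a
        uniform grid with spacing h and V = 0 on the boundary.
        alpha = dt/h**2 must satisfy alpha <= 0.25 for stability;
        alpha = 0.25 reproduces a Jacobi sweep."""
        V = np.zeros_like(rho, dtype=float)
        for _ in range(max_iter):
            # discrete Laplacian of the interior points (times h**2)
            lap = (V[2:, 1:-1] + V[:-2, 1:-1] + V[1:-1, 2:] + V[1:-1, :-2]
                   - 4.0 * V[1:-1, 1:-1])
            update = alpha * (lap + h**2 * rho[1:-1, 1:-1])
            V[1:-1, 1:-1] += update
            if np.max(np.abs(update)) < tol:
                break
        return V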
Abstract:
Intravascular ultrasound (IVUS) phantoms are important to calibrate and evaluate many IVUS image processing tasks. However, phantom generation is never the primary focus of related works; hence, it is rarely covered in detail and is usually based on more than one platform, which may not be accessible to investigators. Therefore, we present a framework for creating representative IVUS phantoms, for different intraluminal pressures, based on the finite element method and Field II. First, a coronary cross-section model is selected. Second, the coronary regions are identified to apply the properties. Third, the corresponding mesh is generated. Fourth, the intraluminal force is applied and the deformation computed. Finally, the speckle noise is incorporated. The framework was tested taking into account IVUS contrast, noise and strains. The outcomes are in line with related studies and expected values. Moreover, the framework toolbox is freely accessible and fully implemented in a single platform.
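As a hedged illustration of the final step (speckle incorporation), the snippet below applies a simple multiplicative Rayleigh speckle model to an envelope image; this is a generic textbook model for fully developed speckle, not the Field II simulation that the framework itself uses.

    import numpy as np

    def add_rayleigh_speckle(envelope, scale=1.0, seed=0):
        """Multiplicative Rayleigh speckle applied to an envelope image
        (a common simple model of fully developed ultrasound speckle)."""
        rng = np.random.default_rng(seed)
        noise = rng.rayleigh(scale, size=envelope.shape)
        return envelope * noise

    # usage on a uniform test image:
    speckled = add_rayleigh_speckle(np.ones((256, 256)))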
Abstract:
This work describes a methodology to simulate free-surface incompressible multiphase flows. This novel methodology allows the simulation of multiphase flows with an arbitrary number of phases, each of them having different densities and viscosities. Surface and interfacial tension effects are also included. The numerical technique is based on the GENSMAC front-tracking method. The velocity field is computed using a finite-difference discretization of a modification of the Navier-Stokes equations. These equations, together with the continuity equation, are solved for two-dimensional multiphase flows with different densities and viscosities in the different phases. The governing equations are solved on a regular Eulerian grid, and a Lagrangian mesh is employed to track free surfaces and interfaces. The method is validated by comparing numerical with analytic results for a number of simple problems; it was also employed to simulate complex problems for which no analytic solutions are available. The method presented in this paper has been shown to be robust and computationally efficient.
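A minimal sketch of the front-tracking ingredient (Lagrangian interface markers advected through velocities interpolated from the regular Eulerian grid) is given below. It assumes velocities stored at cell corners and a simple explicit Euler step, and is not the GENSMAC implementation itself.

    import numpy as np

    def advect_markers(xm, ym, u, v, dx, dy, dt):
        """Advance Lagrangian markers (xm, ym) one explicit Euler step using
        bilinear interpolation of the Eulerian velocity fields u, v, both
        stored at the corners of a regular grid with spacing dx, dy."""
        def interp(f, x, y):
            i = np.clip((x / dx).astype(int), 0, f.shape[0] - 2)
            j = np.clip((y / dy).astype(int), 0, f.shape[1] - 2)
            s = x / dx - i            # local coordinates inside the cell
            t = y / dy - j
            return ((1 - s) * (1 - t) * f[i, j] + s * (1 - t) * f[i + 1, j]
                    + (1 - s) * t * f[i, j + 1] + s * t * f[i + 1, j + 1])
        return xm + dt * interp(u, xm, ym), ym + dt * interp(v, xm, ym)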
Abstract:
The treatment of a transverse maxillary deficiency in skeletally mature individuals should include surgically assisted rapid palatal expansion. This study used finite element analysis to evaluate the distribution of stresses affecting the expander's anchor teeth when the osteotomy is varied. Five virtual models were built and the surgically assisted rapid palatal expansion was simulated. Results showed tension on the lingual face of the teeth and alveolar bone, and compression on the buccal side of the alveolar bone. The subtotal Le Fort I osteotomy combined with intermaxillary suture osteotomy seemed to reduce the dissipation of tensions. Therefore, subtotal Le Fort I osteotomy without a step in the zygomaticomaxillary buttress, combined with intermaxillary suture osteotomy and pterygomaxillary disjunction, may be the osteotomy of choice to reduce tensions on anchor teeth, which tend to move mesiobuccally (premolar) and distobuccally (molar).
Abstract:
The use of numerical simulation in the design and evaluation of product performance is ever increasing. Increasingly, such estimates are needed at an early design stage, when physical prototypes are not available. When dealing with vibro-acoustic models, known to be computationally expensive, a question remains about the accuracy of such models in view of the well-known variability inherent to mass-production manufacturing techniques. In addition, both academia and industry have recently realized the importance of actually listening to a product's sound, either through measurements or virtual sound synthesis, in order to assess its performance. In this work, the scatter in loudness metrics caused by significant parameter variations of a simplified vehicle vibro-acoustic model is calculated using Monte Carlo analysis. The mapping from the system parameters to the sound quality metric is performed by a fully coupled vibro-acoustic finite element model. Different loudness metrics are used, including overall sound pressure level expressed in dB and Specific Loudness in sones. Sound quality equivalent sources are used to excite this model, and the sound pressure level at the driver's head position is acquired and evaluated according to the sound quality metrics. No significant variation is perceived when the system is evaluated using regular sound pressure level expressed in dB and dB(A). This happens because the third-octave filters average the results within certain frequency bands. On the other hand, Zwicker Loudness presents important variations, arguably due to masking effects.
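The Monte Carlo loop itself is straightforward; the sketch below samples the parameters around their nominal values and collects the resulting metric values. Here metric_fn is a placeholder standing in for the full vibro-acoustic finite element model plus loudness post-processing, so this is only a simplified sketch of the sampling strategy, not the model used in the study.

    import numpy as np

    def monte_carlo_scatter(metric_fn, nominal, rel_std, n_samples=1000, seed=1):
        """Sample model parameters around their nominal values (independent
        Gaussian scatter with relative standard deviation rel_std) and return
        the mean and standard deviation of the resulting metric values."""
        rng = np.random.default_rng(seed)
        nominal = np.asarray(nominal, dtype=float)
        samples = nominal * (1.0 + rel_std * rng.standard_normal((n_samples, nominal.size)))
        values = np.array([metric_fn(p) for p in samples])
        return values.mean(), values.std()

    # usage with a toy surrogate standing in for the FE model:
    mean, std = monte_carlo_scatter(lambda p: 20 * np.log10(np.abs(p).sum()),
                                    nominal=[1.0, 2.0, 0.5], rel_std=0.05)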
Abstract:
This work addresses the treatment of lower-density regions of structures undergoing large deformations during the design process by the topology optimization method (TOM) based on the finite element method. During the design process the nonlinear elastic behavior of the structure is based on exact kinematics. The material model applied in the TOM is based on the solid isotropic microstructure with penalization (SIMP) approach. No void elements are deleted, and all internal forces of the nodes surrounding the void elements are considered during the nonlinear equilibrium solution. The distribution of design variables is solved through the method of moving asymptotes, in which the sensitivity of the objective function is obtained directly. In addition, a continuation function and a nonlinear projection function are invoked to obtain a checkerboard-free and mesh-independent design. 2D examples under both plane strain and plane stress hypotheses are presented and compared. The problem of instability is overcome by adopting a polyconvex constitutive model in conjunction with a suggested relaxation function to stabilize excessively distorted elements. The exact tangent stiffness matrix is used. The optimal topology results are compared to the results obtained by using the classical Saint Venant–Kirchhoff constitutive law, and strong differences are found.
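For reference, the SIMP interpolation mentioned above ties each element's density design variable to its effective stiffness; a commonly used form (the exact variant adopted in the paper may differ; E_min is a small positive lower bound kept for numerical stability and p > 1 is the penalization exponent) is

    E_e(\rho_e) = E_{\min} + \rho_e^{p}\,(E_0 - E_{\min}), \qquad 0 \le \rho_e \le 1,

so that intermediate densities are penalized and the optimizer is driven towards near 0/1 designs.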
Abstract:
The most important property of austenitic stainless steels is corrosion resistance. In these steels, the transition between paramagnetic and ferromagnetic conditions occurs at low temperatures. Therefore, austenitic stainless steels can be considered for applications in which the absence of ferromagnetism is important. On the other hand, the formation of strain-induced martensite is detected when austenitic stainless steels are deformed as well as machined. The strain-induced martensite formed especially in the machining process is not uniform through the chip, and its formation can also be related to the Md temperature. Therefore, both the temperature distribution and the temperature gradient during cutting and chip formation are important to identify regions in which martensite formation is favored. The main objective here is to evaluate strain-induced martensite formation throughout machining by observing microstructural features and comparing these to thermal results obtained through finite element method analysis. Results show that the thermal analysis can support the identification of martensite in the microstructural analysis.
Abstract:
This work reports on two different projects that were carried out during the three years of the Doctor of Philosophy course. In the first year, a project on a Capacitive Pressure Sensors Array for Aerodynamic Applications was developed in the Applied Aerodynamics research team of the Second Faculty of Engineering, University of Bologna, Forlì, Italy, in collaboration with the ARCES laboratories of the same university. Capacitive pressure sensors were designed and fabricated, and their mechanical and electrical behaviour was investigated theoretically and experimentally by means of finite element method simulations and wind tunnel tests. During the design phase, the sensor figures of merit were considered and evaluated for specific aerodynamic applications. The aim of this work is the production of low-cost MEMS-alternative devices suitable for a sensor network to be implemented in an air data system. The last two years were dedicated to a project on a Wireless Pressure Sensor Network for Nautical Applications. The aim of the developed sensor network is to sense the weak pressure field acting on the sail plan of a full-batten sail by means of instrumented battens, providing a real-time differential pressure map over the entire sail surface. The wireless sensor network and the sensing unit were designed, fabricated and tested in the faculty laboratories. A static nonlinear coupled mechanical-electrostatic simulation was developed to predict the pressure-versus-capacitance static characteristic suitable for the transduction process and to tune the geometry of the transducer to reach the required resolution, sensitivity and time response over the appropriate full-scale pressure input. A time-dependent viscoelastic error model was inferred and developed from experimental data in order to model, predict and reduce the inaccuracy bound due to the viscoelastic phenomena affecting the Mylar® polyester film used for the sensor diaphragm. The two subjects mentioned above are strictly related but are presented separately in this work.
Abstract:
In previous works, many authors have widely used mass-consistent models for wind field simulation by the finite element method. On the one hand, we have developed a 3-D mass-consistent model using tetrahedral meshes that are simultaneously adapted to the complex orography and to the terrain roughness length. In addition, we have included a local refinement strategy around several measurement or control points, significant contours (for example, shorelines), or numerical solution singularities. On the other hand, we have developed a 2.5-D model for simulating the wind velocity in a 3-D domain in terms of the terrain elevation, the surface temperature and the meteorological wind, which is considered as an averaged wind on the vertical boundaries...
Abstract:
This master's thesis describes the research done at the Medical Technology Laboratory (LTM) of the Rizzoli Orthopedic Institute (IOR, Bologna, Italy) from October 2012 to the present, which focused on the characterization of the elastic properties of trabecular bone tissue. The approach uses computed microtomography to characterize the architecture of trabecular bone specimens. With the information obtained from the scanner, specimen-specific models of trabecular bone are generated and solved with the Finite Element Method (FEM). Along with the FEM modelling, mechanical tests are performed on the same reconstructed bone portions. From the linear-elastic stage of the mechanical tests, it is possible to estimate the mechanical properties of the trabecular bone tissue. After a brief introduction on the biomechanics of trabecular bone (chapter 1) and on the characterization of the mechanics of its tissue using FEM models (chapter 2), the reliability analysis of an experimental procedure, based on the highly scalable numerical solver ParFE, is explained (chapter 3). In chapter 4, sensitivity analyses on two different parameters of the micro-FEM model reconstruction are presented. Once the reliability of the modeling strategy has been shown, a recent layout for experimental tests, developed at the LTM, is presented (chapter 5). Moreover, the results of the application of the new layout are discussed, with emphasis on the difficulties connected to it that were observed during the tests. Finally, a prototype experimental layout for the measurement of deformations in trabecular bone specimens is presented (chapter 6). This procedure is based on the Digital Image Correlation method and is currently under development at the LTM.
Abstract:
We have developed a method for locating sources of volcanic tremor and applied it to a dataset recorded on Stromboli volcano before and after the onset of the February 27th, 2007 effusive eruption. Volcanic tremor has attracted considerable attention from seismologists because of its potential value as a tool for forecasting eruptions and for better understanding the physical processes that occur inside active volcanoes. Commonly used methods to locate volcanic tremor sources are: 1) array techniques, 2) semblance-based methods, 3) calculation of wave field amplitudes. We have chosen the third approach, using quantitative modeling of the seismic wavefield. For this purpose, we have calculated the Green's functions (GF) in the frequency domain with the Finite Element Method (FEM). We have used this method because it is well suited to solving elliptic problems, such as elastodynamics in the Fourier domain. The volcanic tremor source is located by determining the source function over a regular grid of points, and the best-fit point is chosen as the tremor source location. The source inversion is performed in the frequency domain, using only the wavefield amplitudes. We illustrate the method and its validation on a synthetic dataset, and we show some preliminary results on the Stromboli dataset, evidencing temporal variations of the volcanic tremor sources.
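A minimal sketch of the amplitude-based grid search is given below; it assumes that the Green's function amplitudes from every candidate grid point to every station have already been computed (with the FEM or otherwise), and the variable names are illustrative only, not part of the authors' code.

    import numpy as np

    def locate_tremor(observed_amp, green_amp):
        """Amplitude-based grid search: for each candidate source point, fit a
        least-squares source strength that maps the precomputed Green's
        function amplitudes green_amp[:, s] (stations x grid points) onto the
        observed station amplitudes, and keep the point with minimum misfit."""
        best_point, best_misfit = -1, np.inf
        for s in range(green_amp.shape[1]):
            g = green_amp[:, s]
            strength = np.dot(g, observed_amp) / np.dot(g, g)   # least-squares source strength
            misfit = np.linalg.norm(observed_amp - strength * g)
            if misfit < best_misfit:
                best_point, best_misfit = s, misfit
        return best_point, best_misfit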
Abstract:
The assessment of safety in existing bridges and viaducts led the Ministry of Public Works of the Netherlands to finance a specific campaign aimed at studying the response of the elements of these infrastructures. This activity therefore focuses on the investigation of the behaviour of reinforced concrete slabs under concentrated loads, adopting finite element modeling and comparison with experimental results. These elements are characterized by shear behaviour and shear failure, whose modeling is, from a computational point of view, a hard challenge, due to the brittle behaviour combined with three-dimensional effects. The numerical modeling of the failure is studied through Sequentially Linear Analysis (SLA), an alternative finite element method with respect to traditional incremental and iterative approaches. The comparison between the two numerical techniques represents one of the first such works and comparisons in a three-dimensional setting, and it is carried out using one of the experimental tests performed on reinforced concrete slabs. The advantage of SLA is that it avoids the well-known convergence problems of typical nonlinear analyses by directly specifying a damage increment, in terms of a reduction of stiffness and strength in a particular finite element, instead of increasing the load or displacement on the whole structure. For the first time, particular attention has been paid to specific aspects of the slabs, such as accurate modeling of the constraints and the sensitivity of the solution with respect to the mesh density. This detailed analysis of the main parameters showed a strong influence of the tensile fracture energy, the mesh density and the chosen model on the solution in terms of force-displacement diagrams, distribution of crack patterns and shear failure mode. SLA showed great potential, but it requires further development regarding two aspects of modeling: load conditions (constant and proportional loads) and the softening behaviour of brittle materials (such as concrete) in the three-dimensional setting, in order to widen its horizons in these new contexts of study.
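A toy sketch of the sequentially linear idea follows: bars acting in parallel share a common displacement, each cycle runs a linear analysis under a unit load, scales the load so that the most critical bar just reaches its current strength, and then applies a saw-tooth reduction to that bar's stiffness and strength. The reduction factors are illustrative; in SLA proper they follow the adopted saw-tooth softening diagram, and this is in no way the three-dimensional concrete model discussed above.

    import numpy as np

    def sla_parallel_bars(k, f_t, stiff_factor=0.5, strength_factor=0.8, n_steps=20):
        """Toy sequentially linear analysis on bars loaded in parallel.
        Returns a list of (load, displacement) points of the response curve."""
        k, f_t = np.array(k, float), np.array(f_t, float)
        history = []
        for _ in range(n_steps):
            K = k.sum()
            if K <= 0.0:
                break
            u_unit = 1.0 / K                       # shared displacement under unit load
            ratios = f_t / (k * u_unit)            # load factor at which each bar would fail
            crit = int(np.argmin(ratios))
            lam = ratios[crit]                     # critical load multiplier of this cycle
            history.append((lam, lam * u_unit))    # one point of the load-displacement curve
            k[crit] *= stiff_factor                # saw-tooth stiffness reduction
            f_t[crit] *= strength_factor           # reduced strength on the saw-tooth diagram
        return history

    # usage: curve = sla_parallel_bars(k=[10.0, 8.0, 6.0], f_t=[2.0, 2.5, 1.5])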
Abstract:
Plasmons are electromagnetic modes in metallic structures in which the quasi-free electrons of the metal oscillate collectively. Over the last decade, the field of plasmonics has developed rapidly, driven by advances in nanostructuring and spectroscopic methods that made systematic single-object studies of well-defined nanostructures possible. Besides a radiative enhancement of the optical scattering intensity in the far field, the excitation of plasmons results in a non-radiative enhancement of the field strength in the immediate vicinity of the metallic structure (near field), caused by the coherent accumulation of charge at the metal surface. The optical near field is therefore a key quantity for the fundamental understanding of the action and interaction of plasmons as well as for the optimization of plasmon-based applications. The great challenge lies in the difficulty of experimental access to the near field, which has hindered the development of a fundamental understanding of it.

In this work, photoemission electron microscopy (PEEM) and microspectroscopy were used to determine, with spatial resolution, the properties of near-field-induced electron emission. The electrodynamic properties of the investigated systems were also determined by numerical calculations based on the finite integration method and compared with the experimental results.

Ag discs with a diameter of 1 µm and a height of 50 nm were excited with fs laser radiation at a wavelength of 400 nm under different polarization states. The lateral distribution of the electrons emitted via a 2PPE process was recorded with the PEEM. Comparison with the numerical calculations shows that the near field develops differently at different locations on the metallic structure. In particular, at the edge of the disc a near field with a finite z-component is induced under s-polarized excitation (vanishing vertical component of the electric field), whereas at the center of the disc the near field is always proportional to the incident electric field.

Furthermore, for the first time the near field of optically excited, strongly coupled plasmons was investigated spectrally (750-850 nm) and compared with the corresponding far-field spectra of identical nano-objects. This was done by measuring the spectral scattering characteristics of the individual objects with a dark-field confocal microscope. Au nanoparticles at sub-nanometer distance from an Au film (nanoparticle on plane, NPOP) served as the model system for strongly coupled plasmons. With this combination of complementary methods, the spectral separation of radiative and non-radiative modes of strongly coupled plasmons was demonstrated for the first time. This is highly relevant for applications, since pure near-field modes have a long lifetime owing to their suppressed radiative decay, so that their enhancement effect can be exploited for a particularly long time. The causes of the differences in the spectral behaviour of the far and near fields were identified by numerical calculations. They showed that the near field of non-spherical NPOPs is strongly position-dependent due to the complex oscillatory motion of the electrons within the gap between particle and film. In addition, the near field of strongly coupled plasmons reacts much more sensitively to structural defects of the resonator than the far-field response. Furthermore, the electron emission mechanism was identified as an optical field-emission process. To describe this process, the Fowler-Nordheim theory of static field emission was modified for the case of harmonically oscillating fields.
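For context, the static Fowler-Nordheim relation that serves as the starting point (quoted here in its standard textbook form; the thesis modifies it for harmonically oscillating fields) gives the emitted current density J as a function of the local field strength F and the work function \phi:

    J(F) = \frac{A\,F^{2}}{\phi}\,\exp\!\left(-\frac{B\,\phi^{3/2}}{F}\right),

where A and B are the usual Fowler-Nordheim constants.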
Abstract:
In this work, a new dynamical core is developed and integrated into the existing numerical weather prediction system COSMO. Discontinuous Galerkin (DG) methods are used for the spatial discretization and Runge-Kutta methods for the temporal discretization. This makes a high-order scheme easy to realize and provides local conservation of the prognostic variables. The dynamical core developed here uses terrain-following coordinates in conservation form for the orography modelling and couples the DG method with a Kessler scheme for warm rain. The fall velocity of the rain is not discretized implicitly within the Kessler scheme, as is usual, but explicitly in the dynamical core. As a result, the time steps of the parameterization for the phase changes of water and of the dynamics are completely decoupled, so that very large time steps can be used for the parameterization. The coupling is realized both for operator splitting and for process splitting.

The convergence and the global conservation properties of the newly developed dynamical core are validated using idealized test cases. Mass is conserved globally up to machine precision. The orography modelling is validated with flow-over-mountain cases. The combination of DG methods and terrain-following coordinates used here allows steeper mountains to be treated than is possible with the finite-difference-based dynamical core of COSMO. It is shown when the full tensor-product basis and when the minimal basis is advantageous. The magnitude of the influence of the order of the scheme, the parameterization time step and the splitting strategy on the simulation result is investigated. Finally, it is shown that, for the same time step, the DG methods are competitive with finite-difference methods in runtime owing to their better scalability.
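A minimal sketch of the decoupled time stepping in the operator-splitting variant is shown below; dynamics_step and microphysics_step are placeholders for the DG/Runge-Kutta solver and the Kessler scheme, and the process-splitting variant mentioned above would instead add the parameterization tendencies to the dynamics rather than applying them sequentially.

    def run_split(state, dynamics_step, microphysics_step, dt_dyn, dt_phys, t_end):
        """Sequential (first-order) operator splitting with decoupled time steps:
        the dynamics is advanced with many small steps dt_dyn, the moisture
        parameterization with one large step dt_phys."""
        n_outer = int(round(t_end / dt_phys))
        n_inner = int(round(dt_phys / dt_dyn))
        for _ in range(n_outer):
            for _ in range(n_inner):
                state = dynamics_step(state, dt_dyn)   # small explicit DG/RK steps
            state = microphysics_step(state, dt_phys)  # one large parameterization step
        return state

    # toy usage with trivial stand-ins for the two solvers:
    final = run_split(1.0, lambda s, dt: s + 0.0 * dt, lambda s, dt: s,
                      dt_dyn=1.0, dt_phys=60.0, t_end=3600.0)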