308 results for Discretization
Abstract:
This thesis describes modelling tools and methods suited for complex systems (systems that are typically represented by a plurality of models). The basic idea is that all models representing the system should be linked by well-defined model operations in order to build a structured repository of information, a hierarchy of models. The port-Hamiltonian framework is a good candidate for this kind of problem, as it natively supports the most important model operations. The thesis in particular addresses the problem of integrating distributed-parameter systems into a model hierarchy, and shows two possible mechanisms for doing so: a finite-element discretization in port-Hamiltonian form, and a structure-preserving model order reduction for discretized models obtainable from commercial finite-element packages.
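For orientation, and as a generic recap rather than the thesis' own notation, an input-state-output port-Hamiltonian system is usually written as

\[
\dot{x} = \big(J(x) - R(x)\big)\,\nabla H(x) + B(x)\,u, \qquad
y = B(x)^{\top}\,\nabla H(x),
\]

with a skew-symmetric interconnection matrix J, a positive semi-definite dissipation matrix R and the Hamiltonian H representing the stored energy. Power-preserving interconnection of two such systems through their ports (u, y) again yields a port-Hamiltonian system, which is the structural property that makes the framework attractive for composing and linking models in a hierarchy.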
Abstract:
Stress recovery techniques have been an active research topic in the last few years, since in 1987 Zienkiewicz and Zhu proposed a procedure called Superconvergent Patch Recovery (SPR). This procedure is a least-squares fit of stresses at superconvergent points over patches of elements, and it leads to enhanced stress fields that can be used for evaluating finite element discretization errors. In subsequent years, numerous improved forms of this procedure have been proposed, attempting to add equilibrium constraints to improve its performance. Later, another superconvergent technique, called Recovery by Equilibrium in Patches (REP), was proposed. In this case the idea is to impose equilibrium in a weak form over patches and solve the resulting equations by a least-squares scheme. In recent years another procedure, based on the minimization of complementary energy and called Recovery by Compatibility in Patches (RCP), has been proposed. This procedure can in many ways be seen as the dual form of REP, as it essentially imposes compatibility in a weak form among a set of self-equilibrated stress fields. In this thesis a new insight into RCP is presented and the procedure is improved, aiming at obtaining convergent second-order derivatives of the stress resultants. In order to achieve this result, two different strategies and their combination have been tested. The first is to consider larger patches, in the spirit of what was proposed in [4]; the second is to perform a second recovery on the recovered stresses. Some numerical tests in plane stress conditions are presented, showing the effectiveness of these procedures. Afterwards, a new recovery technique called Least Square Displacements (LSD) is introduced. This new procedure is based on a least-squares interpolation of the nodal displacements resulting from the finite element solution. In fact, it has been observed that the major part of the error affecting the stress resultants is introduced when the shape functions are differentiated in order to obtain strain components from displacements. The procedure proves to be ultraconvergent and is extremely cost effective, as it needs as input only the nodal displacements coming directly from the finite element solution, avoiding any other post-processing otherwise required to obtain the stress resultants by the traditional method. Numerical tests in plane stress conditions are then presented, showing that the procedure is ultraconvergent and leads to convergent first- and second-order derivatives of the stress resultants. Finally, the reconstruction of transverse stress profiles using First-order Shear Deformation Theory for laminated plates and the three-dimensional equilibrium equations is presented. It can be seen that the accuracy of this reconstruction depends on the accuracy of the first and second derivatives of the stress resultants, which is not guaranteed by most of the available low-order plate finite elements. The RCP and LSD procedures are then used to compute convergent first- and second-order derivatives of the stress resultants, ensuring convergence of the reconstructed transverse shear and normal stress profiles, respectively. Numerical tests are presented and discussed, showing the effectiveness of both procedures.
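To illustrate the least-squares patch fit that underlies SPR-type recovery (a generic sketch under simple assumptions, not the RCP or LSD implementation of the thesis), the following snippet fits a linear polynomial to stress values sampled at the superconvergent points of an element patch and evaluates the recovered stress at the patch node; all data in the example are hypothetical.

```python
import numpy as np

def recover_stress_at_node(points, sigma_samples, node_xy):
    """Least-squares fit of a linear polynomial p(x, y) = a0 + a1*x + a2*y
    to stress values sampled at the superconvergent points of an element
    patch, followed by evaluation of the recovered stress at the patch node."""
    P = np.column_stack([np.ones(len(points)), points[:, 0], points[:, 1]])
    coeffs, *_ = np.linalg.lstsq(P, sigma_samples, rcond=None)  # least squares
    return coeffs[0] + coeffs[1] * node_xy[0] + coeffs[2] * node_xy[1]

# Four Gauss-point samples around a node at the origin (hypothetical values).
pts = np.array([[-0.5, -0.5], [0.5, -0.5], [0.5, 0.5], [-0.5, 0.5]])
sig = np.array([1.0, 1.2, 1.4, 1.2])
print(recover_stress_at_node(pts, sig, np.array([0.0, 0.0])))
```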
Abstract:
Porous materials are widely used in many fields of industrial application to meet the noise-reduction requirements that nowadays derive from increasingly strict regulations. The modeling of porous materials is still a problematic issue. Numerical simulations are often problematic in the case of real, complex geometries, especially in terms of computational time and convergence. At the same time, analytical models, even if partly limited by restrictive simplifying hypotheses, represent a powerful instrument to quickly capture the physics of the problem and its general trends. In this context, a recently developed numerical method, the Cell Method, is described, implemented in the case of Biot's theory of poroelasticity and applied to representative cases. The peculiarity of the Cell Method is that it allows a direct algebraic and geometrical discretization of the field equations, without any reduction to a weak integral form. The second part of the thesis then presents the case of the interaction between two poroelastic materials in the context of double porosity. The idea of using periodically repeated inclusions of a second porous material inside a layer that is itself porous is described. In particular, the problem is addressed by studying the analytical method and its efficiency. An analytical procedure for the simulation of heterogeneous layers is described and validated, considering both absorption and transmission conditions; a comparison with the available numerical methods is performed.
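As a generic sketch of the algebraic structure usually associated with cell-method discretizations (not the specific Biot formulation developed in the thesis), the discrete field equations can be split into exact topological relations and approximate constitutive relations:

\[
\mathbf{g} = \mathbf{G}\,\mathbf{u}, \qquad
\tilde{\mathbf{D}}\,\tilde{\mathbf{f}} = \tilde{\mathbf{s}}, \qquad
\tilde{\mathbf{f}} = \mathbf{M}\,\mathbf{g},
\]

where u collects the global configuration variables attached to a primal cell complex, f̃ and s̃ the flux and source variables attached to the dual complex, G and D̃ are incidence (purely topological) matrices that are exact by construction, and the constitutive matrix M is the only place where material parameters, metric information and hence the approximation enter.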
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains more than active in trying to answer some of them. As a consequence, a huge number of papers are continuously being published and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first occurs when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general purpose techniques. The second occurs when mixed integer programming is used to address a somewhat structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special purpose techniques. This thesis tries to give some insights into both of the above-mentioned situations. The first part of the work is focused on general purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of the weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to possibly strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts, arising from lattice-free triangles, from the simplex tableau, together with some preliminary computational results.
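As a reminder of the objects studied in Chapter 3 (stated here in one common notation, not necessarily that of the thesis): given the LP relaxation $P = \{x \ge 0 : Ax \ge b\}$ and a split disjunction $(\pi^{\top}x \le \pi_0) \vee (\pi^{\top}x \ge \pi_0 + 1)$, an inequality $\alpha^{\top}x \ge \beta$ is valid for the disjunctive relaxation whenever there exist multipliers $u, v \ge 0$ and scalars $u_0, v_0 \ge 0$ such that

\[
\alpha \ge A^{\top}u - u_0\,\pi,\qquad
\alpha \ge A^{\top}v + v_0\,\pi,\qquad
\beta \le b^{\top}u - u_0\,\pi_0,\qquad
\beta \le b^{\top}v + v_0\,(\pi_0 + 1).
\]

The separation problem maximizes the violation $\beta - \alpha^{\top}x^*$ over these constraints; since the feasible set is a cone, a normalization condition (e.g., bounding the sum of the multipliers) must be added to truncate it, and the choice of this normalization is precisely what Chapter 3 relates to cut rank, density and strength.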
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution), in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general purpose MIP solver; a generic sketch of this paradigm is given at the end of this abstract. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven to be extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of the time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (in particular, the usage of general purpose cutting planes) can be useful to improve upon the branch-and-cut methods proposed in the literature.
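The destroy-and-repair paradigm of Chapters 5 and 6 can be sketched as follows (a generic skeleton under the assumption that problem-specific destroy, MIP-based repair and cost callables are supplied; it is not the thesis' actual VRP implementation).

```python
import random

def destroy_and_repair(initial_solution, cost, destroy, repair_with_mip,
                       iterations=100, destruction_rate=0.3, seed=0):
    """Generic destroy-and-repair local search skeleton.

    destroy(solution, rate, rng) -> partial solution with part of it removed
    repair_with_mip(partial)     -> complete solution obtained by (heuristically)
                                    solving a MIP over the induced neighborhood
    """
    rng = random.Random(seed)
    best = current = initial_solution
    for _ in range(iterations):
        partial = destroy(current, destruction_rate, rng)   # random destruction
        candidate = repair_with_mip(partial)                 # MIP-based repair
        if cost(candidate) < cost(current):                  # accept improvements
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best
```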
Abstract:
Wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to ensure high numerical efficiency (so that it can be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. The contact models can be subdivided into two different categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations, and the contact points are detected during the dynamic simulation by solving the nonlinear differential-algebraic equations associated with the constrained multibody system. Indentation between the bodies is not permitted and the normal contact forces are calculated through the Lagrange multipliers. Finally, Hertz's and Kalker's theories are used to evaluate the shape of the contact patch and the tangential forces, respectively. The semi-elastic approach also considers the wheel and the rail as rigid bodies. However, in this case no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry). The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories provide the shape of the contact patch and the tangential forces. Both of the described multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed. In other words, wheel and rail have to be considered elastic bodies governed by Navier's equations, and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case. Many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach ensures high generality and accuracy, but entails very large computational costs and memory consumption. Due to this computational load and memory consumption, in the current state of the art the integration between multibody and differential modeling is almost absent from the literature, especially in the railway field.
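As a minimal illustration of the penalty-type normal force law used in semi-elastic approaches (here in the simplest Hertzian point-contact form with a single equivalent curvature radius; the actual wheel-rail elliptical contact requires the full Hertz coefficients, and all numbers below are hypothetical):

```python
import math

def hertz_normal_force(delta, E1, nu1, E2, nu2, R_eff):
    """Semi-elastic normal contact force as a function of the indentation delta,
    F = (4/3) * E_star * sqrt(R_eff) * delta**1.5 (Hertz point contact)."""
    if delta <= 0.0:
        return 0.0  # bodies not in contact
    E_star = 1.0 / ((1.0 - nu1**2) / E1 + (1.0 - nu2**2) / E2)
    return (4.0 / 3.0) * E_star * math.sqrt(R_eff) * delta**1.5

# Hypothetical steel wheel/rail data: E = 210 GPa, nu = 0.3, R_eff = 0.23 m.
print(hertz_normal_force(1.0e-4, 210e9, 0.3, 210e9, 0.3, 0.23))
```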
However, this integration is very important because only differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. In this thesis some innovative wheel-rail contact models developed during the Ph.D. activity will be described. Concerning the global models, two new models belonging to the semi-elastic approach will be presented; the models satisfy the following specifications: 1) the models have to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) the models have to consider generic railway tracks and generic wheel and rail profiles; 3) the models have to ensure a general and accurate handling of multiple contact without simplifying hypotheses on the problem geometry; in particular the models have to evaluate the number and position of the contact points and, for each point, the contact forces and torques; 4) the models have to be implementable directly online within the multibody models without look-up tables; 5) the models have to ensure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications; 6) the models have to be compatible with commercial multibody software (Simpack Rail, Adams Rail). The most innovative aspect of the new global contact models concerns the detection of the contact points. In particular, both models aim to reduce the dimension of the algebraic problem by means of suitable analytical techniques. This kind of reduction yields a high numerical efficiency that makes the online implementation of the new procedures possible and allows performance comparable with that of commercial multibody software. At the same time, the analytical approach ensures high accuracy and generality. Concerning the local (or differential) contact models, one new model satisfying the following specifications will be presented: 1) the model has to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) the model has to consider generic railway tracks and generic wheel and rail profiles; 3) the model has to ensure a general and accurate handling of multiple contact without simplifying hypotheses on the problem geometry; in particular the model has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements); 4) the model has to be implementable directly online within the multibody models; 5) the model has to ensure high numerical efficiency and reduced memory consumption in order to achieve a good integration between multibody and differential modeling (the basis of the local contact models); 6) the model has to be compatible with commercial multibody software (Simpack Rail, Adams Rail). In this case the most innovative aspects of the new local contact model concern the contact modeling (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem.
Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be very important for obtaining a good integration between multibody and differential modeling. At this point the contact models have been inserted into a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as benchmark is the Manchester Wagon, whose physical and geometrical characteristics are easily available in the literature. The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while the contact models have been implemented as C S-functions; this particular Matlab architecture allows the Matlab/Simulink and C/C++ environments to be connected efficiently. The 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) has then also been implemented in Simpack Rail, a widely tested and validated commercial multibody software package for railway vehicles. Finally, numerical simulations of the vehicle dynamics have been carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained by the Matlab/Simulink model and those obtained by the Simpack Rail model has allowed an accurate and reliable validation of the new contact models. To conclude this brief introduction to my Ph.D. thesis, we would like to thank Trenitalia and the Regione Toscana for the support provided throughout the Ph.D. activity. Moreover, we would also like to thank INTEC GmbH, the company that develops the Simpack Rail software, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.
Abstract:
Over many years, arguments have been put forward again and again that ascribe a more fundamental role to discrete spaces than to continuous ones. Our approach to the discrete world is guided by recent ideas from noncommutative geometry (NCG). For about 15 years there have been efforts, and also progress, toward a better understanding of physics with the help of noncommutative geometry. Just one of many possibilities is the reformulation of the Standard Model of elementary particle physics. Among other things, the Higgs mechanism can also be described geometrically: in NCG the Higgs field appears as a connection on a two-element set. Several goals are achieved in this work: the quantization of a zero-dimensional "space-time"; a consistent discretization for models in the noncommutative framework; Yang-Mills theories on a single point with a deformed Higgs potential; the extension to a "true" two-point space-time; the counting of Feynman graphs in a zero-dimensional theory; and Feynman rules. Particular attention is devoted to notions that have their origin in quantum field theory. In this setting such concepts can be discussed free of the complications that divergences or technical difficulties might otherwise cause: gauge fixing, ghost contributions, the Slavnov-Taylor identity and renormalization. An iterative, computer-algebra-assisted solution procedure for the Dyson-Schwinger equation that takes the renormalization procedure into account is also presented.
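As a small illustration of how Feynman graphs can be counted in a zero-dimensional theory (a generic phi-cubed toy model, not necessarily the model treated in this work), the zero-dimensional "path integral" reduces to Gaussian moments and its formal power series in the coupling counts vacuum diagrams weighted by their symmetry factors.

```python
import sympy as sp

g = sp.symbols('g')
order = 4  # expand the vacuum generating function up to g**order

# Z(g) = < exp(g*phi**3/3!) > with Gaussian weight exp(-phi**2/2)/sqrt(2*pi).
# Using the Gaussian moments <phi**(2n)> = (2n-1)!! turns the integral into a
# formal power series whose coefficients count vacuum Feynman graphs weighted
# by the inverse of their symmetry factors.
def gaussian_moment(n):
    return sp.factorial2(n - 1) if n % 2 == 0 else sp.Integer(0)

Z = sum(g**k / (sp.factorial(k) * 6**k) * gaussian_moment(3 * k)
        for k in range(order + 1))
print(sp.expand(Z))
# The g**2 coefficient is 5/24 = 1/8 + 1/12, the inverse symmetry factors of
# the two order-g**2 vacuum diagrams (the "dumbbell" and the "theta" graph).
```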
Abstract:
This thesis presents new methods to simulate systems with hydrodynamic and electrostatic interactions. Part 1 is devoted to computer simulations of Brownian particles with hydrodynamic interactions. The main influence of the solvent on the dynamics of Brownian particles is that it mediates hydrodynamic interactions. In the method, this is simulated by the numerical solution of the Navier-Stokes equation on a lattice. To this end, the Lattice-Boltzmann method is used, namely its D3Q19 version. This model is capable of simulating compressible flow, which gives us the advantage of being able to treat dense systems, in particular away from thermal equilibrium. The Lattice-Boltzmann equation is coupled to the particles via a friction force. In addition to this force, acting on point particles, we construct another coupling force, which comes from the pressure tensor. The coupling is purely local, i.e., the algorithm scales linearly with the total number of particles. In order to be able to map the physical properties of the Lattice-Boltzmann fluid onto a Molecular Dynamics (MD) fluid, the case of an almost incompressible flow is considered. The fluctuation-dissipation theorem for the hybrid coupling is analyzed, and a geometric interpretation of the friction coefficient in terms of a Stokes radius is given. Part 2 is devoted to the simulation of charged particles. We present a novel method for obtaining Coulomb interactions as the potential of mean force between charges which are dynamically coupled to a local electromagnetic field. This algorithm scales linearly, too. We focus on the Molecular Dynamics version of the method and show that it is intimately related to the Car-Parrinello approach, while being equivalent to solving Maxwell's equations with a freely adjustable speed of light. The Lagrangian formulation of the coupled particle-field system is derived. The quasi-Hamiltonian dynamics of the system is studied in great detail. For implementation on the computer, the equations of motion are discretized with respect to both space and time. The discretization of the electromagnetic fields on a lattice, as well as the interpolation of the particle charges onto the lattice, is given. The algorithm is as local as possible: only the nearest-neighbor sites of the lattice interact with a charged particle. Unphysical self-energies arise as a result of the lattice interpolation of charges and are corrected by a subtraction scheme based on the exact lattice Green's function. The method allows easy parallelization using standard domain decomposition. Some benchmark results of the algorithm are presented and discussed.
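A minimal sketch of the dissipative part of such a friction coupling (trilinear interpolation of the lattice fluid velocity to the off-lattice particle position and a Stokes-like drag) is given below; it is an illustration under simple assumptions, not the thesis' implementation, and it omits the thermal noise term required by the fluctuation-dissipation theorem as well as the equal and opposite momentum transfer back to the lattice nodes.

```python
import numpy as np

def coupling_force(particle_pos, particle_vel, u_lattice, zeta, a=1.0):
    """Frictional coupling force F = -zeta * (v_particle - u_fluid(r)), where
    u_fluid(r) is obtained by trilinear interpolation of the lattice-Boltzmann
    velocity field to the particle position (periodic boundaries assumed)."""
    shape = np.array(u_lattice.shape[:3])
    base = np.floor(particle_pos / a).astype(int)
    frac = particle_pos / a - base
    u_interp = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((frac[0] if dx else 1.0 - frac[0]) *
                     (frac[1] if dy else 1.0 - frac[1]) *
                     (frac[2] if dz else 1.0 - frac[2]))
                node = tuple((base + [dx, dy, dz]) % shape)
                u_interp += w * u_lattice[node]
    return -zeta * (particle_vel - u_interp)
```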
Abstract:
In this work we are concerned with the analysis and numerical solution of Black-Scholes-type equations arising in the modeling of incomplete financial markets, and with an inverse problem of determining the local volatility function in a generalized Black-Scholes model from observed option prices. In the first chapter a fully nonlinear Black-Scholes equation, which models transaction costs arising in option pricing, is discretized by a new high-order compact scheme. The compact scheme is proved to be unconditionally stable and non-oscillatory and is very efficient compared to classical schemes. Moreover, it is shown that the finite difference solution converges locally uniformly to the unique viscosity solution of the continuous equation. In the next chapter we turn to the calibration problem of computing local volatility functions from market data in a generalized Black-Scholes setting. We follow an optimal control approach in a Lagrangian framework. We show the existence of a global solution and study first- and second-order optimality conditions. Furthermore, we propose an algorithm based on a globalized sequential quadratic programming method and a primal-dual active set strategy, and present numerical results. In the last chapter we consider a quasilinear parabolic equation with quadratic gradient terms, which arises in the modeling of an optimal portfolio in incomplete markets. The existence of weak solutions is shown by considering a sequence of approximate solutions; the main difficulty of the proof is to infer the strong convergence of the sequence. Furthermore, we prove the uniqueness of weak solutions under a smallness condition on the derivatives of the covariance matrices with respect to the solution, but without additional regularity assumptions on the solution. The results are illustrated by a numerical example.
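For orientation, the generalized Black-Scholes operator with a local volatility function referred to above reads, in one standard convention,

\[
\partial_t V + \tfrac{1}{2}\,\sigma(S,t)^2\,S^2\,\partial_{SS}V + r\,S\,\partial_S V - r\,V = 0,
\qquad S > 0,\; t \in (0,T),
\]

complemented by the terminal payoff condition at t = T. In transaction-cost models of the type considered in the first chapter the equation typically becomes fully nonlinear because the volatility is additionally adjusted as a function of the second derivative $\partial_{SS}V$.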
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the energy-transport models for semiconductors that are later simulated in 2D. In this class of models the flow of charged particles (negatively charged electrons and so-called holes, which are quasi-particles of positive charge), as well as their energy distributions, is described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable from the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. From the user's perspective, the most important property of this discretization is that the normal fluxes are approximated continuously across element interfaces. It will be proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators; at that stage a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual-weighted-residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even when standard error estimators completely fail to reduce the error in an iterative mesh-refinement process.
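In its generic form (recalled here for orientation, not in the thesis' exact notation), the dual-weighted-residual method estimates the error in a functional output J by

\[
J(u) - J(u_h) \;\approx\; \sum_{K \in \mathcal{T}_h} \rho_K(u_h)\,\omega_K(z),
\]

where the $\rho_K$ are element residuals of the primal problem and the weights $\omega_K$ are obtained from the solution $z$ of an adjoint (dual) problem driven by $J$, in practice approximated by a higher-order or recovered discrete dual solution; the local products then steer the adaptive mesh refinement.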
Abstract:
My work concerns two different systems of equations used in the mathematical modeling of semiconductors and plasmas: the Euler-Poisson system and the quantum drift-diffusion system. The first is given by the Euler equations for the conservation of mass and momentum, together with a Poisson equation for the electrostatic potential. The second takes into account the physical effects due to the smallness of the devices (quantum effects). It is a simple extension of the classical drift-diffusion model, which consists of two continuity equations for the charge densities together with a Poisson equation for the electrostatic potential. Using an asymptotic expansion method, we study (in the steady-state case for a potential flow) the limits to zero of the three physical parameters which arise in the Euler-Poisson system: the electron mass, the relaxation time and the Debye length. For each limit, we prove the existence and uniqueness of the profiles in the asymptotic expansion, together with some error estimates. For a vanishing electron mass or a vanishing relaxation time, this method gives a new approach to the convergence of the Euler-Poisson system to the incompressible Euler equations. For a vanishing Debye length (the so-called quasineutral limit), we obtain a new approach to the existence of solutions when boundary layers can appear (i.e., when no compatibility condition is assumed). Moreover, using an iterative method and a finite volume scheme or a penalized mixed finite volume scheme, we numerically exhibit the smallness condition on the electron mass needed for the existence of solutions to the system, a condition which has already been established in the literature. In the quantum drift-diffusion model for the transient bipolar case in one space dimension, we show, by using a time discretization and energy estimates, the existence of solutions (for a general doping profile). We also rigorously prove the quasineutral limit (for a vanishing doping profile). Finally, using a new time discretization and an algorithmic construction of entropies, we prove some regularity properties for the solutions of the equation obtained in the quasineutral limit (for a vanishing pressure). This new regularity allows us to prove the positivity of the solutions to this equation, at least for sufficiently large times.
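In one common scaling (given here for orientation; signs and scalings vary with the carrier charge and with the nondimensionalization used), the isentropic Euler-Poisson system for semiconductors reads

\[
\partial_t n + \operatorname{div}(n u) = 0,\qquad
\partial_t(n u) + \operatorname{div}(n u \otimes u) + \nabla p(n) = n\,\nabla\phi - \frac{n u}{\tau},\qquad
\lambda^2\,\Delta\phi = n - C(x),
\]

where $\tau$ is the relaxation time, $\lambda$ the scaled Debye length and $C(x)$ the doping profile; the scaled electron mass multiplies the inertial terms of the momentum equation and is the third parameter whose vanishing limit is studied above.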
Abstract:
In this work quantum hydrodynamic (QHD) models are considered, which are used in particular in the modelling of semiconductor devices. The QHD model consists of the conservation equations for the particle density, the momentum and the energy density, including the quantum corrections given by the Bohm potential. At the beginning, an overview of the known results for QHD models neglecting collision effects is given; these models can be derived from a mixed-state Schrödinger system or from the Wigner equation. After reformulating the one-dimensional QHD equations with linear potential as a stationary Schrödinger equation, semianalytical versions of the QHD equations for the current-voltage characteristic are considered. Furthermore, viscous stabilizations of the QHD model are considered, and the numerical viscosity proposed by Gardner for the upwind finite-difference scheme is computed. Next, the viscous QHD model is derived from the Wigner equation with a Fokker-Planck collision operator. This model contains the physical viscosity introduced by the collision operator. The existence of solutions (with strictly positive particle density) is shown for the isothermal, stationary, one-dimensional viscous model with general data and nonhomogeneous boundary conditions. The estimates needed for this depend on the viscosity and therefore do not allow the passage to the inviscid limit. Numerical simulations of a resonant tunneling diode, modelled with the non-isothermal, stationary, one-dimensional viscous QHD model, show the influence of the viscosity on the solution. Using the quantum entropy minimization method developed by Degond and Ringhofer, the general QHD equations are derived from the Wigner-Boltzmann equation with a BGK collision operator. The derivation is based on a careful expansion of the quantum Maxwellian in powers of the scaled Planck constant. The resulting model also contains vorticity terms and dispersive velocity terms. With this general QHD model, the current-voltage characteristic of the resonant tunneling diode is obtained numerically in one dimension. The results show that the dispersive velocity term stabilizes the solution of the system.
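For reference, the Bohm quantum potential mentioned above is usually written (up to scaling constants) as

\[
V_B[n] \;=\; -\,\frac{\hbar^2}{2m}\,\frac{\Delta\sqrt{n}}{\sqrt{n}},
\]

and the quantum correction enters the QHD momentum balance through its gradient, giving in scaled variables a third-order dispersive term of the form $\tfrac{\varepsilon^2}{2}\, n\, \nabla\!\big(\Delta\sqrt{n}/\sqrt{n}\big)$.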
Abstract:
In this work the numerical coupling of thermal and electric network models with model equations for optoelectronic semiconductor devices is presented. Modified nodal analysis (MNA) is applied to model the electric networks. Thermal effects are modeled by an accompanying thermal network. Semiconductor devices are modeled by the energy-transport model, which allows for thermal effects. The energy-transport model is extended to a model for optoelectronic semiconductor devices. The temperature of the crystal lattice of the semiconductor devices is modeled by the heat flow equation. The corresponding heat source term is derived from thermodynamic and phenomenological considerations of the energy fluxes. The energy-transport model is coupled directly into the network equations, and the heat flow equation for the lattice temperature is coupled directly into the accompanying thermal network. The coupled thermal-electric network-device model results in a system of partial differential-algebraic equations (PDAE). Numerical examples are presented for the coupling of network equations and one-dimensional semiconductor equations. Hybridized mixed finite elements are applied for the space discretization of the semiconductor equations, and backward difference formulas are applied for the time discretization. Thus, positivity of the charge carrier densities and continuity of the current density are guaranteed even for the coupled model.
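For orientation, the charge-oriented MNA equations of an electric network can be written in the standard form (a generic recap, not the thesis' exact notation)

\[
A_C\,\frac{d}{dt}\,q\!\big(A_C^{\top}e\big) + A_R\,g\!\big(A_R^{\top}e\big) + A_L\,j_L + A_V\,j_V + A_I\,i_s(t) = 0,\qquad
\frac{d}{dt}\,\phi(j_L) - A_L^{\top}e = 0,\qquad
A_V^{\top}e - v_s(t) = 0,
\]

where $e$ are the node potentials, $j_L$ and $j_V$ the currents through inductors and voltage sources, the $A_\ast$ element-related incidence matrices, and $q$, $\phi$, $g$ the charge, flux and conductance relations; in coupled network-device models of this type, the currents delivered by the semiconductor device equations typically enter the first (Kirchhoff current law) block as additional terms.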
Abstract:
This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as nuclear fission reactors with liquid-metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions of the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. An object-oriented finite element library has been developed to obtain a parallel, multigrid computational implementation of the optimality system based on a multiphysics approach. Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
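A typical velocity-tracking cost functional for such problems (a generic example, not necessarily the exact functional used in the thesis) has the form

\[
\min_{u}\; \mathcal{J}(\mathbf{v}, u) \;=\; \frac{1}{2}\int_{\Omega} |\mathbf{v} - \mathbf{v}_d|^2 \, dx \;+\; \frac{\alpha}{2}\,\|u\|^2
\qquad \text{subject to the incompressible MHD equations,}
\]

where $\mathbf{v}_d$ is a desired velocity field, $u$ the control (here related to the boundary values of the magnetic field through the lifting function) and $\alpha > 0$ a regularization parameter; the Lagrange multiplier technique applied to this constrained minimization yields the optimality system mentioned above.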
Abstract:
This research has focused on the study of the behavior and collapse of masonry arch bridges. The last decades have seen an increasing interest in this structural type, which is still present and in use despite the passage of time and the change in the means of transport. Several strategies have been developed over time to simulate the response of this type of structure, although even today there is no generally accepted standard for the assessment of masonry arch bridges. The aim of this thesis is to compare the principal analytical and numerical methods existing in the literature on case studies, trying to highlight their strengths and weaknesses. The methods examined are mainly three: i) the Thrust Line Analysis Method; ii) the Mechanism Method; iii) the Finite Element Method. The Thrust Line Analysis Method and the Mechanism Method are analytical methods that derive from two of the fundamental theorems of plastic analysis, while the Finite Element Method is a numerical method that uses different discretization strategies to analyze the structure. Each method is applied to the case studies through computer-based implementations that allow a user-friendly application of the principles described. A particular closed-form approach, based on an elasto-plastic material model and developed by some Belgian researchers, is also studied. To compare the three methods, two different case studies have been analyzed: i) a generic masonry arch bridge with a single span; ii) a real masonry arch bridge, the Clemente Bridge, built over the Savio River in Cesena. In the analyses performed, all the models are two-dimensional in order to obtain results that are comparable across the different methods examined. The different methods have been compared with each other in terms of collapse load and hinge positions.
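As a reminder of the kinematic work equation on which the Mechanism Method rests under Heyman's classical hypotheses (no tensile strength, infinite compressive strength, no sliding between voussoirs), stated here generically rather than in the thesis' exact form: since the hinges of an assumed collapse mechanism dissipate no energy, the collapse multiplier follows from

\[
\lambda \sum_i P_i\,\delta_i + \sum_j W_j\,\eta_j = 0
\quad\Longrightarrow\quad
\lambda = -\,\frac{\sum_j W_j\,\eta_j}{\sum_i P_i\,\delta_i},
\]

where the $P_i$ are the live loads amplified by $\lambda$, the $W_j$ the self-weights, and $\delta_i$, $\eta_j$ the corresponding virtual displacements of the mechanism; the collapse load is the minimum of $\lambda$ over all kinematically admissible hinge configurations.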
Abstract:
Finite element techniques for solving the problem of the fluid-structure interaction of an elastic solid material in a laminar incompressible viscous flow are described. The mathematical problem consists of the Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian (ALE) formulation coupled with a nonlinear structure model, treating the problem as one continuum. The coupling between the structure and the fluid is enforced inside a monolithic framework which computes the fluid and structure unknowns simultaneously within a unique solver. We use the well-known Crouzeix-Raviart finite element pair for the discretization in space and the method of lines for the discretization in time. A stability result using the backward Euler time-stepping scheme for both the fluid and the solid parts and the finite element method for the space discretization has been proved. The resulting linear systems are solved by multilevel domain decomposition techniques. Our strategy is to solve several local subproblems over subdomain patches using the Schur-complement or GMRES smoother within a multigrid iterative solver. For the validation and evaluation of the accuracy of the proposed methodology, we present corresponding results for a set of two FSI benchmark configurations which describe the self-induced elastic deformation of a beam attached to a cylinder in a laminar channel flow, allowing stationary as well as periodically oscillating deformations, and for a benchmark proposed by COMSOL Multiphysics in which a narrow vertical structure attached to the bottom wall of a channel bends under the force due to both viscous drag and pressure. Then, as an example of fluid-structure interaction in biomedical problems, we consider an academic numerical test which consists of simulating pressure-wave propagation through a straight compliant vessel. All the tests show the applicability and the numerical efficiency of our approach for both two-dimensional and three-dimensional problems.
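In one standard notation, the incompressible Navier-Stokes equations in the ALE formulation mentioned above read

\[
\rho_f\left(\left.\frac{\partial \mathbf{v}}{\partial t}\right|_{\mathcal{A}} + \big((\mathbf{v} - \mathbf{w})\cdot\nabla\big)\mathbf{v}\right) = \operatorname{div}\,\boldsymbol{\sigma}_f + \rho_f\,\mathbf{f},\qquad
\operatorname{div}\,\mathbf{v} = 0,\qquad
\boldsymbol{\sigma}_f = -p\,\mathbf{I} + \mu_f\big(\nabla\mathbf{v} + \nabla\mathbf{v}^{\top}\big),
\]

where $\mathbf{w}$ is the mesh (ALE domain) velocity; the monolithic system is closed by the structural equations and by the kinematic and dynamic interface conditions (continuity of velocity and of traction) at the fluid-structure interface.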