Resumo:
We examine Weddell Sea deep water mass distributions with respect to the results from three different model runs using the oceanic component of the National Center for Atmospheric Research Community Climate System Model (NCAR-CCSM). One run is inter-annually forced by corrected NCAR/NCEP fluxes, while the other two are forced with the annual cycle obtained from the same climatology. One of the latter runs includes an interactive sea-ice model. Optimum Multiparameter analysis is applied to separate the deep water masses along the Greenwich Meridian section (within the Weddell Sea only) to measure the degree of realism obtained in the simulations. First, we describe the distribution of the simulated deep water masses using observed water type indices. Since, as expected, the observed indices do not provide an acceptable representation of the Weddell Sea deep water masses, they are specifically adjusted for each simulation. Differences among the water masses' representations in the three simulations are quantified through their root-mean-square differences. The results point out the need for a better representation (and inclusion) of ice-related processes in order to improve the oceanic characteristics and variability of dense Southern Ocean water masses in the outputs of the NCAR-CCSM model, and probably of other ocean and climate models.
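At its core, Optimum Multiparameter analysis solves a small least-squares system for the mixing fractions of predefined water types. A minimal sketch of that step, with purely illustrative water type indices (not the observed or adjusted Weddell Sea values):

```python
import numpy as np

# Source water type matrix G: one row per conservative tracer plus a mass
# conservation row; one column per water mass. All values are illustrative,
# not the observed or adjusted Weddell Sea indices.
G = np.array([
    [0.3,  -0.7,  -1.9],      # potential temperature (degC)
    [34.68, 34.66, 34.64],    # salinity
    [1.0,   1.0,   1.0],      # mixing fractions sum to one
])

# Build a synthetic observation from known mixing fractions, then check
# that the least-squares solve recovers them.
true_x = np.array([0.5, 0.3, 0.2])
d = G @ true_x

x, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(x, 6))  # recovers [0.5, 0.3, 0.2]
```

In practice each tracer row is weighted by its measurement uncertainty and the fractions are constrained to be non-negative, which turns the plain `lstsq` call into a constrained least-squares problem.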
Resumo:
AIM: To explore the biomechanical effects of different implantation bone levels of Morse taper implants, employing finite element analysis (FEA). METHODS: Dental implants (Titamax CM) of 4 x 13 mm and 4 x 11 mm, and their respective abutments of 3.5 mm height, simulating a screwed premolar metal-ceramic crown, had their design performed using the software Ansys Workbench 10.0. They were positioned in bone blocks covered by a 2.5 mm thick mucosa. The cortical bone was designed with 1.5 mm thickness and the trabecular bone completed the bone block. Four groups were formed: group 11CBL (11 mm implant length at cortical bone level), group 11TBL (11 mm implant length at trabecular bone level), group 13CBL (13 mm implant length at cortical bone level) and group 13TBL (13 mm implant length at trabecular bone level). Oblique 200 N loads were applied. Von Mises equivalent stresses in cortical and trabecular bone were evaluated with the same design program. RESULTS: The results were shown qualitatively and quantitatively on standard scales for each type of bone. The results suggest that positioning the implant completely in trabecular bone is detrimental with respect to the generated stresses. Implantation at cortical bone level has advantages with respect to better anchoring and locking, reflected in a better dissipation of the stresses along the implant/bone interfaces. In addition, anchoring the apical region of the implant in cortical bone is of great value for improving stabilization and, consequently, stress distribution. CONCLUSIONS: Positioning the implant slightly below the bone crest brings advantages, such as better long-term predictability with respect to the expected neck bone loss.
Resumo:
A finite-strain study in the Gran Paradiso massif of the Italian Western Alps was carried out to elucidate whether ductile strain shows a relationship to nappe contacts and to shed light on the nature of the subhorizontal foliation typical of the gneiss nappes in the Alps. The Rf/φ and Fry methods were applied to feldspar porphyroclasts from 143 augen gneiss and 11 conglomerate samples of the Gran Paradiso unit (the upper tectonic unit of the Gran Paradiso massif), as well as 9 augen gneiss (Erfaulet granite) and 3 quartzite conglomerate samples from the underlying Erfaulet unit (the lower unit of the massif), and 1 mica schist sample. Microstructures and thermobarometric data show that feldspar ductility at temperatures >~450°C occurred only during high-pressure metamorphism, when the rocks were underplated beneath the overriding Adriatic plate. Therefore, the finite-strain data can be related to high-pressure metamorphism in the Alpine subduction zone. The augen gneiss was heterogeneously deformed, and axial ratios of the strain ellipse in XZ sections range from 2.1 to 69.8. The long axes of the finite-strain ellipsoids trend W/WNW and the short axes are subvertical, associated with a subhorizontal foliation. The strain magnitudes do not increase towards the nappe contacts. Geochemical work shows that the accumulation of finite strain was not associated with any significant volume strain. Hence, the data indicate a flattening strain type in the Gran Paradiso unit and a constrictional strain type in the Erfaulet unit, and demonstrate deviations from simple shear. In addition, electron microprobe work was undertaken to determine whether the analysed fabrics formed during high-pressure metamorphism. The chemistry of phengites in the studied samples suggests that deformation and final structural juxtaposition of the Gran Paradiso unit against the Erfaulet unit took place during high-pressure metamorphism.
Nappe stacking occurred early during subduction, probably by brittle imbrication, and ductile strain was subsequently superimposed on, and modified, the nappe structure during high-pressure underplating in the Alpine subduction zone. The accumulation of ductile strain during underplating was not by simple shear and involved a component of vertical shortening, which caused the subhorizontal foliation in the Gran Paradiso massif. It is concluded that this foliation formed while the nappes were thrust onto each other, suggesting that nappe stacking was associated with vertical shortening. The primary evidence for this interpretation is an attenuated metamorphic section, with high-pressure metamorphic rocks of the Gran Paradiso unit juxtaposed against the Erfaulet unit. Therefore, exhumation during high-pressure metamorphism in the Alpine subduction zone involved a component of vertical shortening, which is responsible for the subhorizontal foliation within the nappes.
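The axial ratio of the finite-strain ellipse quoted above (2.1 to 69.8 in XZ sections) is obtained from a deformation via the left Cauchy-Green tensor; a minimal sketch with an invented, volume-preserving pure-shear deformation (not data from the study):

```python
import numpy as np

# Illustrative 2D deformation gradient in the XZ plane: pure shear that
# stretches X by 2 and shortens Z by 1/2 (volume-preserving, det F = 1).
F = np.array([[2.0, 0.0],
              [0.0, 0.5]])

# Left Cauchy-Green tensor B = F F^T; its eigenvalues are the squared
# semi-axes of the finite-strain ellipse.
B = F @ F.T
lam = np.sort(np.linalg.eigvalsh(B))[::-1]

R_xz = np.sqrt(lam[0] / lam[1])   # axial ratio of the strain ellipse
print(R_xz)                       # 4.0 for this example
```

The Rf/φ and Fry methods estimate exactly this ratio from deformed elliptical markers (porphyroclasts) rather than from a known deformation gradient.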
Resumo:
In this work we are concerned with the analysis and numerical solution of Black-Scholes type equations arising in the modeling of incomplete financial markets and an inverse problem of determining the local volatility function in a generalized Black-Scholes model from observed option prices. In the first chapter a fully nonlinear Black-Scholes equation which models transaction costs arising in option pricing is discretized by a new high order compact scheme. The compact scheme is proved to be unconditionally stable and non-oscillatory and is very efficient compared to classical schemes. Moreover, it is shown that the finite difference solution converges locally uniformly to the unique viscosity solution of the continuous equation. In the next chapter we turn to the calibration problem of computing local volatility functions from market data in a generalized Black-Scholes setting. We follow an optimal control approach in a Lagrangian framework. We show the existence of a global solution and study first- and second-order optimality conditions. Furthermore, we propose an algorithm that is based on a globalized sequential quadratic programming method and a primal-dual active set strategy, and present numerical results. In the last chapter we consider a quasilinear parabolic equation with quadratic gradient terms, which arises in the modeling of an optimal portfolio in incomplete markets. The existence of weak solutions is shown by considering a sequence of approximate solutions. The main difficulty of the proof is to infer the strong convergence of the sequence. Furthermore, we prove the uniqueness of weak solutions under a smallness condition on the derivatives of the covariance matrices with respect to the solution, but without additional regularity assumptions on the solution. The results are illustrated by a numerical example.
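A minimal explicit finite-difference sketch for the linear Black-Scholes equation illustrates the discretization idea; the thesis itself analyses a high-order *compact* scheme for a nonlinear transaction-cost model, and all parameters below are hypothetical:

```python
import math

# Explicit finite-difference scheme for the linear Black-Scholes equation
# (illustration only; not the thesis's high-order compact scheme).
S_max, K, r, sigma, T = 200.0, 100.0, 0.05, 0.2, 1.0
M, N = 100, 5000                 # grid sizes chosen for explicit stability
dS, dt = S_max / M, T / N

# Terminal condition: European call payoff
V = [max(i * dS - K, 0.0) for i in range(M + 1)]

# March backwards in time from maturity to today
for n in range(1, N + 1):
    tau = n * dt                 # time remaining to maturity
    Vn = V[:]
    for i in range(1, M):
        S = i * dS
        delta = (Vn[i + 1] - Vn[i - 1]) / (2.0 * dS)            # dV/dS
        gamma = (Vn[i + 1] - 2.0 * Vn[i] + Vn[i - 1]) / dS**2   # d2V/dS2
        V[i] = Vn[i] + dt * (0.5 * sigma**2 * S**2 * gamma
                             + r * S * delta - r * Vn[i])
    V[0], V[M] = 0.0, S_max - K * math.exp(-r * tau)            # boundaries

price_at_100 = V[50]             # value at S = 100 (at the money)
print(round(price_at_100, 2))    # close to the closed-form value 10.45
```

The explicit scheme needs the severe step restriction dt ≲ dS²/(σ²S²); the unconditional stability of the compact scheme mentioned in the abstract removes exactly this constraint.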
Resumo:
In this thesis a mathematical model is derived that describes the charge and energy transport in semiconductor devices such as transistors, and numerical simulations of these physical processes are performed. To accomplish this, methods of theoretical physics, functional analysis, numerical mathematics and computer programming are applied. After an introduction to the state of the art in semiconductor device simulation and a brief historical review, attention shifts to the construction of the model that serves as the basis of the subsequent derivations. The starting point is an important equation from the theory of dilute gases, from which the model equations are derived and specified by means of a series expansion method. This is done in a multi-stage derivation process, which is largely taken from a published paper and does not constitute the focus of this thesis. In the following part we specify the mathematical setting and make the model assumptions precise, using methods of functional analysis. Since the equations we deal with are coupled, we are concerned with a nonstandard problem, whereas the theory of scalar elliptic equations is by now well established. Subsequently, we address the numerical discretization of the equations. A special finite-element method is used for the discretization; this particular approach is necessary to make the numerical results suitable for practical application. By a series of transformations of the discrete model we derive a system of algebraic equations amenable to numerical evaluation. Using purpose-built computer programs we solve the equations to obtain approximate solutions. These programs are based on new, specialized iteration procedures that were developed and thoroughly tested within the frame of this research work.
Due to their importance and novelty, they are explained and demonstrated in detail. We compare these new iterations with a standard method, complemented by a feature to fit the current context. A further innovation is the computation of solutions in three-dimensional domains, which is still rare. Special attention is paid to the applicability of the 3D simulation tools, and the programs are designed to have justifiable computational complexity. The simulation results for some models of contemporary semiconductor devices are shown and commented on in detail. Finally, we give an outlook on future developments and enhancements of the models and of the algorithms used.
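The finite-element discretization step can be illustrated on a far simpler model problem than the coupled semiconductor equations: linear elements for -u'' = f on (0, 1) with homogeneous Dirichlet data (a generic textbook sketch, not the thesis's specialized scheme):

```python
import numpy as np

# Linear finite elements for -u'' = f on (0, 1), u(0) = u(1) = 0.
n = 50                       # interior nodes
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Stiffness matrix of hat functions: tridiag(-1, 2, -1) / h
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

# Load vector for f(x) = pi^2 sin(pi x); the exact solution is u = sin(pi x)
b = h * np.pi**2 * np.sin(np.pi * x)

u = np.linalg.solve(A, b)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
print(err)  # of order h^2
```

The coupled drift-diffusion-type systems of the thesis lead to nonlinear versions of such algebraic systems, which is why specialized iteration procedures are required instead of a single linear solve.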
Resumo:
This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as fission nuclear reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve the control on the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach. 
Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
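The optimality-system/gradient-algorithm loop has the following finite-dimensional caricature: a quadratic cost constrained by a linear "state equation", with the gradient of the reduced functional evaluated via an adjoint solve. All matrices and data below are arbitrary illustrations, not the MHD system:

```python
import numpy as np

# Minimize J(u) = 0.5*||y - y_d||^2 + 0.5*alpha*||u||^2  subject to  A y = B u.
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0, 0.0],
              [0.0, 0.0, 2.0, 1.0],
              [0.0, 0.0, 0.0, 2.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
y_d = np.array([1.0, 2.0, 3.0, 4.0])
alpha = 0.1

def reduced_J_and_grad(u):
    y = np.linalg.solve(A, B @ u)         # state solve
    p = np.linalg.solve(A.T, y - y_d)     # adjoint solve
    J = 0.5 * np.dot(y - y_d, y - y_d) + 0.5 * alpha * np.dot(u, u)
    return J, B.T @ p + alpha * u         # gradient of the reduced functional

u = np.zeros(2)
for _ in range(500):                      # plain gradient descent
    _, g = reduced_J_and_grad(u)
    u -= 0.5 * g

_, g = reduced_J_and_grad(u)
print(np.linalg.norm(g))  # near zero: first-order optimality reached
```

In the thesis the state and adjoint solves are full finite-element MHD solves, but the structure of the iteration (state solve, adjoint solve, gradient step) is the same.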
Resumo:
This thesis is devoted to the spectral theory of differential operators on metric graphs and of indefinite differential operators on bounded domains. It consists of two parts. In the first, finite, not necessarily compact, metric graphs and the Hilbert spaces of square-integrable functions on them are considered. All quasi-m-accretive Laplacians on such graphs are characterized, and bounds on the negative eigenvalues of self-adjoint Laplacians are derived. Furthermore, the well-posedness of a mixed diffusion and transport problem on compact graphs is investigated using semigroup methods. A generalization of the indefinite operator $-\tfrac{d}{dx}\,\mathrm{sgn}(x)\,\tfrac{d}{dx}$ from intervals to metric graphs is introduced, and the spectral and scattering theory of its self-adjoint realizations is discussed in detail. In the second part, operators associated with indefinite forms of the type $\langle \nabla v, A(\cdot)\nabla u\rangle$ with $u, v \in H_0^1(\Omega) \subset L^2(\Omega)$ and $\Omega \subset \mathbb{R}^d$ bounded are studied. In dimension $d = 1$ the eigenvalue behaviour follows a generalized Weyl asymptotics, and for $d \geq 2$ bounds on the eigenvalues are proved. The question of when indefinite form methods are applicable in dimensions $d \geq 2$ remains open and is discussed.
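On the simplest metric graph, a single interval with Dirichlet endpoints, the Laplacian eigenvalues are (kπ)² and obey the classical Weyl asymptotics; a small finite-difference sketch (illustrative only, not part of the thesis):

```python
import numpy as np

# Dirichlet Laplacian on the interval [0, 1] (the simplest metric graph):
# the exact eigenvalues are (k*pi)^2. A finite-difference discretization
# reproduces the low ones, and hence the Weyl counting behaviour.
n = 500
h = 1.0 / (n + 1)
L = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

eig = np.sort(np.linalg.eigvalsh(L))
exact = (np.arange(1, 6) * np.pi) ** 2
print(np.round(eig[:5], 2))   # close to pi^2, 4 pi^2, ..., 25 pi^2
```

On a genuine metric graph the same computation couples the interval blocks through vertex (Kirchhoff-type) conditions, which is where the characterization of admissible Laplacians in the thesis enters.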
Resumo:
The shallow water equations (SWE) are a hyperbolic system of balance laws that provide adequate approximations to large-scale flows in oceans, rivers, and the atmosphere; mass and momentum are conserved. We distinguish two characteristic speeds: the advection speed, i.e. the speed of mass transport, and the gravity wave speed, i.e. the speed of the surface waves that carry energy and momentum. The Froude number is the dimensionless ratio of a reference advection speed to a reference gravity wave speed. For the applications mentioned above it is typically very small, e.g. 0.01. Time-explicit finite volume methods are the methods most often used for the numerical solution of hyperbolic balance laws. The CFL stability condition must then be satisfied, and the time increment is roughly proportional to the Froude number. Consequently, for small Froude numbers, say below 0.2, the computational cost becomes high; moreover, the numerical solutions are dissipative. It is well known that, under suitable conditions, the solutions of the SWE converge to the solutions of the lake equations (zero-Froude-number SWE) as the Froude number tends to zero. In this limit the equations change type from hyperbolic to hyperbolic-elliptic. Furthermore, at small Froude numbers the order of convergence may drop or the numerical scheme may break down. In particular, time-explicit schemes have been observed to exhibit incorrect asymptotic behaviour (with respect to the Froude number), which may cause these effects. Oceanographic and atmospheric flows are typically small perturbations of an underlying equilibrium state. We want numerical schemes for balance laws to preserve certain equilibrium states exactly; otherwise, spurious flows can be generated by the scheme. The approximation of the source term is therefore essential. Numerical schemes that preserve equilibrium states are called well-balanced.

In this thesis we split the SWE into a stiff linear part and a non-stiff part in order to circumvent the severe time step restriction imposed by the CFL condition. The stiff part is approximated implicitly, the non-stiff part explicitly. To this end we use IMEX (implicit-explicit) Runge-Kutta and IMEX multistep time discretizations. The spatial discretization is carried out with the finite volume method. The stiff part is approximated either with finite differences or in a genuinely multidimensional manner. For the multidimensional approximation we use approximate evolution operators that take all infinitely many directions of information propagation into account. The explicit terms are approximated with standard numerical fluxes. We thus obtain a stability condition analogous to that of a purely advective flow, i.e. the time increment grows by a factor equal to the reciprocal of the Froude number. The schemes derived in this thesis are asymptotic preserving and well-balanced. The asymptotic-preserving property ensures that the numerical solution has the "correct" asymptotic behaviour with respect to small Froude numbers. We present first- and second-order schemes. Numerical results confirm the order of convergence, as well as stability, well-balancedness, and the asymptotic-preserving property. In particular, for some schemes we observe that the order of convergence is almost independent of the Froude number.
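The IMEX idea, treating the stiff part implicitly and the non-stiff part explicitly, can be sketched on a scalar model problem with a stiff relaxation term (hypothetical rates, not the shallow water system):

```python
import math

# Scalar IMEX Euler sketch: the stiff relaxation toward cos(t) (rate lam)
# is treated implicitly, the non-stiff forcing -sin(t) explicitly. The
# exact solution of this model problem is y(t) = cos(t).
lam, dt, T = 1000.0, 0.01, 1.0   # explicit Euler alone would need dt < 2/lam

y, t = 1.0, 0.0
for _ in range(round(T / dt)):
    t_new = t + dt
    # y_new = y + dt*(-sin(t)) + dt*(-lam)*(y_new - cos(t_new));
    # the implicit part is linear in y_new, so it can be solved in closed form:
    y = (y - dt * math.sin(t) + dt * lam * math.cos(t_new)) / (1.0 + dt * lam)
    t = t_new

print(round(y, 4), round(math.cos(T), 4))  # both close to 0.5403
```

The time step here is five times larger than the explicit stability limit 2/lam, yet the scheme remains stable and accurate; this mirrors how the implicit treatment of the fast gravity-wave terms lifts the Froude-number-proportional CFL restriction.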
Resumo:
Groundwater represents one of the most important resources of the world, and it is essential to prevent its pollution and to consider remediation interventions in case of contamination. According to the scientific community, the characterization and management of contaminated sites have to be performed in terms of contaminant fluxes, considering their spatial and temporal evolution. One of the most suitable approaches to determine the spatial distribution of pollutants and to quantify contaminant fluxes in groundwater is the use of control panels. The determination of contaminant mass flux requires measurement of the contaminant concentration in the moving phase (water) and of the velocity/flux of the groundwater. In this Master Thesis a new solute mass flux measurement approach is proposed, based on an integrated control panel type methodology combined with the Finite Volume Point Dilution Method (FVPDM) for the monitoring of transient groundwater fluxes. Moreover, a new adsorption passive sampler, which allows capturing the variation of solute concentration with time, is designed. The present work contributes to the development of this approach on three key points. First, the ability of the FVPDM to monitor transient groundwater fluxes was verified during a step-drawdown test at the experimental site of Hermalle-sous-Argentau (Belgium). The results showed that this method can be used, with optimal results, to follow transient groundwater fluxes. Moreover, performing the FVPDM in several piezometers during a pumping test makes it possible to determine the different flow rates and flow regimes that can occur in the various parts of an aquifer. The second field test, aimed at determining the representativity of a control panel for measuring mass flux in groundwater, underlined that wrong evaluations of Darcy fluxes and discharge surfaces can lead to an incorrect estimation of mass fluxes, and that this technique has to be used with precaution. 
Thus, a detailed geological and hydrogeological characterization must be conducted before applying this technique. Finally, the third outcome of this work concerned laboratory experiments. The tests conducted on several types of adsorption materials (Oasis HLB cartridge, TDS-ORGANOSORB 10 and TDS-ORGANOSORB 10-AA), in order to determine the optimal medium for dimensioning the passive sampler, highlighted the necessity of finding a material with a reversible adsorption tendency to fully satisfy the requirements of the new passive sampling technique.
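The classical point-dilution relation underlying the FVPDM (the full method adds a continuous tracer injection term) lets one recover the transit flow through a well from the decay of tracer concentration; a sketch with synthetic values:

```python
import math

# Classical point-dilution relation: with transit flow Q_t through a well
# screen of mixing volume V, the tracer concentration decays as
# C(t) = C0 * exp(-Q_t * t / V). All values here are synthetic.
V, Q_true, C0 = 2.0e-3, 5.0e-7, 100.0      # m^3, m^3/s, mg/L

times = [0.0, 600.0, 1200.0, 1800.0, 2400.0]                # s
conc = [C0 * math.exp(-Q_true * t / V) for t in times]

# Recover Q_t from the least-squares slope of ln C versus t
n = len(times)
mt = sum(times) / n
ml = sum(math.log(c) for c in conc) / n
slope = (sum((t - mt) * (math.log(c) - ml) for t, c in zip(times, conc))
         / sum((t - mt) ** 2 for t in times))
Q_est = -slope * V
print(Q_est)  # close to 5e-7
```

Under transient conditions the slope of ln C changes with time, which is exactly what repeating this fit over a moving window during a pumping test exploits.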
Resumo:
The interest in automatic volume meshing for finite element analysis (FEA) has grown since the advent of microfocus CT (μCT), whose high resolution allows the assessment of mechanical behaviour at high precision. Nevertheless, the basic meshing approach of generating one hexahedron per voxel produces jagged edges. To prevent this effect, smoothing algorithms have been introduced to enhance the topology of the mesh. However, whether smoothing also improves the accuracy of voxel-based meshes in clinical applications remains an open question. There is a trade-off between smoothing and the quality of the elements in the mesh: distorted elements may be produced by excessive smoothing and reduce the accuracy of the mesh. In the present work, the influence of smoothing on the accuracy of voxel-based meshes in micro-FE was assessed. An accurate 3D model of a trabecular structure with known apparent mechanical properties was used as a reference model. Virtual CT scans of this reference model (with resolutions of 16, 32 and 64 μm) were then created and used to build voxel-based meshes of the microarchitecture. The effects of smoothing on the apparent mechanical properties of the voxel-based meshes, as compared to the reference model, were evaluated. The apparent Young's moduli of the smoothed voxel-based meshes were significantly closer to those of the reference model for the 16 and 32 μm resolutions. Improvements were not significant for 64 μm, due to the loss of trabecular connectivity in the model. This study shows that smoothing offers a real benefit to voxel-based meshes used in micro-FE. It might also broaden voxel-based meshing to other biomechanical domains where it was not used previously due to lack of accuracy. As an example, this work will be used in the framework of the European project ContraCancrum, which aims at providing patient-specific simulation of tumour development in brain and lungs for oncologists. 
For this type of clinical application, such a fast, automatic, and accurate generation of the mesh is of great benefit.
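The basic voxel-based meshing step described above, one hexahedron per occupied voxel, can be sketched in a few lines (smoothing would subsequently relocate the surface nodes; not shown):

```python
import numpy as np

# Every occupied voxel of a segmented image becomes one hexahedral element;
# corner nodes shared by neighbouring voxels are created only once.
occupied = np.zeros((2, 2, 2), dtype=bool)
occupied[0, 0, 0] = occupied[1, 0, 0] = True   # two adjacent voxels

nodes = {}      # (i, j, k) grid corner -> node id
elements = []   # 8 node ids per hexahedron

def node_id(corner):
    if corner not in nodes:
        nodes[corner] = len(nodes)
    return nodes[corner]

for i, j, k in zip(*np.nonzero(occupied)):
    corners = [(i, j, k), (i+1, j, k), (i+1, j+1, k), (i, j+1, k),
               (i, j, k+1), (i+1, j, k+1), (i+1, j+1, k+1), (i, j+1, k+1)]
    elements.append([node_id(c) for c in corners])

print(len(elements), len(nodes))  # 2 hexahedra sharing a face -> 12 nodes
```

The jagged surface mentioned in the abstract comes directly from these axis-aligned hexahedra; smoothing algorithms move the boundary nodes while trying not to distort the elements too much.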
Resumo:
Identifying and comparing different steady states is an important task for clinical decision making, where data from disparate sources, comprising diverse patient status information, have to be interpreted. In order to compare results, an expressive representation is key. In this contribution we suggest a criterion to calculate a context-sensitive value based on variance analysis and discuss its advantages and limitations with reference to clinical data obtained during anesthesia. Different plasma target levels of the anesthetic propofol were preset to reach and maintain clinically desirable steady-state conditions with target controlled infusion (TCI). At the same time, systolic blood pressure was monitored, depth of anesthesia was recorded using the bispectral index (BIS), and propofol plasma concentrations were determined in venous blood samples. The presented analysis of variance (ANOVA) is used to quantify how accurately steady states can be monitored and compared using the three methods of measurement.
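A one-way ANOVA reduces to comparing between-group and within-group variance; a minimal sketch with made-up measurements (one group per hypothetical target level, not the study's data):

```python
# One-way ANOVA by hand: F = between-group variance / within-group variance.
groups = [
    [118, 122, 120, 121],   # e.g. systolic pressure at target level 1
    [110, 112, 111, 113],   # target level 2
    [102, 101, 104, 103],   # target level 3
]

k = len(groups)
n = sum(len(g) for g in groups)
grand = sum(sum(g) for g in groups) / n

ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

F = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(F, 2))  # a large F: the group means differ far more than
                    # the within-group scatter
```

A large F relative to the F(k-1, n-k) distribution indicates that the steady states are distinguishable; the within-group term quantifies how accurately each steady state is monitored.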
Resumo:
When considering NLO corrections to thermal particle production in the “relativistic” regime, in which the invariant mass squared of the produced particle is K² ~ (πT)², the production rate can be expressed as a sum of a few universal “master” spectral functions. Taking the most complicated 2-loop master as an example, a general strategy for obtaining a convergent 2-dimensional integral representation is suggested. The analysis applies to both bosonic and fermionic statistics, and shows that for this master the non-relativistic approximation is only accurate for K² ~ (8πT)², whereas the zero-momentum approximation works surprisingly well. Once the simpler masters have been similarly resolved, NLO results for quantities such as the right-handed neutrino production rate from a Standard Model plasma or the dilepton production rate from a QCD plasma can be assembled for K² ~ (πT)².
Resumo:
BACKGROUND Aortic dissection is a severe pathological condition in which blood penetrates between layers of the aortic wall and creates a duplicate channel, the false lumen. This considerable change in aortic morphology alters the hemodynamics dramatically and, in the case of rupture, induces markedly high rates of morbidity and mortality. METHODS In this study, we establish a patient-specific computational model and simulate the pulsatile blood flow within the dissected aorta. The k-ω SST turbulence model is employed to represent the flow, and the finite volume method is applied for the numerical solution. Our emphasis is on the flow exchange between the true and false lumen during the cardiac cycle and on quantifying the flow across specific passages. Loading distributions, including pressure and wall shear stress, have also been investigated, and results of direct simulations are compared with solutions employing appropriate turbulence models. RESULTS Our results indicate that (i) high velocities occur at the periphery of the entries; (ii) for the case studied, approximately 40% of the blood flow passes through the false lumen during a heartbeat cycle; (iii) higher pressures are found at the outer wall of the dissection, which may induce further dilation of the pseudo-lumen; (iv) the highest wall shear stresses occur around the entries, perhaps indicating the vulnerability of this region to further splitting; and (v) laminar simulations with adequately fine mesh resolutions, especially refined near the walls, can capture flow patterns similar to the (coarser mesh) turbulent results, although the absolute magnitudes computed are in general smaller. CONCLUSIONS The patient-specific model of aortic dissection provides detailed information on blood transport within the true and false lumen and quantifies the loading distributions over the aorta and dissection walls. 
This contributes to evaluating potential thrombotic behavior in the false lumen and is pivotal in guiding endovascular intervention. Moreover, as a computational study, mesh requirements to successfully evaluate the hemodynamic parameters have been proposed.
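The kind of back-of-the-envelope flow characterization used to motivate a turbulence model choice can be sketched as follows; the values are typical literature magnitudes for aortic flow, not the patient-specific data of this study:

```python
import math

# Dimensionless characterization of pulsatile aortic flow.
rho = 1060.0        # blood density, kg/m^3
mu = 3.5e-3         # dynamic viscosity, Pa.s
D = 0.03            # aortic diameter, m
U = 0.6             # peak mean velocity, m/s
f = 1.2             # heart rate, Hz (72 bpm)

Re = rho * U * D / mu                                     # peak Reynolds number
alpha = (D / 2) * math.sqrt(2 * math.pi * f * rho / mu)   # Womersley number
print(round(Re), round(alpha, 1))  # Re in the transitional range, alpha >> 1
```

A peak Reynolds number of a few thousand puts the flow in the transitional regime, which is why transitional/turbulence models such as k-ω SST are considered alongside well-resolved laminar simulations.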
Resumo:
BACKGROUND Medial open wedge high tibial osteotomy is a well-established procedure for the treatment of unicompartmental osteoarthritis and symptomatic varus malalignment. We hypothesized that different fixation devices generate different fixation stability profiles for the various wedge sizes in a finite element (FE) analysis. METHODS Four types of fixation were compared: 1) first- and 2) second-generation Puddu plates, and the TomoFix plate 3) with and 4) without bone graft. Cortical and cancellous bone were modelled, and five different opening wedge sizes were studied for each model. Outcome measures included: 1) stresses in the bone, 2) relative displacement of the proximal and distal tibial fragments, 3) stresses in the plates, and 4) stresses on the upper and lower screw surfaces in the screw channels. RESULTS The highest load for all fixation types occurred along the plate axis. For the vast majority of wedge sizes and fixation types, shear stress (von Mises stress) dominated in the bone, independent of fixation type. The relative displacements of the tibial fragments were low (in the μm range). With increasing wedge size this displacement tended to increase for both Puddu plates and for the TomoFix plate with bone graft; for the TomoFix plate without bone graft the opposite trend was observed. For all fixation types, the stresses at the screw-bone contact areas pulled at the screws and exceeded the allowable threshold of 1.2 MPa for at least one screw surface. Of the six screws that were studied (twelve screw surfaces), the TomoFix plate with bone graft showed excess stress on one out of twelve surfaces, and without bone graft on five out of twelve. With the Puddu plates, excess stress occurred on the majority of screw surfaces. CONCLUSIONS The different fixation devices generate different fixation stability profiles for different opening wedge sizes. 
Based on the computational simulations, none of the studied osteosynthesis fixation types warranted unrestricted full weight bearing per se. The highest fixation stability was observed for the TomoFix plates and the lowest for the first-generation Puddu plate. These findings were obtained in theoretical models and need to be validated in controlled clinical settings.
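The von Mises equivalent stress reported by such FE analyses is a simple function of the stress tensor components; a sketch with arbitrary test values (the 1.2 MPa figure echoes the screw-bone threshold cited above):

```python
import math

# Von Mises equivalent stress from the six components of a stress tensor.
def von_mises(sx, sy, sz, txy, tyz, tzx):
    return math.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                     + 3.0 * (txy**2 + tyz**2 + tzx**2))

# Uniaxial check: pure axial stress gives von Mises equal to that stress
print(von_mises(1.2, 0, 0, 0, 0, 0))            # 1.2 (MPa)

# Pure shear: von Mises = sqrt(3) * tau, so shear dominates quickly
print(round(von_mises(0, 0, 0, 1.0, 0, 0), 4))  # 1.7321
```

Because pure shear contributes a factor of √3, shear-dominated load paths exceed a von Mises threshold sooner than the raw shear values suggest, consistent with the shear stress dominance reported in the results.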