959 results for Finite Volume Methods
Abstract:
Background/aims - To determine which biometric parameters provide optimum predictive power for ocular volume. Methods - Sixty-seven adult subjects were scanned with a Siemens 3-T MRI scanner. Mean spherical error (MSE) (D) was measured with a Shin-Nippon autorefractor, and a Zeiss IOLMaster was used to measure (in mm) axial length (AL), anterior chamber depth (ACD) and corneal radius (CR). Total ocular volume (TOV) was calculated from T2-weighted MRIs (voxel size 1.0 mm3) using an automatic voxel counting and shading algorithm. Each MR slice was subsequently edited manually in the axial, sagittal and coronal planes, the latter enabling location of the posterior pole of the crystalline lens and partitioning of TOV into anterior (AV) and posterior volume (PV) regions. Results - Mean values (±SD) for MSE (D), AL (mm), ACD (mm) and CR (mm) were −2.62±3.83, 24.51±1.47, 3.55±0.34 and 7.75±0.28, respectively. Mean values (±SD) for TOV, AV and PV (mm3) were 8168.21±1141.86, 1099.40±139.24 and 7068.82±1134.05, respectively. TOV showed significant correlation with MSE, AL, PV (all p<0.001), CR (p=0.043) and ACD (p=0.024). With the exception of CR, the correlations were shown to be wholly attributable to variation in PV. Multiple linear regression indicated that the combination of AL and CR provided an optimum R2 value of 79.4% for TOV. Conclusion - Clinically useful estimations of ocular volume can be obtained from measurement of AL and CR.
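The regression step described above can be sketched in a few lines: fit TOV against AL and CR by ordinary least squares and report R². The data below are synthetic (drawn around the means and SDs quoted in the abstract, with a made-up linear model and noise), not the study's measurements.

```python
import numpy as np

# Hypothetical illustration of the abstract's regression: predict total
# ocular volume (TOV, mm^3) from axial length (AL, mm) and corneal radius
# (CR, mm) by ordinary least squares. All data here are synthetic.
rng = np.random.default_rng(0)
n = 67
AL = rng.normal(24.51, 1.47, n)   # mean +/- SD reported in the abstract
CR = rng.normal(7.75, 0.28, n)
TOV = 500.0 * AL + 300.0 * CR - 6000.0 + rng.normal(0, 50, n)  # made-up model

X = np.column_stack([np.ones(n), AL, CR])       # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, TOV, rcond=None)  # least-squares coefficients
pred = X @ beta
ss_res = np.sum((TOV - pred) ** 2)
ss_tot = np.sum((TOV - TOV.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                      # coefficient of determination
print(f"R^2 = {r2:.3f}")
```

With real biometric data, the reported R² of 79.4% would come out of exactly this computation.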
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air as well as the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments is tested extensively. The method demonstrates numerous advantages over the traditional level set method, among these a heightened conservation of fluid volume and the representation of subgrid structures.
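The core level set idea the paragraph builds on can be shown in miniature: the interface is the zero contour of a signed distance function φ, and the fluid volume (area in 2D) is the measure of the region where φ < 0. The grid and radius below are illustrative choices, not taken from the dissertation.

```python
import numpy as np

# Minimal level set sketch: represent a circular interface implicitly as the
# zero contour of a signed distance function, then estimate the enclosed
# area by counting cells where phi < 0 (a first-order volume estimate).
n = 200
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.5       # signed distance to a circle, r = 0.5

h = x[1] - x[0]
area = np.count_nonzero(phi < 0) * h * h
exact = np.pi * 0.25                   # exact area pi * r^2
print(area, exact)
```

The volume loss the abstract mentions arises when φ is advected and reinitialized over many steps; gradient-augmented variants reduce it by also transporting ∇φ.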
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Abstract:
In many areas of simulation, a crucial component of efficient numerical computation is the use of solution-driven adaptive features: locally adapted meshing or re-meshing, and dynamically changing computational tasks. The full advantages of high performance computing (HPC) technology can therefore be exploited only when efficient parallel adaptive solvers are available. The resulting requirement for HPC software is dynamic load balancing, which for many mesh-based applications means dynamic mesh re-partitioning. The DRAMA project was initiated to address this issue, with a particular focus on the requirements of industrial Finite Element codes, although codes using Finite Volume formulations will also be able to make use of the project results.
Abstract:
In this article we consider the development of discontinuous Galerkin finite element methods for the numerical approximation of the compressible Navier-Stokes equations. For the discretization of the leading order terms, we propose employing the generalization of the symmetric version of the interior penalty method, originally developed for the numerical approximation of linear self-adjoint second-order elliptic partial differential equations. In order to solve the resulting system of nonlinear equations, we exploit a (damped) Newton-GMRES algorithm. Numerical experiments demonstrating the practical performance of the proposed discontinuous Galerkin method with higher-order polynomials are presented.
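The damped Newton-GMRES strategy mentioned above can be sketched on a toy problem. This is an illustrative solver, not the paper's implementation: the system, the finite-difference Jacobian, and the simple residual-based backtracking are all stand-ins for the DG discretization and analytic linearization a real code would use.

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Damped Newton iteration with a GMRES linear solve, on the small system
#   F(x) = [x0^2 + x1 - 3, x0 + x1^2 - 5] = 0, whose root is (1, 2).
def F(x):
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def jacobian(x, eps=1e-7):
    # Finite-difference Jacobian; a production code would use an analytic one.
    f0 = F(x)
    J = np.empty((x.size, x.size))
    for j in range(x.size):
        xp = x.copy(); xp[j] += eps
        J[:, j] = (F(xp) - f0) / eps
    return J

x = np.array([1.0, 1.0])
for _ in range(50):
    r = F(x)
    if np.linalg.norm(r) < 1e-10:
        break
    dx, _ = gmres(jacobian(x), -r)     # inexact Krylov solve of J dx = -F
    lam = 1.0                          # damping: halve the step until the
    while np.linalg.norm(F(x + lam * dx)) > np.linalg.norm(r) and lam > 1e-4:
        lam *= 0.5                     # residual norm decreases
    x = x + lam * dx
print(x)
```

For the compressible Navier-Stokes system the same loop applies, with the DG residual as F and a preconditioned, matrix-free GMRES for the linear solves.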
Abstract:
This work is concerned with the design and analysis of hp-version discontinuous Galerkin (DG) finite element methods for boundary-value problems involving the biharmonic operator. The first part extends the unified approach of Arnold, Brezzi, Cockburn & Marini (SIAM J. Numer. Anal. 39, 5 (2001/02), 1749-1779), developed for the Poisson problem, to the design of DG methods via an appropriate choice of numerical flux functions for fourth-order problems; as an example we retrieve the interior penalty DG method developed by Süli & Mozolevski (Comput. Methods Appl. Mech. Engrg. 196, 13-16 (2007), 1851-1863). The second part of this work is concerned with a new a priori error analysis of the hp-version interior penalty DG method, with the error measured in terms of both the energy norm and the L2-norm, as well as certain linear functionals of the solution, for elemental polynomial degrees $p\ge 2$. Moreover, provided that the solution is piecewise analytic in an open neighbourhood of each element, exponential convergence is proven for the p-version of the DG method. The sharpness of the theoretical developments is illustrated by numerical experiments.
Abstract:
This work aims to improve the validation of the numerical simulation of the two-phase flow characteristic of fluidized bed transport, through the formulation and development of a combined Finite Volume - Finite Element numerical model. To this end, the solid-gas mixture flow in a fluidized bed chamber is simulated numerically, implemented in the COMSOL code; its results compare favourably with a model based on the Discrete Element method implemented in the open-source code MFIX. The fundamental difficulties in the mathematical modelling of the fluidized bed phenomenon are the irregularity of the domain, the coupling of the variables in space and time, and the nonlinearity. In this work the conservation equations of the phenomenon are suitably reformulated so as to yield an equivalent variational problem that can be solved numerically. An equation of state is then defined in terms of the hydrodynamic pressure and the solids volume fraction, decoupling the system into three sub-problems and thus guaranteeing the existence of a solution of the general problem. Once both models are approximated numerically, their results are compared; the model presented in this article is seen to satisfy the optimal mixing conditions more effectively, as reflected in the quality of the bubbling and the mixing velocity.
Abstract:
A modeling study was completed to develop a methodology combining the sequencing and finite difference methods for the simulation of a heterogeneous model of a tubular reactor applied to the treatment of wastewater. The system comprised a liquid phase (convection-diffusion transport) and a solid phase (diffusion-reaction), obtained by completing a mass balance in the reactor and in the particle, respectively. The model was applied to a pilot-scale horizontal-flow anaerobic immobilized biomass (HAIB) reactor treating domestic sewage, and the concentration results were compared with experimental data. A comparison of the liquid-phase concentration profile with the experimental results indicated that both numerical methods offer a good description of the behavior of the concentration along the reactor. The advantage of the sequencing method over the finite difference method is that it is easier to apply and requires less computational time for the dynamic simulation of the outlet response of the HAIB reactor.
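A minimal finite-difference sketch of the liquid-phase balance such a model discretizes is steady 1D convection-diffusion with first-order decay, u dC/dx = D d²C/dx² − kC, with a fixed inlet concentration and a zero-gradient outlet. All parameter values below are illustrative, not the HAIB reactor's.

```python
import numpy as np

# Steady 1D convection-diffusion-reaction on [0, L], discretized with
# second-order central differences and solved as one linear system.
n, L = 101, 1.0
u, D, k, C_in = 0.1, 1e-3, 0.5, 1.0   # velocity, diffusivity, decay, inlet
h = L / (n - 1)

A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = 1.0; b[0] = C_in                    # Dirichlet inlet: C(0) = C_in
for i in range(1, n - 1):
    A[i, i - 1] = -D / h**2 - u / (2 * h)     # central-difference stencil
    A[i, i]     =  2 * D / h**2 + k
    A[i, i + 1] = -D / h**2 + u / (2 * h)
A[n - 1, n - 2], A[n - 1, n - 1] = -1.0, 1.0  # zero-gradient outlet
C = np.linalg.solve(A, b)
print(C[0], C[-1])                            # concentration decays downstream
```

The cell Peclet number u h / (2D) is 0.5 here, so the central scheme stays monotone; a transient code would march this balance in time instead.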
Abstract:
This paper presents results of a verification test of a Direct Numerical Simulation code of mixed high order of accuracy, using the method of manufactured solutions (MMS). The test is based on the formulation of an analytical solution for the Navier-Stokes equations modified by the addition of a source term. The numerical code was aimed at simulating the temporal evolution of instability waves in plane Poiseuille flow. The governing equations were solved in a vorticity-velocity formulation for two-dimensional incompressible flow. The code employed two different numerical schemes: one used mixed high-order compact and non-compact finite differences, from fourth- to sixth-order accuracy; the other used spectral methods instead of finite differences in the streamwise direction, which was periodic. In the present test, particular attention was paid to the boundary conditions of the physical problem of interest. Indeed, the verification procedure using MMS can be more demanding than the commonly used comparison with Linear Stability Theory, particularly because the latter test pays no attention to the nonlinear terms. For the present verification test, it was possible to manufacture an analytical solution that reproduces some aspects of an instability wave in a nonlinear stage. Although the results of the verification by MMS for this mixed-order numerical scheme had to be interpreted with care, the test was very useful, as it gave confidence that the code was free of programming errors. Copyright (C) 2009 John Wiley & Sons, Ltd.
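The MMS procedure the paper applies can be demonstrated on a deliberately simple problem (not their Navier-Stokes code): choose u(x) = sin(πx), derive the source f = π² sin(πx) that makes it an exact solution of −u″ = f, solve with second-order finite differences, and confirm the error shrinks at the expected rate O(h²).

```python
import numpy as np

# Method of manufactured solutions for -u'' = f on [0, 1], u(0) = u(1) = 0.
def solve(n):
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    f = np.pi**2 * np.sin(np.pi * x[1:-1])       # manufactured source term
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2   # standard 3-point Laplacian
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))  # error vs exact u

e1, e2 = solve(32), solve(64)
order = np.log2(e1 / e2)   # observed convergence order; should be close to 2
print(order)
```

The same recipe scales up: any manufactured solution substituted into the governing equations yields a source term, and the observed convergence order verifies the full nonlinear discretization, which is why MMS is stricter than a linear-theory comparison.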
Abstract:
An improvement to a quality two-dimensional Delaunay mesh generation algorithm, combining the mesh refinement strategies of Ruppert and Shewchuk, is proposed in this research. The developed technique uses the diametral lens criterion, introduced by L. P. Chew, to eliminate extremely obtuse triangles along the mesh boundary. The method splits the boundary segments to obtain an initial pre-refinement, thus reducing the number of iterations needed to generate a high-quality sequential triangulation. Moreover, it decreases the intensity of communication and synchronization between subdomains in parallel mesh refinement.
Abstract:
The applicability of a meshfree approximation method, namely the EFG method, to fully geometrically exact analysis of plates is investigated. Based on a unified nonlinear theory of plates, which allows for arbitrarily large rotations and displacements, a Galerkin approximation via MLS functions is established. A hybrid method of analysis is proposed, where the solution is obtained by the independent approximation of the generalized internal displacement fields and the generalized boundary tractions. A consistent linearization procedure is performed, resulting in a semi-definite generalized tangent stiffness matrix which, for hyperelastic materials and conservative loadings, is always symmetric (even for configurations far from the generalized equilibrium trajectory). Besides the total Lagrangian formulation, an updated version is also presented, which enables the treatment of rotations beyond the parameterization limit. An extension of the arc-length method that includes the generalized domain displacement fields, the generalized boundary tractions and the load parameter in the constraint equation of the hyper-ellipsis is proposed to solve the resulting nonlinear problem. Extending the hybrid-displacement formulation, a multi-region decomposition is proposed to handle complex geometries. A criterion for the classification of the equilibrium's stability, based on the Bordered-Hessian matrix analysis, is suggested. Several numerical examples are presented, illustrating the effectiveness of the method. Differently from the standard finite element methods (FEM), the resulting solutions are (arbitrarily) smooth generalized displacement and stress fields. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
The objective of the present work is to propose a numerical and statistical approach, using computational fluid dynamics, for the study of atmospheric pollutant dispersion. Modifications to the standard k-epsilon turbulence model and additional equations for the calculation of the variance of concentration are introduced to enhance the prediction of the flow field and scalar quantities. The flow field, the mean concentration and the variance of a flow over a two-dimensional triangular hill, with a finite-size point pollutant source, are calculated by a finite volume code and compared with published experimental results. A modified low-Reynolds k-epsilon turbulence model was employed in this work, using the model constant C_mu = 0.03 to take into account the inactive atmospheric turbulence. The numerical results for the velocity profiles and the position of the reattachment point are in good agreement with the experimental results. The results for the mean and the variance of the concentration are also in good agreement with experimental results from the literature. (C) 2009 Elsevier Ltd. All rights reserved.
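The direct effect of lowering C_mu from its standard value 0.09 to 0.03 is on the eddy viscosity, ν_t = C_mu k²/ε, which the modified model reduces to a third of the standard value. The k and ε values below are arbitrary illustrative numbers, not taken from the paper.

```python
# Eddy viscosity in the k-epsilon model: nu_t = C_mu * k^2 / epsilon.
def eddy_viscosity(k, eps, c_mu):
    return c_mu * k**2 / eps

k, eps = 0.5, 0.1                      # turbulent kinetic energy, dissipation
nu_std = eddy_viscosity(k, eps, 0.09)  # standard model constant
nu_mod = eddy_viscosity(k, eps, 0.03)  # modified constant used in the paper
print(nu_std, nu_mod, nu_mod / nu_std)
```

Reducing ν_t this way weakens turbulent mixing in the model, reflecting the "inactive" large-scale atmospheric motions that contribute to k without mixing momentum near the surface.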
Abstract:
Load cells are used extensively in engineering fields. This paper describes a novel structural optimization method for single- and multi-axis load cell structures. First, we briefly explain the topology optimization method that uses the solid isotropic material with penalization (SIMP) method. Next, we clarify the mechanical requirements and design specifications of the single- and multi-axis load cell structures, which are formulated as an objective function. In the case of multi-axis load cell structures, a methodology based on singular value decomposition is used. The sensitivities of the objective function with respect to the design variables are then formulated. On the basis of these formulations, an optimization algorithm is constructed using finite element methods and the method of moving asymptotes (MMA). Finally, we examine the characteristics of the optimization formulations and the resultant optimal configurations. We confirm the usefulness of our proposed methodology for the optimization of single- and multi-axis load cell structures.
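The SIMP interpolation at the heart of the topology optimization described above makes element stiffness a penalized function of a density variable ρ in [0, 1], E(ρ) = E_min + ρᵖ (E₀ − E_min), so intermediate densities are structurally uneconomical and optimal designs tend toward 0/1 layouts. The values of E₀, E_min and p below are common illustrative choices, not the paper's settings.

```python
import numpy as np

# SIMP material interpolation: penalized Young's modulus as a function of
# the element density design variable rho.
def simp_modulus(rho, E0=1.0, Emin=1e-9, p=3.0):
    return Emin + rho**p * (E0 - Emin)

rho = np.array([0.0, 0.5, 1.0])
E = simp_modulus(rho)
print(E)   # rho = 0.5 yields far less than half the stiffness (penalization)
```

In the full algorithm these element moduli enter the finite element stiffness assembly, and the sensitivities dE/dρ = p ρ^(p−1) (E₀ − E_min) feed the MMA update of the design variables.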
Abstract:
In the development of a ventricular assist device, computational fluid dynamics (CFD) analysis is an efficient tool for obtaining the best design before making the final prototype. In this study, different designs of a centrifugal blood pump were developed to investigate flow characteristics and performance. The study assumed blood to be an incompressible, homogeneous Newtonian fluid. A constant velocity was applied at the inlet, no-slip boundary conditions were applied at the device walls, and pressure boundary conditions were applied at the outlet. The CFD code used in this work was based on the finite volume method. In the future, the results of the CFD analysis can be compared with flow visualization and hemolysis tests.
Abstract:
Objectives - A highly adaptive aspect of human memory is the enhancement of explicit, consciously accessible memory by emotional stimuli. We studied the performance of Alzheimer's disease (AD) patients and elderly controls using a memory battery with emotional content, and we correlated these results with amygdala and hippocampus volume. Methods - Twenty controls and 20 early AD patients were subjected to the International Affective Picture System (IAPS) and to magnetic resonance imaging-based volumetric measurements of the medial temporal lobe structures. Results - Excluding control group subjects with 5 or more years of schooling, both groups showed improvement with pleasant or unpleasant pictures from the IAPS in an immediate free recall test. Likewise, in a delayed free recall test, both the controls and the AD group showed improvement for pleasant pictures when the education factor was not controlled. The AD group showed improvement in the immediate and delayed free recall tests proportional to the volume of the medial temporal lobe structures, with no significant clinical correlation between affective valence and amygdala volume. Conclusion - AD patients can correctly identify emotions, at least at this early stage, but this does not improve their memory performance.