32 results for 3D numerical modeling
Abstract:
A new modeling approach, multiple mapping conditioning (MMC), is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and conditional moment closure methods and is based on a certain generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the closure hypothesis of the model is demonstrated by comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.
Abstract:
Stratum corneum (SC) desorption experiments have yielded higher calculated steady-state fluxes than those obtained by epidermal penetration studies. A possible explanation of this result is a variable diffusion or partition coefficient across the SC. We therefore developed a diffusion model for percutaneous penetration and desorption to study the effects of either a variable diffusion coefficient or a variable partition coefficient in the SC over the diffusion path length. Steady-state flux, lag time, and mean desorption time were obtained from Laplace domain solutions. Numerical inversion of the Laplace domain solutions was used for simulations of solute concentration-distance and amount penetrated (desorbed)-time profiles. Diffusion and partition coefficient heterogeneity was examined using six different models. The effect of heterogeneity on predicted flux from desorption studies was compared with that obtained in permeation studies. Partition coefficient heterogeneity had a more profound effect on predicted fluxes than diffusion coefficient heterogeneity. Concentration-distance profiles show an even larger dependence on heterogeneity, which is consistent with experimental tape-stripping data reported for clobetasol propionate and other solutes. The clobetasol propionate tape-stripping data were most consistent with the partition coefficient decreasing exponentially over half the SC and then remaining constant for the rest of the SC. (C) 2004 Wiley-Liss, Inc.
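The numerical inversion of Laplace-domain solutions mentioned above is commonly done with the Gaver-Stehfest algorithm. The abstract does not name the method actually used, so the following is only a generic sketch of one standard choice:

```python
from math import factorial, log

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_1..V_N (N must be even)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j)
                     * factorial(j - 1) * factorial(k - j)
                     * factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) given its Laplace transform F(s)."""
    a = log(2.0) / t
    V = stehfest_coeffs(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))
```

For example, inverting F(s) = 1/(s + 1) at t = 1 recovers e^(-1) to several digits; Laplace-domain expressions for flux or desorption time can be inverted the same way to produce concentration-distance and amount-time profiles.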
Abstract:
Computer-aided tomography has been used for many years to provide significant information about the internal properties of an object, particularly in the medical fraternity. By reconstructing one-dimensional (1D) X-ray images, 2D cross-sections and 3D renders can provide a wealth of information about an object's internal structure. An extension of the methodology is reported here to enable the characterization of a model agglomerate structure. It is demonstrated that methods based on X-ray microtomography offer considerable potential for the validation and utilization of the distinct element method simulations that are also examined.
Abstract:
We explore both the rheology and complex flow behavior of monodisperse polymer melts. Adequate quantities of monodisperse polymer were synthesized so that both the material's rheology and microprocessing behavior could be established. In parallel, we employ a molecular theory for the polymer rheology that is suitable for comparison with experimental rheometric data and numerical simulation for microprocessing flows. The model is capable of matching both shear and extensional data with minimal parameter fitting. Experimental data for the processing behavior of monodisperse polymers are presented for the first time as flow birefringence and pressure difference data obtained using a Multipass Rheometer with an 11:1 constriction entry and exit flow. Matching of experimental processing data was obtained using the constitutive equation with the Lagrangian numerical solver, FLOWSOLVE. The results show the direct coupling between molecular constitutive response and macroscopic processing behavior, and differentiate flow effects that arise separately from orientation and stretch. (c) 2005 The Society of Rheology.
Abstract:
A critical assessment is presented of the existing fluid flow models used for dense medium cyclones (DMCs) and hydrocyclones. As the present discussion indicates, the understanding of dense medium cyclone flow is still far from complete. However, its similarity to the hydrocyclone provides a basis for improved understanding of fluid flow in DMCs. The complexity of fluid flow in DMCs is basically due to the existence of the medium as well as the dominance of turbulence, particle size, and density effects on separation. Both theoretical and experimental analyses are discussed with respect to two-phase motions and solid phase flow in hydrocyclones and DMCs. A detailed discussion is presented of the empirical, semiempirical, and numerical models available in the literature, based upon both the vorticity-stream function approach and the Navier-Stokes equations in their primitive variables and in cylindrical coordinates. The existing equations describing turbulence and multiphase flows in cyclones are also critically reviewed.
Abstract:
Due to complex field/tissue interactions, high-field magnetic resonance (MR) images suffer significant image distortions that result in compromised diagnostic quality. A new method that attempts to remove these distortions, based on the use of transceive phased arrays, is proposed in this paper. The proposed system uses, in the examples presented herein, a shielded four-element transceive phased array head coil and involves performing two separate scans of the same slice, with each scan using different excitations during transmission. By optimizing the amplitudes and phases for each scan, antipodal signal profiles can be obtained, and by combining the two images, the image distortion can be reduced severalfold. A combined hybrid method of moments (MoM)/finite element method (FEM) and finite-difference time-domain (FDTD) technique is proposed and used to elucidate the concept of the new method and to accurately evaluate the electromagnetic field (EMF) in a human head model. In addition, the proposed method is used in conjunction with the generalized autocalibrating partially parallel acquisitions (GRAPPA) reconstruction technique to enable rapid imaging of the two scans. Simulation results reported herein for 11-T (470-MHz) brain imaging applications show that the new method with GRAPPA reconstruction theoretically results in improved image quality and that the proposed combined hybrid MoM/FEM and FDTD technique is suitable for high-field magnetic resonance imaging (MRI) numerical analysis.
Abstract:
In modern magnetic resonance imaging (MRI), both patients and radiologists are exposed to strong, nonuniform static magnetic fields inside or outside of the scanner, where body movement may induce electric currents in tissues that could possibly be harmful. This paper presents theoretical investigations into the spatial distribution of induced E-fields in a human model moving at various positions around the magnet. The numerical calculations are based on an efficient, quasistatic, finite-difference scheme and an anatomically realistic, full-body, male model. 3D field profiles from an actively shielded 4 T magnet system are used, and the body model is projected through the field profile with normalized velocity. The simulation shows that it is possible to induce E-fields/currents near the level of physiological significance under some circumstances, and it provides insight into the spatial characteristics of the induced fields. The results are easy to extrapolate to very high field strengths for safety evaluation at a variety of field strengths and motion velocities.
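The quasistatic reasoning behind such calculations can be sketched in a few lines: in the frame of a body moving with velocity v through a static but spatially varying field, dB/dt = (v . grad)B, and Faraday's law then bounds the E-field induced around a conducting tissue loop. All numbers below are illustrative placeholders, not values from the paper's magnet or anatomical model:

```python
import numpy as np

# Quasistatic sketch of motion-induced fields near a magnet.
# Assumed fringe-field profile and body parameters -- illustrative only.
z = np.linspace(0.0, 2.0, 201)        # position along the magnet axis, m
B = 4.0 * np.exp(-(z - 1.0) ** 2)     # assumed axial fringe-field profile, T
v = 0.5                               # walking speed along z, m/s

dBdt = v * np.gradient(B, z)          # field rate of change seen by the moving body, T/s

r = 0.1                               # radius of a tissue loop normal to B, m
E = 0.5 * r * np.abs(dBdt)            # induced E-field magnitude, V/m (E = r/2 * dB/dt)
print(E.max())
```

Even this crude estimate shows the scaling the abstract exploits: the induced field grows linearly with both the fringe-field gradient and the motion velocity, so results extrapolate directly to other field strengths and speeds.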
Abstract:
The development of models in the Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. There is therefore a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. After being verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as typically large-scale meshes have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme, the Crank-Nicolson scheme) or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class.
Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation best suited to the particular context is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
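The three-layer separation described above can be illustrated with a toy stand-in (this is not escript itself, just a minimal sketch of the layering idea): the numerical algorithm layer exposes a generic linear solver, the mathematical layer assembles a discretized PDE, and the application layer only sets input parameters:

```python
import numpy as np

# --- numerical algorithm layer: a generic linear solver kernel ---
def solve_linear(A, b):
    # the upper layers treat this as a black box
    return np.linalg.solve(A, b)

# --- mathematical layer: assemble a 1D Poisson problem -u'' = f, u(0)=u(1)=0 ---
def poisson_1d(f, n):
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n))
         + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1)) / h**2
    x = np.linspace(h, 1.0 - h, n)
    return solve_linear(A, f(x)), x

# --- application layer: set input parameters and run the model ---
u, x = poisson_1d(lambda x: np.ones_like(x), 99)
print(u.max())   # exact solution x(1-x)/2 peaks at 0.125; this scheme is nodally exact here
```

The point of the layering is that the application code never sees the matrix, and the mathematical layer never sees how the solve is carried out, which is exactly what lets the kernel be swapped for an optimized C/C++ or finite element implementation such as Finley.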
Abstract:
A kinetic theory based Navier-Stokes solver has been implemented on a parallel supercomputer (Intel iPSC Touchstone Delta) to study the leeward flowfield of a blunt-nosed delta wing at 30-deg incidence at hypersonic speeds (similar to the proposed HERMES aerospace plane). Computational results are presented for a series of grids for both inviscid and laminar viscous flows at Reynolds numbers of 225,000 and 2.25 million. In addition, comparisons are made between the present and two independent calculations of the same flows (by L. LeToullec and P. Guillen, and S. Menne), which were presented at the Workshop on Hypersonic Flows for Re-entry Problems, Antibes, France, 1991.
Abstract:
Comparisons are made between experimental measurements and numerical simulations of ionizing flows generated in a superorbital facility. Nitrogen, with a freestream velocity of around 10 km/s, was passed over a cylindrical model, and images were recorded using two-wavelength holographic interferometry. The resulting density, electron concentration, and temperature maps were compared with numerical simulations from the Langley Research Center aerothermodynamic upwind relaxation algorithm. The results showed generally good agreement in shock location and density distributions. Some discrepancies were observed for the electron concentration, possibly because the simulations were of a two-dimensional flow, whereas the experiments were likely to have small three-dimensional effects.
Abstract:
Modeling volcanic phenomena is complicated by free surfaces that often support large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows, but more sophisticated models are needed, incorporating improved physics and rheology to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléean lava dome formation, axisymmetric Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface while leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme by considering existing analytical and experimental models of lava dome growth which assume a constant Newtonian viscosity. We then compare our model against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980, using an effective viscosity. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. Also, modeling the extruded lava with a constant pressure head naturally results in a drop in extrusion rate with increasing dome height, which can explain lava dome growth observables more appropriately than a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
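The core of the level-set technique, advecting a scalar field whose zero contour is the interface while the grid itself never changes, can be sketched in one dimension (illustrative parameters only; the paper's models are axisymmetric FEM):

```python
import numpy as np

# Minimal 1D level-set sketch: the interface is the zero contour of phi,
# advected on a fixed grid with a first-order upwind scheme.
n, v, dt = 401, 1.0, 0.001
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
phi = x - 0.2                      # signed distance; interface starts at x = 0.2

for _ in range(300):               # advance to t = 0.3
    # upwind update for v > 0: phi_t + v phi_x = 0
    phi[1:] = phi[1:] - v * dt / dx * (phi[1:] - phi[:-1])
    phi[0] = phi[1] - dx           # simple inflow extrapolation

# locate the zero level set by linear interpolation
i = np.argmax(phi > 0.0)
xi = x[i - 1] - phi[i - 1] * dx / (phi[i] - phi[i - 1])
print(xi)                          # interface has moved to ~0.5
```

The mesh is untouched throughout; only phi is updated, which is why the method stays robust as the interface deforms and why it pairs naturally with regular grids.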
Abstract:
Business process design is primarily driven by process improvement objectives. However, the role of control objectives stemming from regulations and standards is becoming increasingly important for businesses in light of recent events that led to some of the largest scandals in corporate history. As organizations strive to meet compliance agendas, there is an evident need to provide systematic approaches that assist in the understanding of the interplay between (often conflicting) business and control objectives during business process design. In this paper, our objective is twofold. We will firstly present a research agenda in the space of business process compliance, identifying major technical and organizational challenges. We then tackle a part of the overall problem space, which deals with the effective modeling of control objectives and subsequently their propagation onto business process models. Control objective modeling is proposed through a specialized modal logic based on normative systems theory, and the visualization of control objectives on business process models is achieved procedurally. The proposed approach is demonstrated in the context of a purchase-to-pay scenario.
Abstract:
Ex vivo hematopoiesis is increasingly used for clinical applications. Models of ex vivo hematopoiesis are required to better understand the complex dynamics and to optimize hematopoietic culture processes. A general mathematical modeling framework is developed which uses traditional chemical engineering metaphors to describe the complex hematopoietic dynamics. Tanks and tubular reactors are used to describe the (pseudo-)stochastic and deterministic elements of hematopoiesis, respectively. Cells at any point in the differentiation process can belong to either an immobilized, inert phase (quiescent cells) or a mobile, active phase (cycling cells). The model describes five processes: (1) flow (differentiation), (2) autocatalytic formation (growth), (3) degradation (death), (4) phase transition from immobilized to mobile phase (quiescent to cycling transition), and (5) phase transition from mobile to immobilized phase (cycling to quiescent transition). The modeling framework is illustrated with an example concerning the effect of TGF-beta 1 on erythropoiesis. (C) 1998 Published by Elsevier Science Ltd. All rights reserved.
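The five processes listed above can be written as a small ODE system per differentiation stage, with a quiescent pool Q and a cycling pool C in each stage of the tanks-in-series metaphor. The sketch below uses illustrative rate constants, not fitted values from the paper:

```python
import numpy as np

# Tanks-in-series sketch of the two-phase hematopoiesis model.
# All rate constants are illustrative placeholders (1/day).
stages = 4
a, b = 0.2, 0.1      # (4) Q->C and (5) C->Q phase transitions
g, d = 0.8, 0.05     # (2) autocatalytic growth and (3) death of cycling cells
f = 0.3              # (1) flow (differentiation) to the next stage

Q = np.zeros(stages); C = np.zeros(stages)
Q[0] = 1.0           # seed the first stage with quiescent cells
dt, T = 0.01, 10.0

for _ in range(int(T / dt)):
    inflow = np.concatenate(([0.0], f * C[:-1]))   # cells arriving from upstream
    dQ = -a * Q + b * C
    dC = a * Q - b * C + (g - d - f) * C + inflow
    Q += dt * dQ
    C += dt * dC

print(Q.sum() + C.sum())   # net expansion, since growth exceeds death
```

A cytokine effect such as that of TGF-beta 1 would enter this framework by making one or more of the rate constants (e.g. the phase-transition rates a and b) functions of concentration.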
Abstract:
A new model proposed for the gasification of chars and carbons incorporates features of the turbostratic nanoscale structure that exists in such materials. The model also considers the effect of initial surface chemistry and different reactivities perpendicular to the edges and to the faces of the underlying crystallite planes comprising the turbostratic structure. It may be more realistic than earlier models based on pore or grain structure idealizations when the carbon contains large amounts of crystallite matter. Shrinkage of the carbon particles in the chemically controlled regime is also possible due to the random complete gasification of crystallite planes. This mechanism can explain observations in the literature of particle size reduction. Based on the model predictions, both initial surface chemistry and the number of stacked planes in the crystallites strongly influence the reactivity and particle shrinkage. Test results agree well with literature data on the air oxidation of Spherocarb and show that the model accurately predicts the variation of particle size with conversion. Model parameters are determined entirely from rate measurements.
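The earlier pore-structure idealizations that the abstract contrasts with include the classical random pore model of Bhatia and Perlmutter, whose normalized rate expression dX/dt = k(1-X)sqrt(1 - psi*ln(1-X)) is easy to integrate. The values of k and psi below are illustrative, not the paper's parameters:

```python
import numpy as np

# Classical random pore model, one of the earlier pore-structure
# idealizations; k and psi are illustrative placeholders.
k, psi = 0.1, 3.0              # rate constant (1/min), structural parameter

def rate(X):
    # normalized gasification rate dX/dt at conversion X
    return k * (1.0 - X) * np.sqrt(1.0 - psi * np.log(1.0 - X))

# integrate conversion with forward Euler to t = 50 min
dt, steps = 0.01, 5000
X, history = 0.0, [0.0]
for _ in range(steps):
    X = min(X + dt * rate(X), 1.0 - 1e-12)
    history.append(X)
print(history[-1])             # conversion approaches 1 at long times
```

In this idealization the surface area term sqrt(1 - psi*ln(1-X)) carries all the structural information; the crystallite-based model in the abstract instead distinguishes edge and face reactivities and allows particle shrinkage, which this formulation cannot represent.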
Abstract:
The use of computational fluid dynamics simulations for calibrating a flush air data system is described. In particular, the flush air data system of the HYFLEX hypersonic vehicle is used as a case study. The HYFLEX air data system consists of nine pressure ports located flush with the vehicle nose surface, connected to onboard pressure transducers. After appropriate processing, surface pressure measurements can be converted into useful air data parameters. The processing algorithm requires an accurate pressure model, which relates air data parameters to the measured pressures. In the past, such pressure models have been calibrated using combinations of flight data, ground-based experimental results, and numerical simulation. We perform a calibration of the HYFLEX flush air data system using computational fluid dynamics simulations exclusively. The simulations are used to build an empirical pressure model that accurately describes the HYFLEX nose pressure distribution over a range of flight conditions. We believe that computational fluid dynamics provides a quick and inexpensive way to calibrate the air data system and is applicable to a broad range of flight conditions. When tested with HYFLEX flight data, the calibrated system is found to work well. It predicts vehicle angle of attack and angle of sideslip to accuracy levels that generally satisfy flight control requirements. Dynamic pressure is predicted to within the resolution of the onboard inertial measurement unit. We find that wind-tunnel experiments and flight data are not necessary to accurately calibrate the HYFLEX flush air data system for hypersonic flight.
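Building an empirical pressure model from CFD solutions is, at its core, a regression problem. The sketch below fits hypothetical polynomial pressure coefficients to synthetic data standing in for CFD runs; the actual HYFLEX model form is not given in the abstract:

```python
import numpy as np

# Sketch of calibrating an empirical pressure model against CFD data:
# assume a port's pressure coefficient is a low-order polynomial in
# angle of attack (alpha) and sideslip (beta), and fit by least squares.
rng = np.random.default_rng(0)
alpha = rng.uniform(-10, 10, 50)        # deg, conditions of the "CFD" runs
beta = rng.uniform(-5, 5, 50)

true_c = np.array([1.2, 0.05, -0.02, 0.003])     # hypothetical coefficients
X = np.column_stack([np.ones_like(alpha), alpha, beta, alpha**2])
cp = X @ true_c                          # pressures from the synthetic "CFD" runs

c_fit, *_ = np.linalg.lstsq(X, cp, rcond=None)
print(np.allclose(c_fit, true_c))        # noiseless data: coefficients recovered
```

In operation the fitted model would be inverted: given measured port pressures, solve for the air data parameters (alpha, beta, dynamic pressure) that best reproduce them.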