958 results for Simulation-Numerical
Abstract:
Sub-grid scale (SGS) models are required in large-eddy simulations (LES) in order to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In the following work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: one is the Dynamic eddy-viscosity model (DEVM), developed by \cite{germano1991dynamic}, while the other is the Explicit Algebraic SGS model (EASSM), by \cite{marstorp2009explicit}. In addition, some details about the implementation of the EASSM in a Pseudo-Spectral Navier-Stokes code \cite{chevalier2007simson} are presented. The performance of the two aforementioned models is investigated in the following chapters by means of LES of a channel flow at friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, with relatively coarse resolutions. Data from each simulation are compared to baseline DNS data. The results show that, in contrast to the DEVM, the EASSM has promising potential for flow prediction at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM performs and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. Therefore, the EASSM combines accuracy and computational efficiency, implying that it has a clear potential for industrial CFD usage.
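To make the eddy-viscosity idea underlying models such as the DEVM concrete, the following minimal Python sketch evaluates a basic Smagorinsky-type SGS viscosity, nu_sgs = (Cs*Delta)^2 |S|, on a 2-D periodic grid. The grid, velocity field and constant value are illustrative assumptions; the snippet is not the dynamic procedure or the EASSM implementation described in the thesis.

```python
import numpy as np

def smagorinsky_nu_sgs(u, v, dx, cs=0.17):
    """Illustrative Smagorinsky SGS viscosity on a 2-D periodic grid.

    u, v : resolved velocity components, shape (N, N)
    dx   : grid spacing (the filter width Delta is taken equal to dx)
    cs   : Smagorinsky constant (assumed value; the DEVM computes it dynamically)
    """
    # Resolved strain-rate tensor components via central differences (periodic)
    dudx = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
    dudy = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
    dvdx = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * dx)
    dvdy = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    # |S| = sqrt(2 S_ij S_ij)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag

# Toy resolved field: a Taylor-Green-like vortex on a 64x64 periodic grid
n, dx = 64, 2 * np.pi / 64
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(X) * np.sin(Y)
v = -np.sin(X) * np.cos(Y)
print(smagorinsky_nu_sgs(u, v, dx).mean())
```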
Abstract:
This work addresses the numerical simulation of quasi-static crack propagation within the adhesive layer of a bonded joint under Mode I loading, in which the stress field is affected by the thermal-chemical shrinkage induced by the cure process. Secondly, a parametric study on the critical fracture energy, the cohesive strength and Young's modulus is performed. Finally, a particular case of adhesive layer stiffening is simulated in order to qualitatively verify its main effect.
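As an illustration of the type of cohesive description commonly used in such Mode I crack-propagation analyses, the sketch below evaluates a bilinear traction-separation law defined by a cohesive strength and a critical fracture energy. The parameter values and the function are illustrative assumptions, not those of the study.

```python
import numpy as np

def bilinear_cohesive_traction(delta, sigma_max=30.0e6, g_c=300.0, k0=1.0e14):
    """Bilinear Mode I traction-separation law (illustrative parameters).

    delta     : opening displacement [m]
    sigma_max : cohesive strength [Pa]
    g_c       : critical fracture energy [J/m^2], area under the curve
    k0        : initial (penalty) stiffness [Pa/m]
    """
    delta_0 = sigma_max / k0          # opening at damage onset
    delta_f = 2.0 * g_c / sigma_max   # opening at complete failure
    delta = np.asarray(delta, dtype=float)
    t = np.where(delta <= delta_0,
                 k0 * delta,                                           # linear-elastic branch
                 sigma_max * (delta_f - delta) / (delta_f - delta_0))  # linear softening branch
    return np.clip(t, 0.0, None)      # zero traction once fully failed

openings = np.linspace(0.0, 2.5e-5, 6)
print(bilinear_cohesive_traction(openings))
```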
Abstract:
Nowadays computer simulation is used in various fields, particularly in laboratories, where it allows the exploration of data that are sometimes experimentally inaccessible. In less developed countries, where there is a need for up-to-date laboratories for carrying out practical lessons in chemistry, especially in secondary schools and some higher institutions of learning, it may permit learners to carry out experiments such as titrations without the use of laboratory materials and equipment. Computer simulations may also permit teachers to better explain the realities of practical lessons, given that computers have now become very accessible and less expensive compared to the acquisition of laboratory materials and equipment. This work aims to produce a virtual laboratory that permits the simulation of an acid-base titration and an oxidation-reduction titration with the use of synthetic images. To this end, an appropriate numerical method was used to derive flowcharts, which were then transcribed into source code in a programming language in order to produce the software.
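As a minimal example of the kind of computation such a virtual titration laboratory carries out, the sketch below computes the pH curve for a strong acid titrated with a strong base from the charge balance. The concentrations and volumes are assumed values, not those used in the software described.

```python
import numpy as np

def strong_acid_base_ph(c_acid, v_acid, c_base, v_base, kw=1.0e-14):
    """pH during titration of a strong acid with a strong base.

    Solves the charge balance [H+] - [OH-] = (n_acid - n_base) / V_total,
    which gives a quadratic in [H+] valid before, at and after equivalence.
    """
    v_total = v_acid + v_base
    excess = (c_acid * v_acid - c_base * v_base) / v_total  # net strong acid [mol/L]
    # [H+]^2 - excess*[H+] - Kw = 0  ->  positive root
    h = 0.5 * (excess + np.sqrt(excess**2 + 4.0 * kw))
    return -np.log10(h)

# 25 mL of 0.10 M HCl titrated with 0.10 M NaOH
v_base = np.linspace(0.0, 50.0e-3, 11)          # added base volume [L]
ph = strong_acid_base_ph(0.10, 25.0e-3, 0.10, v_base)
for v, p in zip(v_base, ph):
    print(f"{v*1000:5.1f} mL  pH = {p:5.2f}")
```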
Abstract:
Generalized linear mixed models with semiparametric random effects are useful in a wide variety of Bayesian applications. When the random effects arise from a mixture of Dirichlet process (MDP) model, normal base measures and Gibbs sampling procedures based on the Pólya urn scheme are often used to simulate posterior draws. These algorithms are applicable in the conjugate case when (for a normal base measure) the likelihood is normal. In the non-conjugate case, the algorithms proposed by MacEachern and Müller (1998) and Neal (2000) are often applied to generate posterior samples. Some common problems associated with simulation algorithms for non-conjugate MDP models include convergence and mixing difficulties. This paper proposes an algorithm based on the Pólya urn scheme that extends the Gibbs sampling algorithms to non-conjugate models with normal base measures and exponential family likelihoods. The algorithm proceeds by making Laplace approximations to the likelihood function, thereby reducing the procedure to that of conjugate normal MDP models. To ensure the validity of the stationary distribution in the non-conjugate case, the proposals are accepted or rejected by a Metropolis-Hastings step. In the special case where the data are normally distributed, the algorithm is identical to the Gibbs sampler.
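The core idea of the proposed sampler, approximating an exponential-family likelihood by a normal via a Laplace expansion so that the conjugate normal machinery applies and then correcting with a Metropolis-Hastings step, can be illustrated in a stripped-down single-parameter setting. The sketch below uses a Poisson likelihood and a single normal prior standing in for the normal base measure; it is only a schematic illustration, not the full Pólya-urn MDP algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta, y, m, s2):
    """Exact log posterior: Poisson(y | exp(theta)) x Normal(theta | m, s2), up to a constant."""
    return y * theta - np.exp(theta) - 0.5 * (theta - m) ** 2 / s2

def laplace_normal_approx(y, theta0, iters=20):
    """Laplace approximation to the Poisson likelihood in theta = log(mean):
    Newton iterations to the mode, then a normal with the negative inverse Hessian."""
    theta = theta0
    for _ in range(iters):
        grad = y - np.exp(theta)          # d/dtheta of the Poisson log-likelihood
        hess = -np.exp(theta)             # second derivative
        theta -= grad / hess
    return theta, -1.0 / hess             # mode and variance of the normal approximation

def mh_step(theta_curr, y, m, s2):
    """One independence Metropolis-Hastings step using the Laplace-based conjugate proposal."""
    mu_l, var_l = laplace_normal_approx(y, theta0=np.log(y + 0.5))
    # Combine the approximate normal likelihood with the normal prior (conjugate update)
    var_p = 1.0 / (1.0 / var_l + 1.0 / s2)
    mu_p = var_p * (mu_l / var_l + m / s2)
    prop = rng.normal(mu_p, np.sqrt(var_p))
    # MH correction so that the exact posterior is the stationary distribution
    def log_q(t):
        return -0.5 * (t - mu_p) ** 2 / var_p
    log_alpha = (log_post(prop, y, m, s2) - log_post(theta_curr, y, m, s2)
                 + log_q(theta_curr) - log_q(prop))
    return prop if np.log(rng.uniform()) < log_alpha else theta_curr

theta = 0.0
for _ in range(5):
    theta = mh_step(theta, y=7, m=0.0, s2=1.0)
print(theta)
```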
Abstract:
A Reynolds-stress turbulence model has been successfully incorporated into the KIVA code, a computational fluid dynamics hydrocode for three-dimensional simulation of fluid flow in engines. The newly implemented Reynolds-stress turbulence model greatly improves the robustness of KIVA, which in its original version has only eddy-viscosity turbulence models. Validation of the Reynolds-stress turbulence model is accomplished by conducting pipe-flow and channel-flow simulations and comparing the computed results with experimental and direct numerical simulation data. Flows in engines with various geometries and operating conditions are calculated using the model, both to study the complex flow fields and to confirm the model's validity. Results show that the Reynolds-stress turbulence model is able to resolve flow details such as swirl and recirculation bubbles. The model proves to be an appropriate choice for engine simulations, being consistent and robust while requiring relatively low computational effort.
Abstract:
A method for the introduction of strong discontinuities into a mesh will be developed. This method, applicable to a number of eXtended Finite Element Methods (XFEM) with intra-element strong discontinuities, will be demonstrated with one specific method: the Generalized Cohesive Element (GCE) method. The algorithm utilizes a subgraph mesh representation and may insert the GCE either adaptively during the course of the analysis or a priori. Using this subgraphing algorithm, the insertion time is O(n) in the number of insertions. Numerical examples are presented demonstrating the advantages of the subgraph insertion method.
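To illustrate the kind of mesh bookkeeping involved when a cohesive element is inserted between two bulk elements, the sketch below duplicates the nodes of a shared facet in a tiny two-triangle mesh stored as element-to-node lists. The data layout and function name are hypothetical and are not the GCE subgraph representation itself.

```python
def insert_cohesive_element(elements, coords, elem_a, elem_b):
    """Insert a zero-thickness cohesive element along the facet shared by
    elem_a and elem_b by duplicating the shared nodes for elem_b.

    elements : list of node-index lists, one per bulk element (modified in place)
    coords   : list of node coordinates (duplicated nodes are appended)
    Returns the connectivity of the new cohesive element
    (original facet nodes followed by their duplicates).
    """
    shared = [n for n in elements[elem_a] if n in elements[elem_b]]
    duplicates = {}
    for n in shared:
        coords.append(list(coords[n]))            # geometrically coincident copy
        duplicates[n] = len(coords) - 1
    # Re-wire elem_b (and only elem_b) to the duplicated nodes
    elements[elem_b] = [duplicates.get(n, n) for n in elements[elem_b]]
    return shared + [duplicates[n] for n in shared]

# Two triangles sharing the edge (1, 2)
coords = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
elements = [[0, 1, 2], [1, 3, 2]]
cohesive = insert_cohesive_element(elements, coords, 0, 1)
print(elements)   # the second triangle now references the duplicated nodes
print(cohesive)   # [1, 2, 4, 5]
```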
Abstract:
Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous efforts have been devoted to the development of accurate models over the last fifty years, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often show large errors in comparison to experimental data. Thus, even nowadays, the process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time consuming. In order to reduce the time and experimental work required for optimizing the process parameters and the geometry of the extruder channel for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago; however, that work had only limited success because of the capability of the computers and mathematical algorithms available at that time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC. In order to verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed. This experimental study included Maddock screw-freezing experiments, Screw Simulator experiments and material characterization experiments. Maddock screw-freezing experiments were performed in order to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the channel depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
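The closing optimization step can be pictured as a constrained search over the two screw-geometry parameters. The sketch below runs a simple grid search in which output_rate and exit_temperature are hypothetical stand-in surrogate functions; the actual two-phase flow solver and optimization code of this work are not reproduced here.

```python
import itertools

def output_rate(lead, depth):
    """Hypothetical surrogate for the extruder output rate [kg/h]."""
    return 120.0 * lead * depth / (1.0 + 0.5 * depth**2)

def exit_temperature(lead, depth):
    """Hypothetical surrogate for the melt temperature at the extruder exit [C]."""
    return 180.0 + 60.0 * lead - 100.0 * depth

def optimize_screw(leads, depths, t_max=240.0):
    """Maximize the output rate subject to the exit-temperature constraint."""
    best = None
    for lead, depth in itertools.product(leads, depths):
        if exit_temperature(lead, depth) > t_max:
            continue                      # constraint violated, skip this geometry
        rate = output_rate(lead, depth)
        if best is None or rate > best[0]:
            best = (rate, lead, depth)
    return best

leads = [0.8, 0.9, 1.0, 1.1, 1.2]         # screw lead (pitch), in screw diameters
depths = [0.05, 0.10, 0.15, 0.20]         # metering-channel depth, in screw diameters
print(optimize_screw(leads, depths))      # best feasible (rate, lead, depth)
```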
Abstract:
This technical report discusses the application of the Lattice Boltzmann Method (LBM) to fluid flow simulation through the porous filter wall of a disordered medium. The diesel particulate filter (DPF) is an example of such a disordered medium. The DPF has been developed as a cutting-edge technology to reduce harmful particulate matter in engine exhaust: the porous filter wall of the DPF traps soot particles during the after-treatment of the exhaust gas. To examine the phenomena inside the DPF, researchers are turning to the lattice Boltzmann method as a promising alternative simulation tool. The lattice Boltzmann method is a comparatively new numerical scheme and can be used to simulate single-component single-phase and single-component multi-phase fluid flow; it is also an excellent method for modelling flow through disordered media. The current work focuses on single-phase fluid flow simulation inside a porous micro-structure using the LBM. Firstly, the theory behind the development of the LBM is discussed. The evolution of the LBM is commonly related to lattice gas cellular automata (LGCA), but it is also shown that the method is a special discretized form of the continuous Boltzmann equation. Since all simulations are conducted in two dimensions, the equations are developed with reference to the D2Q9 (two-dimensional, nine-velocity) model. An artificially created porous micro-structure is used in this study, and the flow simulations are conducted considering air and CO2 as the fluids. The numerical model used in this study is explained with a flowchart and the coding steps; the numerical code is written in MATLAB. Different types of boundary conditions and their importance are discussed separately, and the equations specific to the boundary conditions are derived. The pressure and velocity contours over the porous domain are studied and recorded, and the results are compared with published work. The permeability values obtained in this study can be fitted to the relation proposed by Nabovati [8], and the results are in excellent agreement within the porosity range of 0.4 to 0.8.
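For orientation, the minimal Python sketch below performs D2Q9 BGK collision and streaming with simple bounce-back at solid nodes on a toy periodic geometry. The lattice velocities and weights are the standard D2Q9 values, while the geometry, relaxation time and initialization are simplified assumptions; this is not the report's MATLAB implementation.

```python
import numpy as np

# Standard D2Q9 lattice: discrete velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]          # opposite directions for bounce-back

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distributions for density rho and velocity (ux, uy)."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, solid, tau=0.8):
    """One BGK collision + streaming step; solid nodes reflect populations (bounce-back)."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f_post = f - (f - equilibrium(rho, ux, uy)) / tau      # BGK collision
    f_post[:, solid] = f[opp][:, solid]                    # reflect pre-collision populations at solids
    for k in range(9):                                     # streaming on a periodic lattice
        f[k] = np.roll(np.roll(f_post[k], c[k, 0], axis=0), c[k, 1], axis=1)
    return f

nx = ny = 32
solid = np.zeros((nx, ny), dtype=bool)
solid[12:20, 12:20] = True                                 # toy obstacle standing in for the pore wall
f = equilibrium(np.ones((nx, ny)), np.full((nx, ny), 0.05), np.zeros((nx, ny)))
for _ in range(100):
    f = lbm_step(f, solid)
print(f.sum())                                             # total mass is conserved
```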
Abstract:
This technical report discusses the application of the Lattice Boltzmann Method (LBM) and Cellular Automata (CA) to the simulation of fluid flow and particle deposition. The current work focuses on the simulation of incompressible flow past cylinders, in which the LBM D2Q9 model and CA techniques are used to simulate the fluid flow and the particle loading, respectively. For the LBM part, the theory of the boundary conditions is studied and verified using a Poiseuille flow test. For the CA part, several models for the simulation of particles are explained, and a new Digital Differential Analyzer (DDA) algorithm is introduced to simulate particle motion in the Boolean model. The numerical results are compared with a previous probability velocity model by Masselot [Masselot 2000] and show satisfactory agreement.
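The DDA mentioned here is a classic grid-traversal scheme. The sketch below lists the lattice cells visited by a particle moving along a straight displacement, stepping in equal increments along the dominant direction; it is a generic, illustrative DDA rather than the Boolean-model implementation of the report.

```python
def dda_cells(x0, y0, x1, y1):
    """Cells of a unit lattice visited along the segment (x0, y0) -> (x1, y1),
    sampled with a classic DDA: equal increments along the longer axis."""
    dx, dy = x1 - x0, y1 - y0
    steps = int(max(abs(dx), abs(dy))) or 1
    x_inc, y_inc = dx / steps, dy / steps
    cells, x, y = [], x0, y0
    for _ in range(steps + 1):
        cell = (int(x), int(y))
        if not cells or cells[-1] != cell:   # avoid duplicates when staying in a cell
            cells.append(cell)
        x += x_inc
        y += y_inc
    return cells

# A particle advected from (0.5, 0.5) by a displacement of (6.2, 2.9) in one CA update
print(dda_cells(0.5, 0.5, 6.7, 3.4))
```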
Abstract:
This contribution presents a novel conveying principle for the resilient pick-up and transport of parcel structures arriving in bulk. The conveying principle is based on a planar carrying medium in the form of a reconfigurable, elastic assembly of small-scale conveyor modules. The proposed transport principle, with its peristaltic properties, is intended to quickly dissolve emerging parcel jams and to allow dedicated control of subsets of parcels in order to achieve the required throughput within a material flow system. This solution enables a sensible combination of the operating principles of bulk-material and piece-goods conveying for the pick-up and transport of parcels as bulk material. The basic functionality of the conveying concept is verified by numerical simulation based on the discrete element method as well as multibody simulation.
Abstract:
BACKGROUND Microvascular anastomosis is the cornerstone of free tissue transfers. Irrespective of the microsurgical technique that one seeks to integrate or improve, the time commitment in the laboratory is significant. After extensive previous training on several animal models, we sought to identify an animal model that circumvents the following issues: ethical rules, cost, the time-consuming and expensive anesthesia and surgical preparation of tissues required to access vessels before the microsurgical training itself, not to mention that laboratories are closed on weekends. METHODS Between January 2012 and April 2012, a total of 91 earthworms were used for 150 microsurgical training exercises simulating vascular end-to-side microanastomosis. The training sessions were divided into ten periods of 7 days. Each training period included 15 simulations of end-to-side vascular microanastomoses: diameter larger than 1.5 mm (n=5), between 1.0 and 1.5 mm (n=5), and smaller than 1.0 mm (n=5). A linear model with the number of weeks as a numerical covariate and the size of the animal as a factor was used to determine the trend in anastomosis time over subsequent weeks as well as the differences between the size groups. RESULTS The linear model shows a significant trend (p<0.001) in anastomosis time over the course of the training, as well as significant differences (p<0.001) between the groups of animals of different sizes. For microanastomoses larger than 1.5 mm, the mean anastomosis time decreased from 19.3±1.0 to 11.1±0.4 min between the first and last week of training (a decrease of 42.5%). For training with smaller diameters, the results showed a decrease in execution time of 43.2% (diameter between 1.0 and 1.5 mm) and 40.9% (diameter<1.0 mm) between the first and last periods. The study demonstrates an improvement in dexterity and in the speed of knot tying. CONCLUSION The earthworm appears to be a reliable experimental model for microsurgical training of end-to-side microanastomoses. Its numerous advantages are discussed here, and we predict that training on earthworms will grow and develop significantly in the near future. LEVEL OF EVIDENCE III This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
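The statistical model described in the METHODS section, anastomosis time regressed on week (numerical covariate) and size group (factor), can be sketched as follows with synthetic data that merely mimic the reported trend; the actual study data are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic illustration only: anastomosis times (min) over 10 weekly periods
# for three vessel-size groups, loosely mimicking the reported downward trend.
records = []
for week in range(1, 11):
    for size, base in [(">1.5mm", 19.0), ("1.0-1.5mm", 22.0), ("<1.0mm", 25.0)]:
        for _ in range(5):
            time_min = base - 0.9 * week + rng.normal(0, 0.8)
            records.append({"week": week, "size_group": size, "time_min": time_min})
df = pd.DataFrame(records)

# Linear model: week as a numerical covariate, size group as a factor
model = smf.ols("time_min ~ week + C(size_group)", data=df).fit()
print(model.params)        # slope per week and group offsets
print(model.pvalues)
```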
Abstract:
The planning of refractive surgical interventions is a challenging task. Numerical modeling has been proposed as a means to support surgical intervention and predict visual acuity, but validation on patient-specific interventions has been missing. The purpose of this study was to validate numerical predictions of the post-operative corneal topography induced by the incisions required for cataract surgery. The corneal topography of 13 patients was assessed preoperatively and postoperatively (1-day and 30-day follow-up) with a Pentacam tomography device. The preoperatively acquired corneal geometry (anterior and posterior topography and pachymetry data) was used to build patient-specific finite element models. For each patient, the effects of the cataract incisions were simulated numerically and the resulting corneal surfaces were compared to the clinical postoperative measurements at the 1-day and 30-day follow-ups. The model was able to reproduce the experimental measurements with an error on the surgically induced sphere of 0.38 D one day postoperatively and 0.19 D 30 days postoperatively. The standard deviation of the surgically induced cylinder was 0.54 D at the first postoperative day and 0.38 D 30 days postoperatively. The prediction errors in surface elevation and curvature were below the accuracy of the topography measurement device (±5 μm and ±0.25 D) at the 30-day follow-up. These results show that finite element simulations of corneal biomechanics are able to predict the outcome of cataract surgery within the accuracy of the topography measurement device. We conclude that numerical simulation can become a valuable tool for planning corneal incisions in cataract surgery and other ophthalmosurgical procedures in order to optimize patients' refractive outcome and visual function.
Abstract:
Numerical simulation experiments give insight into the evolving energy partitioning during high-strain torsion experiments on calcite. Our numerical experiments are designed to derive a generic macroscopic grain-size-sensitive flow law capable of describing the full evolution from the transient regime to steady state. The transient regime is crucial for understanding the importance of microstructural processes that may lead to strain localization phenomena in deforming materials. This is particularly important in geological and geodynamic applications, where the phenomenon of strain localization happens outside the time frame that can be observed under controlled laboratory conditions. Our method is based on an extension of the paleowattmeter approach to the transient regime. We add an empirical hardening law using the Ramberg-Osgood approximation and assess the experiments by an evolution test function of stored over dissipated energy (the lambda factor). Parameter studies of strain hardening, the dislocation creep parameters, strain rate, temperature and the lambda factor, as well as mesh sensitivity, are presented to explore the sensitivity of the newly derived transient/steady-state flow law. Our analysis can be seen as one of the first steps in a hybrid computational-laboratory-field modeling workflow. The analysis could be improved through verification by thermographic analysis in physical laboratory experiments, in order to independently assess the evolution of the lambda factor under laboratory conditions.
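As a pointer to the quantities involved, the sketch below evaluates a Ramberg-Osgood stress-strain relation and a lambda factor taken as the ratio of stored elastic energy to dissipated energy along a monotonic loading path. The material constants and the simple energy bookkeeping are illustrative assumptions, not the calibrated transient flow law derived in the study.

```python
import numpy as np

def ramberg_osgood_strain(stress, e_mod=80e9, sigma0=100e6, alpha=0.5, n=5.0):
    """Total strain at a given stress: elastic part plus Ramberg-Osgood plastic part."""
    return stress / e_mod + alpha * (sigma0 / e_mod) * (stress / sigma0) ** n

def lambda_factor(stress, e_mod=80e9, **kw):
    """Ratio of stored elastic energy to dissipated energy along a monotonic load path."""
    s = np.concatenate(([0.0], stress))                      # integrate from zero load
    eps = ramberg_osgood_strain(s, e_mod=e_mod, **kw)
    work = np.cumsum(0.5 * (s[1:] + s[:-1]) * np.diff(eps))  # total strain work (trapezoid rule)
    stored = 0.5 * stress**2 / e_mod                         # recoverable elastic energy
    dissipated = np.maximum(work - stored, 1e-12)            # remainder, floored to avoid division by zero
    return stored / dissipated

stress = np.linspace(1e6, 300e6, 50)                         # monotonic loading [Pa]
lam = lambda_factor(stress)
print(lam[[0, 24, -1]])   # lambda drops as plastic dissipation starts to dominate
```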
Abstract:
Is numerical mimicry a third way of establishing truth? Kevin Heng received his M.S. and Ph.D. in astrophysics from the Joint Institute for Laboratory Astrophysics (JILA) and the University of Colorado at Boulder. He joined the Institute for Advanced Study in Princeton from 2007 to 2010, first as a Member and later as the Frank & Peggy Taplin Member. From 2010 to 2012 he was a Zwicky Prize Fellow at ETH Zürich (the Swiss Federal Institute of Technology). In 2013, he joined the Center for Space and Habitability (CSH) at the University of Bern, Switzerland, as a tenure-track assistant professor, where he leads the Exoplanets and Exoclimes Group. He has worked on, and maintains, a broad range of interests in astrophysics: shocks, extrasolar asteroid belts, planet formation, fluid dynamics, brown dwarfs and exoplanets. He coordinates the Exoclimes Simulation Platform (ESP), an open-source set of theoretical tools designed for studying the basic physics and chemistry of exoplanetary atmospheres and climates (www.exoclime.org). He is involved in the CHEOPS (Characterizing Exoplanet Satellite) space telescope, a mission approved by the European Space Agency (ESA) and led by Switzerland. He spends a fair amount of time humbly learning the lessons gleaned from studying the Earth and Solar System planets, as related to him by atmospheric, climate and planetary scientists. He received a Sigma Xi Grant-in-Aid of Research in 2006.
Abstract:
Aging societies suffer from an increasing incidence of bone fractures. Bone strength depends on the amount of mineral measured by clinical densitometry, but also on the micromechanical properties of the bone hierarchical organization. A good understanding has been reached for elastic properties on several length scales, but up to now there is a lack of reliable postyield data on the lower length scales. In order to be able to describe the behavior of bone at the microscale, an anisotropic elastic-viscoplastic damage model was developed using an eccentric generalized Hill criterion and nonlinear isotropic hardening. The model was implemented as a user subroutine in Abaqus and verified using single element tests. An FE simulation of microindentation in lamellar bone was finally performed, showing that the new constitutive model can capture the main characteristics of the indentation response of bone. As the generalized Hill criterion is limited to elliptical and cylindrical yield surfaces and the correct shape for bone is not known, a new yield surface was developed that takes any convex quadratic shape. The main advantage is that in the case of material identification the shape of the yield surface does not have to be anticipated; a minimization instead yields the optimal shape among all convex quadrics. The generality of the formulation was demonstrated by showing its degeneration to classical yield surfaces, and existing yield criteria for bone at multiple length scales were converted to the quadric formulation. Then, a computational study was performed to determine the influence of yield surface shape and damage on the indentation response of bone using spherical and conical tips. The constitutive model was adapted to the quadric criterion, and the yield surface shape and critical damage were varied. They were shown to have a major impact on the indentation curves. Their influence on the indentation modulus, the hardness, their ratio, as well as the elastic-to-total work ratio, was found to be very well described by multilinear regressions for both tip shapes. For conical tips, indentation depth was not a significant factor, while for spherical tips damage was insignificant. All inverse methods based on microindentation suffer from a lack of uniqueness of the identified material properties in the case of nonlinear material behavior. Therefore, monotonic and cyclic micropillar compression tests in a scanning electron microscope, which allow a straightforward interpretation, complemented by microindentation and macroscopic uniaxial compression tests, were performed on dry ovine bone to identify modulus, yield stress, plastic deformation, damage accumulation and failure mechanisms. While the elastic properties were highly consistent, the postyield deformation and failure mechanisms differed between the two length scales. A majority of the micropillars showed a ductile behavior with strain hardening until failure by localization in a slip plane, while the macroscopic samples failed in a quasi-brittle fashion with microcracks coalescing into macroscopic failure surfaces. In agreement with a proposed rheological model, these experiments illustrate a transition from a ductile mechanical behavior of bone at the microscale to a quasi-brittle response driven by the growth of preexisting cracks along interfaces or in the vicinity of pores at the macroscale. Subsequently, a study was undertaken to quantify the topological variability of indentations in bone and examine its relationship with mechanical properties.
Indentations were performed in dry human and ovine bone in axial and transverse directions and their topography was measured by AFM. Statistical shape modeling of the residual imprint allowed a mean shape to be defined and the variability to be described with 21 principal components related to imprint depth, surface curvature and roughness. The indentation profile of bone was highly consistent and free of any pile-up. A few of the topological parameters, in particular depth, showed significant correlations with variations in mechanical properties, but the correlations were not very strong or consistent. We could thus verify that bone is rather homogeneous in its micromechanical properties and that indentation results are not strongly influenced by small deviations from the ideal case. As the uniaxial properties measured by micropillar compression are in conflict with the current literature on bone indentation, another dissipative mechanism has to be present. The elastic-viscoplastic damage model was therefore extended to viscoelasticity. The viscoelastic properties were identified from macroscopic experiments, while the quasistatic postelastic properties were extracted from micropillar data. It was found that viscoelasticity governed by macroscale properties has very little influence on the indentation curve and results in a clear underestimation of the creep deformation. Adding viscoplasticity leads to increased creep, but the hardness is still highly overestimated. It was possible to obtain a reasonable fit with the experimental indentation curves for both Berkovich and spherical indentation when abandoning the assumption that the shear strength is governed by an isotropy condition. These results remain to be verified by independent tests probing the micromechanical strength properties in tension and shear. In conclusion, in this thesis several tools were developed to describe the complex behavior of bone on the microscale, and experiments were performed to identify its material properties. Micropillar compression highlighted a size effect in bone due to the presence of preexisting cracks and of pores or interfaces like cement lines. It was possible to obtain a reasonable fit between experimental indentation curves using different tips and simulations using the constitutive model and the uniaxial properties measured by micropillar compression. Additional experimental work is necessary to identify the exact nature of the size effect and the mechanical role of interfaces in bone. Deciphering the micromechanical behavior of lamellar bone, its evolution with age, disease and treatment, and its failure mechanisms on several length scales will help prevent fractures in the elderly in the future.
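To illustrate what a general convex quadric yield criterion looks like in practice, the sketch below evaluates f(sigma) = sigma^T A sigma + b^T sigma - 1 for a stress state in Voigt notation and checks convexity via the eigenvalues of the quadratic coefficient matrix. The coefficient values are placeholders chosen only for illustration and are not the bone parameters identified in the thesis.

```python
import numpy as np

def quadric_yield(sigma, A, b):
    """General quadric yield function in Voigt notation:
    f(sigma) = sigma^T A sigma + b^T sigma - 1  (f < 0: elastic, f = 0: yield)."""
    sigma = np.asarray(sigma, dtype=float)
    return sigma @ A @ sigma + b @ sigma - 1.0

def is_convex(A, tol=1e-12):
    """The quadric is convex iff the (symmetrized) quadratic coefficient matrix is positive semidefinite."""
    return np.all(np.linalg.eigvalsh(0.5 * (A + A.T)) >= -tol)

# Placeholder coefficients (units 1/MPa^2 and 1/MPa), loosely shaped like a
# tension-compression asymmetric criterion; NOT identified bone parameters.
s0 = 100.0                                 # reference strength scale [MPa]
A = np.diag([1.0, 1.0, 0.6, 3.0, 3.0, 3.0]) / s0**2
b = np.array([0.2, 0.2, 0.1, 0.0, 0.0, 0.0]) / s0

sigma_axial = np.array([0.0, 0.0, -120.0, 0.0, 0.0, 0.0])   # uniaxial compression [MPa]
print(is_convex(A))                        # True: admissible convex quadric
print(quadric_yield(sigma_axial, A, b))    # negative -> still elastic at this stress state
```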