996 results for Source Terms
Abstract:
Universe evolution, as described by Friedmann's equations, is determined by source terms fixed by the choice of the equation of state p(ρ) relating pressure and energy density. The usual approach in cosmology considers equations of state accounting only for kinematic terms, ignoring the contribution from the interactions between the particles constituting the source fluid. In this work the importance of these neglected terms is emphasized, and a systematic method, based on the statistical mechanics of real fluids, is proposed to include them. A toy model shows how such interaction terms could engender significant cosmological effects.
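For reference, a standard form of Friedmann's equations with the source terms fixed by the equation of state (textbook notation, assumed rather than quoted from the paper):

\[
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{k}{a^{2}},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p),
\]

together with the continuity equation \(\dot\rho + 3(\dot a/a)(\rho + p) = 0\), closed by the chosen equation of state \(p = p(\rho)\); the interaction contributions discussed above would correct \(p(\rho)\) away from its purely kinematic form.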
Abstract:
The environmental impact of systems managing large (kilogram-scale) tritium inventories is a matter of public scrutiny for upcoming fusion facilities such as ITER and DEMO. Furthermore, potential new dose limits imposed by international regulations (ICRP) may affect the designs of upcoming devices and the overall cost of deploying fusion technology. Refined schemes for assessing the environmental tritium dose impact are therefore pressing. Detailed assessments can be obtained from knowledge of the real boundary conditions of the primary phase of tritium discharge into the atmosphere (at low levels) and into soils. Lagrangian dispersion models using real-time meteorological and topographic data provide a strong refinement, and advanced simulation tools are being developed to this end. The tool described here integrates numerical model output records from the European Centre for Medium-Range Weather Forecasts (ECMWF) with a Lagrangian atmospheric dispersion model (FLEXPART). The results of the composite ECMWF/FLEXTRA model can be coupled with assessment tools for the secondary-phase tritium dose pathways. Source terms have been assumed for nominal operational tritium discharges and for selected incidental releases, in the tritium forms of ITER-like plant systems. The real-time daily data and mesh-refined records, together with the Lagrangian dispersion model approach, provide accurate results for doses to the population by inhalation or ingestion in the secondary phase.
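As a rough illustration of the Lagrangian dispersion idea (not the FLEXPART implementation; the wind field, diffusivity and release parameters below are hypothetical), a minimal random-walk sketch in Python:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, spatially uniform wind and turbulent diffusivity; a real run
# would interpolate ECMWF fields in space and time at each particle position.
wind = np.array([3.0, 1.0])    # mean horizontal wind [m/s]
k_turb = 50.0                  # horizontal eddy diffusivity [m^2/s]
dt, n_steps, n_particles = 60.0, 180, 10_000   # 3 h of 1-min steps

# All particles released at the (hypothetical) stack location.
pos = np.zeros((n_particles, 2))
for _ in range(n_steps):
    # Advection by the mean wind plus a Gaussian random-walk turbulent step.
    pos += wind * dt + rng.normal(0.0, np.sqrt(2.0 * k_turb * dt),
                                  size=pos.shape)

# Time-integrated near-ground concentrations at receptors would feed the
# secondary-phase (inhalation/ingestion) dose assessment.
print("plume centroid [m]:", pos.mean(axis=0))
print("plume spread (std) [m]:", pos.std(axis=0))
```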
Abstract:
Since 2004 the ARWEN code has been successfully used to simulate and design targets for inertial confinement fusion experiments [146], laboratory astrophysics [145], plasmas as X-ray laser amplifiers [107], and laser-created plasmas for the measurement of transmission spectra. To perform these simulations it is necessary, in addition to accurate high-order methods with good conservation properties, to know certain properties of the plasmas: for fluid dynamics and electronic conduction we need the equation of state [94, 49, 36], and for radiation transport the absorption and emission data [104, 95, 40]. Until 2009 ARWEN depended on external codes to generate these opacity tables, lacking control over the methods used in their generation. Moreover, these codes assumed local thermodynamic equilibrium (LTE), limiting their validity to high densities and low temperatures. As part of this thesis the BIGBART code has been developed to generate detailed opacity and emissivity tables for use in the radiation transport module, giving the group its own tool for generating radiative properties. The code can treat plasmas out of equilibrium (non-LTE) by means of a collisional-radiative model, extending the range of validity of the generated tables. The work carried out to implement the steady-state LTE/non-LTE code is as follows:
- Calculation of atomic structure and data. The FAC code was coupled to BIGBART, incorporating the ability to generate atomic potentials for a configuration and to calculate wave functions for electrons in bound and free orbitals.
- Approximations and methods for obtaining rates and cross sections of processes. The models implemented in FAC for photoionization cross sections and for spontaneous-emission and autoionization decay rates were included and reprogrammed in Fortran. The Plane-Wave Born approximation (PWBA) was also included to calculate collisional ionization and excitation cross sections.
- Models for the distribution of ionic states within the plasma. An LTE solver based on the Saha-Boltzmann equation, with pressure-ionization effects due to adjacent ions, was programmed. A non-LTE collisional-radiative model was also implemented to solve the system of equations that yields the density of ionic states out of equilibrium.
- The non-LTE RADIOM model. RADIOM was implemented to approximate non-equilibrium effects through LTE calculations at an equivalent temperature, lower than or equal to the actual electron temperature.
- Calculation of the spectral absorption and emission properties. Models were implemented for the spectral profiles of absorption and emission in bound-bound, bound-free and free-free processes.
Building on this work, BIGBART was extended during the course of the thesis to treat time-dependent problems, with the extension oriented to the numerical simulation of the interaction of ultra-intense lasers in the XUV/X-ray range with matter. In addition to adapting the non-LTE collisional-radiative model, additional processes associated with the interaction of matter with highly energetic photons were included, together with models for the optical properties, and hence the dielectric properties, of the irradiated material, which are of great interest in some novel applications of these intense lasers. Owing to the strongly non-equilibrium nature of the interaction of high-energy photons with matter, the out-of-equilibrium distribution of free electrons was treated in the Fokker-Planck approximation, for both degenerate and non-degenerate conditions. The work carried out on the time-dependent non-LTE code is as follows:
- Processes associated with intense XUV/X-ray lasers. The calculation of laser-stimulated radiative absorption and emission processes was implemented, along with processes associated with the creation of inner-shell electron vacancies (Shake), double autoionization and double photoionization.
- Calculation of optical and dielectric properties in solid targets. A model for inverse-bremsstrahlung absorption in solid targets was implemented. From the extinction coefficient due to resonant photoabsorption, photoionization and inverse bremsstrahlung, the refractive index is obtained through the Kramers-Kronig relation.
- Electrons out of equilibrium. The evolution of the electron distribution, when a Maxwellian or Fermi-Dirac form cannot be justified, is treated with the Fokker-Planck approximation for collisions between free electrons. The solution of the Fokker-Planck equation includes inelastic processes from collisions with ions, and source terms from interaction with the laser and other processes.
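For context, the LTE solver mentioned above rests on the Saha-Boltzmann relation, quoted here in its standard textbook form (notation assumed, not taken from the thesis); the pressure-ionization effect enters as a lowering \(\Delta\chi_i\) of the ionization potential:

\[
\frac{n_{i+1}\, n_e}{n_i} = \frac{2\, U_{i+1}(T)}{U_i(T)} \left(\frac{2\pi m_e k_B T}{h^2}\right)^{3/2} \exp\!\left(-\frac{\chi_i - \Delta\chi_i}{k_B T}\right),
\]

where \(n_i\) and \(U_i\) are the number density and partition function of ionization stage \(i\), \(n_e\) is the electron density, and \(\chi_i\) the ionization potential of stage \(i\).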
Abstract:
Convection in the tropics is observed to involve a wide-ranging hierarchy of scales, from a few kilometers to planetary scales, and also has a profound impact on short-term climate. The mechanisms responsible for this behavior present a major unsolved problem. A promising emerging approach to these issues is cloud-resolving modeling. Here a family of numerical models is introduced specifically to model the feedback of small-scale deep convection on tropical planetary waves and tropical circulation in a highly efficient manner compatible with the cloud-resolving approach. Such a procedure is also useful for theoretical purposes. The basic idea in the approach is to use a low-order truncation in the meridional direction through Gauss–Hermite quadrature, projected onto a simple discrete radiation condition. In this fashion, the cloud-resolving modeling of equatorially trapped planetary waves reduces to the solution of a small number of purely zonal two-dimensional wave systems along a few judiciously chosen meridional layers, coupled only by some additional source terms. The approach is analyzed in detail with full mathematical rigor for the linearized equatorial primitive equations with source terms.
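As background for the meridional truncation mentioned above, equatorially trapped waves are conventionally expanded in Hermite functions (standard notation, assumed rather than quoted from the paper):

\[
q(x, y, t) = \sum_{m=0}^{M} q_m(x, t)\, \phi_m(y),
\qquad
\phi_m(y) = \frac{H_m(y)\, e^{-y^2/2}}{\sqrt{2^m\, m!\, \sqrt{\pi}}},
\]

so that a low-order truncation in \(m\), evaluated by Gauss–Hermite quadrature at a few meridional locations, reduces the problem to coupled zonal two-dimensional systems.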
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) in which the assessment of risk to groundwater resources incorporates both quantification of the probability of occurrence of contaminant source terms and assessment of the resultant impacts. The approach emphasizes the need for a greater dependency on the potential pollution sources, rather than the traditional approach where assessment is based mainly on intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation, whereby random pollution events are generated to the same distribution as historically occurring events or to an a priori probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as the risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch-processing file. GIS software was employed to produce the input files and to present the results. The functionality of the method, as well as its sensitivity to the model grid sizes, contaminant loading rates, length of stress periods, and historical frequencies of occurrence of pollution events, was evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as input to the transport model (see the sketch below). The results of applying the method are presented in the form of tables, graphs and spatial maps. Varying the model grid sizes indicates no significant effect on the simulated groundwater head, and the simulated frequency of daily occurrence of pollution incidents is likewise independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid sizes. The contaminant plume also migrates faster with coarse grids than with finer grids. The number of daily contaminant source terms generated, and consequently the total contaminant mass within the aquifer, increases in nonlinear proportion to the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and presented quantitatively as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset of the method, and a large benefit over contemporary risk and vulnerability methods.
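A minimal Python sketch of the cell-level Monte Carlo rule described above (parameter values are hypothetical, and the coupling to MODFLOW-2000/MT3DMS is omitted):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical grid and event parameters; the thesis feeds the generated
# source terms to the transport model, which is not reproduced here.
n_rows, n_cols, n_days = 50, 50, 365
p_event = 0.001          # daily probability of a pollution event per active cell
load_kg_per_day = 5.0    # contaminant mass released when an event occurs

active = np.ones((n_rows, n_cols), dtype=bool)  # all cells active in this toy grid

def daily_source_terms(p_event, load):
    """Return a source-term array for one stress period: a cell releases
    mass when a uniform random draw falls below p_event."""
    draws = rng.random((n_rows, n_cols))
    return np.where(active & (draws < p_event), load, 0.0)

# Accumulate realisations; exceedance counts over repeated runs of the
# coupled transport model would give the risk map.
total_mass = np.zeros((n_rows, n_cols))
for _ in range(n_days):
    total_mass += daily_source_terms(p_event, load_kg_per_day)

print("cells with at least one event:", int((total_mass > 0).sum()))
print("total mass released [kg]:", total_mass.sum())
```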
Abstract:
This work presents a two-dimensional risk assessment method based on quantification of the probability of occurrence of contaminant source terms, as well as assessment of the resultant impacts. The risk is calculated using Monte Carlo simulation, whereby synthetic contaminant source terms are generated to the same distribution as historically occurring pollution events or to an a priori probability distribution. The spatial and temporal distributions of the generated contaminant concentrations at pre-defined monitoring points within the aquifer are then simulated from repeated realisations using integrated mathematical models. The number of times user-defined ranges of concentration magnitudes are exceeded is quantified as the risk. The utility of the method was demonstrated using hypothetical scenarios, and the risk of pollution from a number of sources all occurring by chance together was evaluated. The results are presented in the form of charts and spatial maps. The generated risk maps show the risk of pollution at each observation borehole, as well as the trends within the study area. This capability to generate synthetic pollution events from numerous potential sources based on the historical frequency of their occurrence proved to be a great asset of the method, and a large benefit over contemporary methods.
Abstract:
Activated sludge basins (ASBs) are a key step in wastewater treatment processes, used to eliminate biodegradable pollution from water discharged to the natural environment. Bacteria found in the activated sludge consume and assimilate nutrients such as carbon, nitrogen and phosphorus under specific environmental conditions. However, applying the agitation and aeration regimes that supply the environmental conditions promoting bacterial growth is not easy, and these regimes strongly influence the efficacy of wastewater treatment processes. The major aims of agitation by submersible mixers are to improve the contact between biomass and wastewater and to prevent biomass settling. The mixers induce a horizontal flow in the oxidation ditch, which can be quantified by the mean horizontal velocity; mean values of 0.3-0.35 m s^-1 are recommended as a design criterion to ensure the best conditions for mixing and aeration (Da Silva, 1994). To achieve circulation velocities of this order of magnitude, the positioning and types of mixers are chosen from the plant constructors' experience and the suppliers' data for the impellers. Case studies of existing plants have shown that measured velocities were not in the range specified in the plant design. This illustrates that there is still a need for a design and diagnosis approach to improve process reliability by eliminating or reducing the number of short circuits, dead zones, and zones of inefficient mixing and poor aeration. The objective of the aeration is to facilitate the quick degradation of pollutants by bacterial growth. To achieve this, a wastewater treatment plant must be adequately aerated, with the result that 60-80% of total energy consumption is dedicated to aeration alone (Juspin and Vasel, 2000). An earlier study (Gillot et al., 1997) illustrated the influence of hydrodynamics on aeration performance, as measured by the oxygen transfer coefficient. Optimising the agitation and aeration systems can therefore enhance the oxygen transfer coefficient and consequently reduce the operating costs of the plant. It is critically important to estimate the mass transfer coefficient correctly, as any errors could make the simulations of biological activity physically unrepresentative. The transfer process was therefore rigorously examined in several different types of process equipment to determine the impact that different hydrodynamic regimes and liquid-side film transfer coefficients have on the gas phase and on the mass transfer of oxygen. To model the biological activity occurring in ASBs, several generic biochemical reaction models, known as Activated Sludge Models (ASM), have been developed to characterise different biochemical reaction processes (Henze et al., 2000). The ASM1 protocol was selected to characterise the impact of aeration on the bacteria consuming and assimilating ammonia and nitrate in the wastewater. One drawback of the ASM protocols, however, is that the hydrodynamics are assumed to be uniform through the use of perfectly mixed or plug-flow reactors, or a number of perfectly mixed reactors in series. This makes it very difficult to identify the influence of mixing and aeration on oxygen mass transfer and biological activity.
Therefore, to account for the impact of the local gas-liquid mixing regime on the biochemical activity, Computational Fluid Dynamics (CFD) was used, applying the individual ASM1 reaction equations as the source terms of a number of scalar equations (a minimal sketch of this source-term idea is given after the references below). The application of ASM1 within CFD (FLUENT) enabled investigation of the oxygen transfer efficiency and the biological removal of carbon and nitrogen in pilot-scale (7.5 cubic metres) and plant-scale (6000 cubic metres) ASBs. Both studies were used to validate the effect of the hydrodynamic regime on oxygen mass transfer (the circulation velocity and mass transfer coefficient) and the effect this has on the biological activity on pollutants such as ammonia and nitrate (Cartland Glover et al., 2005). The work presented here is one part of an overall approach to improving the understanding of ASBs and their impact, in terms of hydraulic and biological performance, on the overall wastewater treatment process. References: CARTLAND GLOVER G., PRINTEMPS C., ESSEMIANI K., MEINHOLD J. (2005). Modelling of wastewater treatment plants: How far shall we go with sophisticated modelling tools? 3rd IWA Leading-Edge Conference & Exhibition on Water and Wastewater Treatment Technologies, 6-8 June 2005, Sapporo, Japan. DA SILVA G. (1994). Eléments d'optimisation du transfert d'oxygène par fines bulles et agitateur séparé en chenal d'oxydation. PhD Thesis, CEMAGREF, Antony, France. GILLOT S., DERONZIER G., HEDUIT A. (1997). Oxygen transfer under process conditions in an oxidation ditch equipped with fine bubble diffusers and slow speed mixers. WEFTEC, Chicago, USA. HENZE M., GUJER W., MINO T., van LOOSDRECHT M. (2000). Activated Sludge Models ASM1, ASM2, ASM2D and ASM3, Scientific and Technical Report No. 9. IWA Publishing, London, UK. JUSPIN H., VASEL J.-L. (2000). Influence of hydrodynamics on oxygen transfer in the activated sludge process. IWA, Paris, France.
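A minimal Python sketch of applying a biochemical reaction rate as the source term of a transported scalar, here dissolved oxygen in a single well-mixed cell (rate constants are illustrative, not calibrated ASM1 values, and the CFD transport part is omitted):

```python
import numpy as np

# Illustrative parameters; in the CFD coupling described above this rate
# would be evaluated per cell from the local scalar fields.
kla = 4.0                  # oxygen transfer coefficient [1/h]
s_o_sat = 9.0              # oxygen saturation concentration [g O2/m^3]
mu_max, k_o = 0.25, 0.2    # hypothetical growth rate [1/h], half-saturation [g/m^3]
x_bh = 1500.0              # heterotrophic biomass [g COD/m^3]
y_h = 0.67                 # heterotrophic yield [-]

def source_term(s_o):
    """Aeration input minus oxygen uptake (Monod switching function on S_O)."""
    transfer = kla * (s_o_sat - s_o)
    uptake = ((1.0 - y_h) / y_h) * mu_max * (s_o / (k_o + s_o)) * x_bh
    return transfer - uptake

dt, t_end = 1e-3, 24.0     # explicit Euler integration in time [h]
s_o = 2.0                  # initial dissolved oxygen [g O2/m^3]
for _ in range(int(t_end / dt)):
    s_o += dt * source_term(s_o)

print(f"quasi-steady dissolved oxygen: {s_o:.2f} g O2/m^3")
```

With these (hypothetical) values the uptake outpaces the aeration and the cell settles near oxygen limitation, which is the kind of local effect the CFD coupling is meant to expose.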
Abstract:
Secondary pyrolysis in fluidized-bed fast pyrolysis of biomass is the focus of this work. A novel computational fluid dynamics (CFD) model coupled with a comprehensive chemistry scheme (134 species and 4169 reactions, in CHEMKIN format) has been developed to investigate this complex phenomenon. Previous results from a transient three-dimensional model of primary pyrolysis were used as the source terms of primary products in this model. A parametric study of reaction atmospheres (H2O, N2, H2, CO2, CO) was performed. For the N2 and H2O atmospheres, results of the model compared favorably with experimentally obtained yields after the temperature was adjusted to a value higher than that used in the experiments. Notable deviations from experiments are the pyrolytic water yield and the yield of higher hydrocarbons. The model suggests that the impact of the reaction atmosphere is not overly strong; however, both chemical and physical effects were observed, most notably on the yields of various compounds, the temperature profile throughout the reactor system, the residence time, the radical concentrations, and the turbulent intensity. At the investigated temperature (873 K), turbulent intensity appeared to have the strongest influence on liquid yield. With the aid of acceleration techniques, most importantly dimension reduction, chemistry agglomeration, and in-situ tabulation, a converged solution could be obtained within a reasonable time (∼30 h). A new, potentially useful method has thus been suggested for the numerical analysis of fast pyrolysis.
Abstract:
An Euler-Lagrange particle tracking model, developed for simulating fire atmosphere/sprinkler spray interactions, is described. Full details of the model are presented, along with the approximations made and the restrictions that apply. Errors commonly found in previous formulations of the source terms used in this two-phase approach are described and corrected. To demonstrate the capabilities of the model, it is applied to the simulation of a fire in a long corridor containing a sprinkler. The simulation is three-dimensional and transient, and considers mass, momentum and energy transfer between the gaseous atmosphere and the injected liquid droplets.
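A minimal Python sketch of the particle-source-in-cell idea common to such Euler-Lagrange spray models: the momentum lost by a droplet parcel over a time step is accumulated, with opposite sign, as a source term in its host gas cell. The drag law and all values are illustrative, not the corrected formulation of the paper.

```python
import numpy as np

rho_g, mu_g = 1.2, 1.8e-5           # gas density [kg/m^3] and viscosity [Pa s]
dt = 1e-4                            # time step [s]

def drag_coefficient(re):
    """Schiller-Naumann correlation, a common choice for spherical droplets."""
    return 24.0 / re * (1.0 + 0.15 * re**0.687) if re < 1000.0 else 0.44

def step_parcel(u_d, u_g, d_d, rho_d, n_drops):
    """Advance one parcel's velocity; return the momentum source to the gas."""
    re = max(rho_g * np.linalg.norm(u_g - u_d) * d_d / mu_g, 1e-12)
    tau = rho_d * d_d**2 / (18.0 * mu_g)          # droplet response time [s]
    f = drag_coefficient(re) * re / 24.0          # drag factor (Stokes: f = 1)
    u_d_new = u_d + dt * f / tau * (u_g - u_d)    # explicit drag update
    m_d = rho_d * np.pi * d_d**3 / 6.0            # mass of a single droplet [kg]
    # Equal and opposite momentum goes to the host gas cell [kg m/s]:
    return u_d_new, -n_drops * m_d * (u_d_new - u_d)

u_d, src = step_parcel(np.array([0.0, -5.0]), np.array([1.0, 0.0]),
                       d_d=500e-6, rho_d=1000.0, n_drops=1e4)
print("parcel velocity:", u_d, "momentum source to gas:", src)
```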
Abstract:
Phase change problems arise in many practical applications such as air-conditioning and refrigeration, thermal energy storage systems and thermal management of electronic devices. The physical phenomena in such applications are complex and often difficult to study in detail using experimental techniques alone. Efforts to improve computational techniques for analyzing two-phase flow problems with phase change are therefore gaining momentum. The development of numerical methods for multiphase flow has been motivated generally by the need to account more accurately for (a) large topological changes such as phase breakup and merging, (b) sharp representation of the interface and its discontinuous properties, and (c) accurate and mass-conserving motion of the interface. In addition to these considerations, numerical simulation of multiphase flow with phase change introduces additional challenges related to discontinuities in the velocity and temperature fields. Moreover, the velocity field is no longer divergence free. For phase change problems, the focus of developmental efforts has thus been on numerically attaining proper conservation of energy across the interface, in addition to the accurate treatment of the fluxes of mass and momentum conservation and the associated interface advection. Among the initial efforts related to the simulation of bubble growth in film boiling applications, the work in \cite{Welch1995} was based on an interface tracking method using a moving unstructured mesh; that study considered moderate interfacial deformations. A similar problem was subsequently studied using moving, boundary-fitted grids \cite{Son1997}, again for regimes of relatively small topological changes. A hybrid interface tracking method with a moving interface grid overlapping a static Eulerian grid was developed \cite{Juric1998} for the computation of a range of phase change problems, including three-dimensional film boiling \cite{esmaeeli2004computations}, multimode two-dimensional pool boiling \cite{Esmaeeli2004} and film boiling on horizontal cylinders \cite{Esmaeeli2004a}. The handling of interface merging and pinch-off, however, remains a challenge with methods that explicitly track the interface. As large topological changes are crucial for phase change problems, attention has turned in recent years to front capturing methods utilizing implicit interfaces that are more effective in treating complex interface deformations. The VOF (Volume of Fluid) method was adopted in \cite{Welch2000} to simulate the one-dimensional Stefan problem and the two-dimensional film boiling problem. The approach employed a specific model for mass transfer across the interface involving a mass source term within cells containing the interface. This VOF-based approach was further coupled with the level set method in \cite{Son1998}, employing a smeared-out Heaviside function to avoid the numerical instability related to the source term. The coupled level set, volume of fluid method and the diffused interface approach were used for film boiling with water and R134a at near-critical pressure conditions \cite{Tomar2005}; the effects of superheat and saturation pressure on the frequency of bubble formation were analyzed with this approach. The work in \cite{Gibou2007} used the ghost fluid and level set methods for phase change simulations. A similar approach was adopted in \cite{Son2008} to study various boiling problems, including three-dimensional film boiling on a horizontal cylinder, nucleate boiling in a microcavity \cite{lee2010numerical} and flow boiling in a finned microchannel \cite{lee2012direct}. The work in \cite{tanguy2007level} also used the ghost fluid method and proposed an improved algorithm based on enforcing continuity and a divergence-free condition for the extended velocity field. The work in \cite{sato2013sharp} employed a multiphase model based on volume fraction with an interface sharpening scheme and derived a phase change model based on local interface area and mass flux. Among the front capturing methods, sharp interface methods have been found to be particularly effective both for implementing sharp jumps and for resolving the interfacial velocity field. However, sharp velocity jumps render the solution susceptible to erroneous oscillations in pressure and also lead to spurious interface velocities. To implement phase change, the work in \cite{Hardt2008} employed point mass source terms derived from a physical basis for the evaporating mass flux. To avoid numerical instability, the authors smeared the mass source by solving a pseudo time-step diffusion equation. This measure, however, led to mass conservation issues due to non-symmetric integration over the distributed mass source region. The problem of spurious pressure oscillations related to point mass sources was also investigated by \cite{Schlottke2008}. Although their method is based on the VOF, the large pressure peaks associated with a sharp mass source were observed to be similar to those of the interface tracking method. Such spurious fluctuations in pressure are particularly undesirable because their effect is transmitted globally in incompressible flow. Hence, the pressure field arising from phase change needs to be implemented with greater accuracy than is reported in the current literature. The accuracy of interface advection in the presence of interfacial mass flux (mass flux conservation) has been discussed in \cite{tanguy2007level,tanguy2014benchmarks}. The authors found that the method of extending one phase velocity to the entire domain suggested by Nguyen et al. in \cite{nguyen2001boundary} suffers from a lack of mass flux conservation when the density difference is high. To improve the solution, the authors impose a divergence-free condition on the extended velocity field by solving a constant-coefficient Poisson equation. The approach has shown good results for an enclosed bubble or droplet, but is not general for more complex flows and requires the additional solution of a linear system of equations. In the current thesis, an improved approach that addresses both the numerical oscillation of pressure and the spurious interface velocity field is presented, featuring (i) continuous velocity and density fields within a thin interfacial region and (ii) temporal velocity correction steps to avoid an unphysical pressure source term. I also propose a general (iii) mass flux projection correction for improved mass flux conservation. The pressure and temperature gradient jump conditions are treated sharply. A series of one-dimensional and two-dimensional problems are solved to verify the performance of the new algorithm. Two-dimensional and cylindrical film boiling problems are also demonstrated and show good qualitative agreement with experimental observations and heat transfer correlations. Finally, a study of Taylor bubble flow with heat transfer and phase change in a small vertical tube in axisymmetric coordinates is carried out using the new multiphase phase-change method.
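For reference, the interfacial jump conditions that such phase-change methods discretize, in their standard sharp-interface form (notation assumed, not quoted from the thesis):

\[
\dot m = \rho_l\,(\mathbf{u}_l - \mathbf{u}_\Gamma)\cdot\mathbf{n} = \rho_v\,(\mathbf{u}_v - \mathbf{u}_\Gamma)\cdot\mathbf{n},
\qquad
(\mathbf{u}_v - \mathbf{u}_l)\cdot\mathbf{n} = \dot m\left(\frac{1}{\rho_v} - \frac{1}{\rho_l}\right),
\]
\[
\dot m\, h_{lv} = \left(k_l \nabla T_l - k_v \nabla T_v\right)\cdot\mathbf{n},
\]

where \(\dot m\) is the interfacial mass flux, \(\mathbf{u}_\Gamma\) the interface velocity and \(h_{lv}\) the latent heat; the velocity jump in the second relation is what makes the velocity field non-divergence-free and drives the spurious pressure oscillations discussed above.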
Abstract:
Recent developments in the physical parameterizations available in spectral wave models have already been validated, but there is little information on their relative performance, especially with regard to the higher-order spectral moments and wave partitions. This study concentrates on documenting their strengths and limitations using satellite measurements, buoy spectra, and comparisons between the different models. It is confirmed that all models perform well in terms of significant wave height; however, the higher-order moments have larger errors. The partitioned wave quantities perform well in terms of direction and frequency, but the magnitude and directional spread typically show larger discrepancies. The high-frequency tail is examined through the mean square slope using satellites and buoys. From this analysis it is clear that some models behave better than others, suggesting that their parameterizations match the physical processes reasonably well. However, none of the models is entirely satisfactory, pointing to poorly constrained parameterizations or missing physical processes. The major space-time differences between the models are related to the swell field, stressing the importance of describing its evolution. An example swell field confirms that the wave heights can be notably different between model configurations while the directional distributions remain similar. It is clear that all models have difficulty in describing the directional spread. Knowledge of the directional distribution of the source terms is therefore paramount for improving wave model physics in the future.
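The source terms referred to above are those of the spectral action balance that such wave models integrate, quoted here in its standard form (notation assumed):

\[
\frac{\partial N}{\partial t} + \nabla_{\!x}\cdot(\dot{\mathbf{x}}\, N) + \nabla_{\!k}\cdot(\dot{\mathbf{k}}\, N) = \frac{S_{\mathrm{in}} + S_{\mathrm{nl}} + S_{\mathrm{ds}}}{\sigma},
\]

where \(N\) is the wave action density and \(S_{\mathrm{in}}\), \(S_{\mathrm{nl}}\), \(S_{\mathrm{ds}}\) are the wind-input, nonlinear-interaction and dissipation source terms whose parameterizations are compared in this study.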
Abstract:
Strategies aimed at improving spinal cord regeneration after trauma are still challenging neurologists and neuroscientists throughout the world. Many cell-based therapies have been tested, with limited success in terms of functional outcome. In this study, we investigated the effects of human dental pulp cells (HDPCs) in a mouse model of compressive spinal cord injury (SCI). These cells present some advantages, such as the ease of the extraction process and the expression of trophic factors and embryonic markers from both ecto-mesenchymal and mesenchymal components. Young adult female C57/BL6 mice were subjected to laminectomy at T9 and compression of the spinal cord with a vascular clip for 1 min. The cells were transplanted 7 days or 28 days after the lesion, in order to compare the recovery when treatment is applied in the subacute or chronic phase. We performed quantitative analyses of white-matter preservation, trophic-factor expression and quantification, and ultrastructural and functional analyses. The HDPC-transplanted animals showed better white-matter preservation than the DMEM groups, higher levels of trophic-factor expression in the tissue, better tissue organization, and the presence of many axons being myelinated by either Schwann cells or oligodendrocytes, in addition to some healthy-appearing intact neurons with synapse contacts on their cell bodies. We also demonstrated that HDPCs were able to express glial markers such as GFAP and S-100. The functional analysis also showed locomotor improvement in these animals. Based on these findings, we propose that HDPCs may be feasible candidates for therapeutic intervention after SCI and central nervous system disorders in humans.
Abstract:
The inhibitory effect of sucrose on the kinetics of thrombin-catalyzed hydrolysis of the chromogenic substrate S-2238 (D-phenylalanyl-pipecolyl-arginoyl-p-nitroanilide) is re-examined as a possible consequence of thermodynamic non-ideality, an inhibition originally attributed to the increased viscosity of reaction mixtures. However, those published results may also be rationalized in terms of the suppression of a substrate-induced isomerization of thrombin to a slightly more expanded (or more asymmetric) transition state prior to the irreversible kinetic steps that lead to substrate hydrolysis. This reinterpretation of the kinetic results solely in terms of molecular crowding does not signify the lack of an effect of viscosity on any reaction step(s) subject to diffusion control. Instead, it highlights the need for the development of analytical procedures that can accommodate the concomitant operation of thermodynamic non-ideality and viscosity effects.
Abstract:
The development of Dual Source Computed Tomography (Definition, Siemens Medical Solutions, Erlangen, Germany) allowed advances in temporal resolution through the addition of a second X-ray source and detector array to the 64-slice CT system. The ability to run examinations in dual-energy mode allows greater differentiation of tissues, revealing differences between close attenuation coefficients. In terms of renal applications, the distinction of kidney stones and masses becomes one of the main advantages of dual-energy technology. This article aims to demonstrate the operating principles of this equipment, as well as its main renal applications.