932 results for Fluid filtration model
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The interaction of a comet with the solar wind undergoes various stages as the comet’s activity varies along its orbit. For a comet like 67P/Churyumov–Gerasimenko, the target comet of ESA’s Rosetta mission, the various features include the formation of a Mach cone, the bow shock, and, close to perihelion, even a diamagnetic cavity. There are different approaches to simulating this complex interplay between the solar wind and the comet’s extended neutral gas coma, including magnetohydrodynamic (MHD) and hybrid-type models. The former treats the plasma as one or more fluids (a single fluid in basic MHD), while the latter treats the ions as individual particles under the influence of the local electric and magnetic fields. The electrons are treated as a charge-neutralizing fluid in both cases. Given these different approaches, the two models yield different results, in particular for a low production rate comet. In this paper we show that these differences can be reduced by using a multifluid instead of a single-fluid MHD model and by increasing the resolution of the hybrid model. We show that some major features obtained with a hybrid-type approach, such as the gyration of the cometary heavy ions and the formation of the Mach cone, can be partially reproduced with the multifluid-type model.
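For reference, the scale that controls the heavy-ion gyration discussed above is the ion gyroradius; the expression below is standard plasma physics, not taken from the paper:

\[ r_g = \frac{m_i v_\perp}{q_i B}, \]

where m_i and q_i are the mass and charge of a cometary heavy ion (e.g. a water-group ion), v_\perp its velocity component perpendicular to the magnetic field, and B the local field strength. For a low-activity comet, r_g can exceed the size of the interaction region, which is why kinetic (hybrid) effects become important there.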
Abstract:
A numerical model of sulfate reduction and isotopic fractionation has been applied to pore fluid SO4^2- and δ34S data from four sites drilled during Ocean Drilling Program (ODP) Leg 168 in the Cascadia Basin at 48°N, where basement temperatures reach up to 62°C. There is a source of sulfate both at the top and at the bottom of the sediment column due to the presence of basement fluid flow, which promotes bacterial sulfate reduction below the sulfate minimum zone at elevated temperatures. Pore fluid δ34S data show the highest values (135 per mil) yet found in the marine environment. The bacterial sulfur isotope fractionation factor, α, is severely underestimated if the pore fluids of anoxic marine sediments are assumed to be closed systems, and Rayleigh fractionation plots yield values of α in error by as much as 15 per mil in diffusive and advective pore fluid regimes. Model results are consistent with α = 1.077 ± 0.007, with no temperature effect over the range 1.8 to 62°C and no effect of sulfate reduction rate over the range 2 to 10 pmol/ccm/day. The reason for this large isotopic fractionation is unknown, but one difference with previous studies is the very low sulfate reduction rates recorded, about two orders of magnitude lower than literature values that are in the range of µmol/ccm/day to tens of nmol/ccm/day. In general, the greatest 34S depletions are associated with the lowest sulfate reduction rates and vice versa, and it is possible that such extreme fractionation is a characteristic of open systems with low sulfate reduction rates.
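As a reference point for the closed-system assumption criticized above, the standard Rayleigh fractionation relation (textbook form, not the paper's full reaction-transport model) is:

\[ \frac{R}{R_0} = f^{\,1/\alpha - 1}, \qquad \delta^{34}\mathrm{S}_{\mathrm{SO_4}} \approx \delta^{34}\mathrm{S}_{0} + 10^{3}\left(\frac{1}{\alpha} - 1\right)\ln f, \]

where R is the 34S/32S ratio of the residual pore-fluid sulfate, f the fraction of sulfate remaining, and α (> 1 for sulfate reduction) the sulfate-sulfide fractionation factor. Fitting these closed-system curves to data from diffusive or advective pore-fluid regimes is what produces the biased α values discussed in the abstract.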
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy depend strongly on the fluid flow model equation and on the spatial dimension of the problem considered. While simple models based on perfect flows, such as panel methods or potential flow models, can be solved very quickly, they usually suffer from poor accuracy when simulating real flows (transonic, viscous). On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be found for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases a simple guess of the regions of the computational domain that most affect the accuracy is impossible. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with their structured counterpart. However, adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical-error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation; it consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as of numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as to a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
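A minimal sketch of the truncation-error estimation idea described above, i.e. inserting a higher-fidelity (here, exact manufactured) solution into the coarse discrete operator. This is an illustration of the general concept only, assuming a 1D first-order upwind finite-volume scheme; it is not the unstructured Euler/RANS formulation developed in the thesis, and the function names are illustrative.

```python
# Sketch: estimate the local truncation error tau of a coarse discretisation by
# applying the coarse discrete operator to a known higher-fidelity solution.
import numpy as np

def coarse_operator(u, dx, s):
    """First-order upwind FV residual R_i = (u_i - u_{i-1})/dx - s_i for u_x = s."""
    r = np.empty_like(u)
    r[0] = 0.0                                # inflow boundary assumed exact
    r[1:] = (u[1:] - u[:-1]) / dx - s[1:]
    return r

def estimate_truncation_error(u_ref, dx, s):
    """tau ~ L_H(u_ref): residual left by the reference solution in the coarse scheme."""
    return coarse_operator(u_ref, dx, s)

# usage: manufactured solution u(x) = sin(x), so the source term is s(x) = cos(x)
N = 50
x = np.linspace(0.0, np.pi, N)
dx = x[1] - x[0]
tau = estimate_truncation_error(np.sin(x), dx, np.cos(x))
print("max |tau| (expected O(dx)):", np.abs(tau).max(), "dx =", dx)
```

Cells with large |tau| are the natural targets of a truncation-error-driven adaptation procedure of the kind compared against feature-based and adjoint-based sensors in Chap. 3.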
Abstract:
This paper aims to present and validate a numerical technique for the simulation of the overtopping and onset of failure in rockfill dams due to mass sliding. This goal is achieved by coupling a fluid dynamic model for the simulation of the free surface and through-flow problems, with a numerical technique for the calculation of the rockfill response and deformation. Both the flow within the dam body and in its surroundings are taken into account. An extensive validation of the resulting computational method is performed by solving several failure problems on physical models of rockfill dams for which experimental results have been obtained by the authors.
Abstract:
We investigate the dynamics of localized solutions of the relativistic cold-fluid plasma model in the small but finite amplitude limit, for slightly overcritical plasma density. Adopting a multiple scale analysis, we derive a perturbed nonlinear Schrödinger equation that describes the evolution of the envelope of the circularly polarized electromagnetic field. Retaining terms up to fifth order in the small perturbation parameter, we derive a self-consistent framework for the description of the plasma response in the presence of a localized electromagnetic field. The formalism is applied to standing electromagnetic soliton interactions and the results are validated by simulations of the full cold-fluid model. To lowest order, a cubic nonlinear Schrödinger equation with a focusing nonlinearity is recovered. Classical quasiparticle theory is used to obtain analytical estimates for the collision time and minimum distance of approach between solitons. For larger soliton amplitudes the inclusion of the fifth-order terms is essential for a qualitatively correct description of soliton interactions. The defocusing quintic nonlinearity leads to inelastic soliton collisions, while bound states of solitons do not persist under perturbations in the initial phase or amplitude.
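Schematically, the envelope equation described above has the cubic-quintic nonlinear Schrödinger form; the scalings and coefficients below are illustrative only, and the paper's perturbed equation contains additional fifth-order and plasma-response terms:

\[ i\,\partial_\tau a + \tfrac{1}{2}\,\partial_\xi^2 a + |a|^2 a - c_5\,|a|^4 a = 0, \qquad c_5 > 0, \]

where a is the slowly varying envelope of the circularly polarized field. The cubic term is focusing and the quintic correction is defocusing, which is consistent with the inelastic soliton collisions reported at larger amplitudes.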
Abstract:
CFD simulations of the 75 mm hydrocyclone of Hsieh (1988) have been conducted using Fluent™. The simulations used three-dimensional body-fitted grids and were two-phase simulations in which the air core was resolved using the mixture (Manninen et al., 1996) and VOF (Hirt and Nichols, 1981) models. Velocity predictions from large eddy simulations (LES), using the Smagorinsky-Lilly sub-grid-scale model (Smagorinsky, 1963; Lilly, 1966), and from RANS simulations using the differential Reynolds stress turbulence model (DRSM; Launder et al., 1975) were compared with Hsieh's experimental velocity data. The LES simulations gave very good agreement with Hsieh's data but required very fine grids to predict the velocities correctly at the bottom of the apex. The DRSM/RANS simulations under-predicted tangential velocities, and there was little difference between the velocity predictions using the linear (Launder, 1989) and quadratic (Speziale et al., 1991) pressure-strain models. Velocity predictions using the DRSM turbulence model and the linear pressure-strain model could be improved by adjusting the pressure-strain model constants.
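For reference, the Smagorinsky-Lilly sub-grid-scale model used in the LES runs closes the filtered equations with an eddy viscosity of the standard textbook form (not specific to this study):

\[ \nu_{\mathrm{sgs}} = (C_s \Delta)^2\,|\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \]

where \Delta is the filter width (commonly the cube root of the cell volume), \bar{S}_{ij} the resolved strain-rate tensor and C_s the Smagorinsky constant. Because \Delta is tied directly to the local grid size, resolving the strongly swirling flow near the apex required the very fine meshes noted above.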
Abstract:
Reducing fossil fuel consumption and developing energy-saving technologies are issues of central importance for both industry and research, owing to the drastic effects that anthropogenic pollutant emissions are having on the environment. While a growing number of standards and regulations are being issued to address these problems, the need to develop low-emission technologies is driving research in numerous industrial sectors. Although the deployment of renewable energy sources is seen as the most promising long-term solution, an effective and complete integration of these technologies is currently impractical, both because of technical constraints and because of the sheer share of energy production, currently met by fossil sources, that alternative technologies would have to cover. The optimization of energy production and management, on the other hand, combined with the development of technologies for reducing energy consumption, represents an adequate solution to the problem, and one that can be implemented over shorter time horizons. The objective of this thesis is to investigate, develop and apply a set of numerical tools for optimizing the design and management of energy processes, to be used to reduce fuel consumption and improve energy efficiency. The methodology developed rests on a model-based numerical approach that exploits the predictive capabilities derived from a mathematical representation of the processes in order to devise optimization strategies for them under realistic operating conditions. In developing these procedures, particular emphasis is placed on the need to derive correct management strategies that account for the dynamics of the plants analysed, so that the best performance can be obtained during actual operation. In the course of the thesis, the energy-optimization problem has been addressed with reference to three different technological applications. In the first, a multi-source plant meeting the energy demand of a commercial building was considered. Since this system uses several technologies to produce the thermal and electrical energy required by the users, the correct load-sharing strategy must be identified in order to guarantee the maximum energy efficiency of the plant. Starting from a simplified model of the plant, the problem was solved by applying a deterministic Dynamic Programming algorithm, and the results were compared with those obtained from a simpler rule-based strategy, thereby demonstrating the advantages of adopting an optimal control strategy. In the second application, the design of a hybrid solution for energy recovery from a hydraulic excavator was investigated. Since several technological layouts can be conceived to implement this solution, and the additional components must be correctly sized, a methodology is needed to evaluate the maximum performance obtainable from each of these alternative solutions.

The comparison between the different layouts was therefore carried out on the basis of the energy performance of the machine during a standardized digging cycle, estimated with the aid of a detailed model of the system. Since the addition of energy-recovery devices introduces additional degrees of freedom into the system, their optimal control strategy also had to be determined in order to assess the maximum performance obtainable from each layout. This problem was again solved with a Dynamic Programming algorithm exploiting a simplified model of the system devised for the purpose. Once the optimal performance of each design solution had been determined, a fair comparison between the alternatives was possible. In the third and final application, an organic Rankine cycle (ORC) plant for recovering waste heat from passenger-car exhaust gases was analysed. Although ORC plants can potentially produce significant increases in vehicle fuel savings, their correct operation requires complex control strategies able to cope with the variability of the heat source driving the process; moreover, while fuel savings are maximized, the system must be kept within safe operating conditions. To address the problem, a robust and effective model of the plant was built, based on the Moving Boundary methodology, to simulate the phase-change dynamics of the organic fluid and estimate the plant performance. This model was then used to design a model predictive controller (MPC) capable of estimating the optimal control parameters for managing the system during transient operation. To solve the corresponding nonlinear dynamic optimization problem, an algorithm based on Particle Swarm Optimization was developed. The results obtained with this controller were compared with those obtainable from a classical proportional-integral (PI) controller, again showing the energy advantages of adopting an optimal control strategy.
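A minimal sketch of the deterministic Dynamic Programming approach mentioned for the first application (load sharing in a multi-source plant): discretize a storage state, step backwards in time and keep the cheapest cost-to-go. The component set, stage costs and storage model below are illustrative placeholders, not the thesis's plant model.

```python
# Backward Dynamic Programming over a discretized thermal-storage state.
import numpy as np

demand = [30.0, 50.0, 40.0]              # thermal demand per one-hour stage [kW]
levels = np.linspace(0.0, 20.0, 21)      # discretized storage content [kWh]
chp_steps = np.linspace(0.0, 40.0, 9)    # admissible CHP thermal outputs [kW]

def stage_cost(chp_heat, boiler_heat):
    # toy fuel-cost model: CHP heat assumed cheaper than backup-boiler heat
    return 0.06 * chp_heat + 0.10 * boiler_heat

J = np.zeros(len(levels))                # terminal cost-to-go
policy = []
for d in reversed(demand):
    J_new = np.full(len(levels), np.inf)
    best = np.zeros(len(levels))
    for i, s in enumerate(levels):
        for chp in chp_steps:
            boiler = max(d - chp - s, 0.0)                   # boiler covers the rest
            s_next = min(max(s + chp + boiler - d, 0.0), levels[-1])
            cost = stage_cost(chp, boiler) + np.interp(s_next, levels, J)
            if cost < J_new[i]:
                J_new[i], best[i] = cost, chp
    J, policy = J_new, [best] + policy

print("minimum total cost from empty storage:", J[0])
print("stage-1 CHP setpoint for empty storage:", policy[0][0])
```

A rule-based strategy (e.g. "CHP always at full load") can be simulated on the same toy model and compared against J[0], mirroring the comparison made in the thesis.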
Abstract:
B-ISDN is a universal network which supports diverse mixes of services, applications and traffic. ATM has been accepted world-wide as the transport technique for future use in B-ISDN. ATM, being a simple packet-oriented transfer technique, provides a flexible means of supporting a continuum of transport rates and is efficient due to the possible statistical sharing of network resources by multiple users. In order to fully exploit the potential statistical gain, while at the same time supporting diverse service and traffic mixes, an efficient traffic control must be designed. Traffic controls, which include congestion and flow control, are a fundamental necessity for the success and viability of future B-ISDN. Congestion and flow control are difficult in the broadband environment due to the high-speed links, the wide-area distances, diverse service requirements and diverse traffic characteristics. Most congestion and flow control approaches in conventional packet-switched networks are reactive in nature and are not applicable in the B-ISDN environment. In this research, traffic control procedures mainly based on preventive measures for a private ATM-based network are proposed and their performance evaluated. The various traffic controls include connection admission control (CAC), traffic flow enforcement, priority control and an explicit feedback mechanism. These functions operate at call level and cell level. They are carried out distributively by the end terminals, the network access points and the internal elements of the network. During the connection set-up phase, the CAC decides the acceptance or denial of a connection request and allocates bandwidth to the new connection according to three schemes: peak bit rate, statistical rate and average bit rate. The statistical multiplexing rate is based on a 'bufferless fluid flow model', which is simple and robust. The allocation of an average bit rate to data traffic, at the expense of delay, obviously improves the network bandwidth utilisation.
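A minimal sketch of a bufferless fluid-flow admission test of the kind referred to above, assuming N homogeneous on-off sources with a given peak rate and activity probability; the parameter names and the cell-loss target epsilon are illustrative, not taken from the thesis.

```python
# Bufferless fluid-flow CAC: accept a call set only if the probability that the
# aggregate instantaneous rate exceeds the link capacity stays below a target.
from math import comb

def overflow_probability(n_sources, peak_rate, activity, capacity):
    """P(aggregate instantaneous rate > capacity) for independent on-off sources."""
    k_max = int(capacity // peak_rate)           # most sources the link can carry at once
    prob_ok = sum(comb(n_sources, k)
                  * activity**k * (1 - activity)**(n_sources - k)
                  for k in range(k_max + 1))
    return 1.0 - prob_ok

def admit(n_sources, peak_rate, activity, capacity, epsilon=1e-6):
    """Accept the connection mix only if the overflow (cell-loss) bound is met."""
    return overflow_probability(n_sources, peak_rate, activity, capacity) <= epsilon

# usage: 150 Mbit/s link, 2 Mbit/s peak sources active 20% of the time
print(admit(n_sources=100, peak_rate=2.0, activity=0.2, capacity=150.0))
```

Peak-rate allocation corresponds to the conservative special case k_max = n_sources being required, while average-rate allocation ignores the overflow bound altogether, which is why it trades bandwidth utilisation against delay for data traffic.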
Abstract:
This thesis describes work carried out to improve the fundamental modelling of liquid flows on distillation trays. A mathematical model is presented based on the principles of computational fluid dynamics. It models the liquid flow in the horizontal directions, allowing for the effects of the vapour through an increased liquid turbulence, modelled by an eddy viscosity, and through a resistance to liquid flow caused by the vapour being accelerated horizontally by the liquid. The resultant equations are similar to the Navier-Stokes equations with the addition of a resistance term. A mass-transfer model is used to calculate liquid concentration profiles and tray efficiencies. A heat and mass transfer analogy is used to compare theoretical concentration profiles to experimental water-cooling data obtained from a 2.44 metre diameter air-water distillation simulation rig. The ratios of air to water flow rates are varied in order to simulate three pressures: vacuum, atmospheric pressure and moderate pressure. For simulated atmospheric and moderate pressure distillation, the fluid mechanical model consistently over-predicts tray efficiencies, with deviations of between +1.7% and +11.3%. This compares to -1.8% to -10.9% for the stagnant regions model (Porter et al. 1972) and +12.8% to +34.7% for the plug flow plus back-mixing model (Gerster et al. 1958). The model fails to predict the flow patterns and tray efficiencies for vacuum simulation due to the change in the mechanism of liquid transport, from a liquid continuous layer to a spray, as the liquid flow-rate is reduced. This spray is not taken into account in the development of the fluid mechanical model. A sensitivity analysis has shown that the fluid mechanical model is relatively insensitive to the prediction of the average height of clear liquid, and that a reduction in the resistance term results in a slight loss of tray efficiency, but these effects are not great. The model is quite sensitive to the prediction of the eddy viscosity term: variations can produce up to a 15% decrease in tray efficiency. The fluid mechanical model has been incorporated into a column model so that statistical optimisation techniques can be employed to fit a theoretical column concentration profile to experimental data. Through this work, mass-transfer data can be obtained.
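Schematically, the model described above amounts to horizontal momentum equations of Navier-Stokes form with an eddy viscosity and an added resistance term; the form below is an illustrative sketch only, not the exact formulation given in the thesis:

\[ \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu_e \nabla^2 \mathbf{u} - R\,\mathbf{u}, \]

where \mathbf{u} is the horizontal liquid velocity on the tray, \nu_e the eddy viscosity representing the vapour-induced liquid turbulence, and R a resistance coefficient representing the momentum removed from the liquid in accelerating the vapour horizontally.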
Abstract:
The possible evaporation of lubricant in fluid film bearings has been investigated theoretically and by experiment using a radial flow hydrostatic bearing supplied with liquid refrigerant R114. Good correlation between measured and theoretical values was obtained using a bespoke computational fluid dynamic model in which the flow was assumed to be laminar and adiabatic. The effects of viscous dissipation and vapour generation within the fluid film are fully accounted for by applying a fourth order Runge-Kutta routine to satisfy the radial and filmwise transverse constraints of momentum, energy and mass conservation. The results indicate that the radial velocity profile remains parabolic while the flow remains in the liquid phase and that the radial rate of enthalpy generation is then constant across the film at a given radius. The results also show that evaporation will commence at a radial location determined by geometry and flow conditions and in fluid layers adjacent to the solid boundaries. Evaporation is shown to progress in the radial direction and the load carrying capacity of such a bearing is reduced significantly. Expressions for the viscosity of the liquid/vapour mixture found in the literature survey have not been tested against experimental data. A new formulation is proposed in which the suitable choice of a characteristic constant yields close representation to any of these expressions. Operating constraints imposed by the design of the experimental apparatus limited the extent of the surface over which evaporation could be obtained, and prevented clear identification of the most suitable relationship for the viscosity of the liquid/vapour mixture. The theoretical model was extended to examine the development of two phase flow in a rotating shaft face seal of uniform thickness. Previous theoretical analyses have been based on the assumption that the radial velocity profile of the flow is always parabolic, and that the tangential component of velocity varies linearly from the value at the rotating surface, to zero at the stationary surface. The computational fluid dynamic analysis shows that viscous shear and dissipation in the fluid adjacent to the rotating surface leads to developing evaporation with a consequent reduction in tangential shear forces. The tangential velocity profile is predicted to decay rapidly through the film, exhibiting a profile entirely different to that assumed by previous investigators. Progressive evaporation takes place close to the moving wall and does not occur completely at a single radial location, as has been claimed in earlier work.
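For illustration, the marching scheme named above is the classical fourth-order Runge-Kutta method; the sketch below shows a generic RK4 step only, with a placeholder right-hand side rather than the bearing model's coupled momentum, energy and mass-conservation equations.

```python
# Generic classical RK4 step for dy/dr = f(r, y), marched outward in radius.
import numpy as np

def rk4_step(f, r, y, dr):
    """Advance the state y(r) to y(r + dr) with fourth-order Runge-Kutta."""
    k1 = f(r, y)
    k2 = f(r + 0.5 * dr, y + 0.5 * dr * k1)
    k3 = f(r + 0.5 * dr, y + 0.5 * dr * k2)
    k4 = f(r + dr, y + dr * k3)
    return y + (dr / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# usage: march a toy state outward in radius with a placeholder ODE dy/dr = -y/r
f = lambda r, y: -y / max(r, 1e-9)
y = np.array([1.0])
for r in np.arange(0.01, 0.05, 0.001):
    y = rk4_step(f, r, y, 0.001)
print(y)
```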
Abstract:
This paper presents the first part of a study of the combustion processes in an industrial radiant tube burner (RTB). The RTB is used typically in heat-treating furnaces. The work was initiated because of the need for improvements in burner lifetime and performance. The present paper is concerned with the flow of combustion air; a future paper will address the combusting flow. A detailed three-dimensional computational fluid dynamics model of the burner was developed and validated against experimental air flow velocity measurements obtained with a split-film probe. Satisfactory agreement was achieved using the k-ε turbulence model. Various features along the air inlet passage were subsequently analysed. The effectiveness of the air recuperator swirler was found to be significantly compromised by the need for a generous assembly tolerance. Also, a substantial circumferential flow maldistribution introduced by the swirler is effectively removed by the positioning of a constriction in the downstream passage.
Abstract:
This paper describes a study of the combustion process in an industrial radiant tube burner (RTB), used in heat treating furnaces, as part of an attempt to improve burner performance. A detailed three-dimensional Computational Fluid Dynamics model has been used, validated with experimental test furnace temperature and flue gas composition measurements. Simulations using the Eddy Dissipation combustion model with peak temperature limitation and the Discrete Transfer radiation model showed good agreement with temperature measurements in the inner and outer walls of the burner, as well as with flue gas composition measured at the exhaust (including NO). Other combustion and radiation models were also tested but gave inferior results in various aspects. The effects of certain RTB design features are analysed, and an analysis of the heat transfer processes within the burner is presented.
Abstract:
As complex radiotherapy techniques become more readily practiced, comprehensive 3D dosimetry is a growing necessity for advanced quality assurance. However, clinical implementation has been impeded by a wide variety of factors, including the expense of dedicated optical dosimeter readout tools, high operational costs, and the overall difficulty of use. To address these issues, a novel dry-tank optical CT scanner was designed for PRESAGE 3D dosimeter readout, relying on 3D printed components and omitting costly parts from preceding optical scanners. This work details the design, prototyping, and basic commissioning of the Duke Integrated-lens Optical Scanner (DIOS).
The convex scanning geometry was designed in ScanSim, an in-house Monte Carlo optical ray-tracing simulation. ScanSim parameters were used to build a 3D rendering of a convex 'solid tank' for optical CT, which is capable of collimating a point light source into a telecentric geometry without significant quantities of refractive-index-matched fluid. The model was 3D printed, processed, and converted into a negative mold via rubber casting to produce a transparent polyurethane scanning tank. The DIOS was assembled with the solid tank, a 3 W red LED light source, a computer-controlled rotation stage, and a 12-bit CCD camera. Initial optical phantom studies show negligible spatial inaccuracies in 2D projection images and 3D tomographic reconstructions. A PRESAGE 3D dose measurement for a 4-field box treatment plan from Eclipse shows 95% of voxels passing gamma analysis at 3%/3mm criteria. Gamma analysis between tomographic images of the same dosimeter in the DIOS and DLOS systems shows 93.1% agreement at 5%/1mm criteria. From this initial study, the DIOS has demonstrated promise as an economically viable optical CT scanner. However, further improvements will be necessary to fully develop this system into an accurate and reliable tool for advanced QA.
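A simplified 1D illustration of the gamma-index test (e.g. 3%/3mm) used in the comparisons above; a clinical implementation operates on 3D dose grids and interpolates the evaluated distribution, which this brute-force sketch omits, and all thresholds shown are the generic criteria rather than values specific to this study.

```python
# Brute-force 1D gamma analysis with global dose normalisation.
import numpy as np

def gamma_pass_rate(ref, ev, dx, dose_tol=0.03, dist_tol=3.0):
    """Fraction of reference points with gamma <= 1 for a dose_tol/dist_tol criterion."""
    x = np.arange(len(ref)) * dx
    d_norm = dose_tol * ref.max()                 # global dose-difference criterion
    gammas = np.empty(len(ref))
    for i, (xi, di) in enumerate(zip(x, ref)):
        dist2 = ((x - xi) / dist_tol) ** 2        # squared distance-to-agreement term
        dose2 = ((ev - di) / d_norm) ** 2         # squared dose-difference term
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return np.mean(gammas <= 1.0)

# usage: evaluated profile shifted by 1 mm and scaled by 1% relative to the reference
x = np.arange(0, 100, 1.0)                        # 1 mm grid
ref = np.exp(-((x - 50) / 20) ** 2)
ev = 1.01 * np.exp(-((x - 51) / 20) ** 2)
print(gamma_pass_rate(ref, ev, dx=1.0))           # expected: pass rate close to 1.0
```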
Pre-clinical animal studies are used as a conventional means of translational research, as a midpoint between in-vitro cell studies and clinical implementation. However, modern small animal radiotherapy platforms are primitive in comparison with conventional linear accelerators. This work also investigates a series of 3D printed tools to expand the treatment capabilities of the X-RAD 225Cx orthovoltage irradiator, and applies them to a feasibility study of hippocampal avoidance in rodent whole-brain radiotherapy.
As an alternative material to lead, a novel 3D-printable tungsten-composite ABS plastic, GMASS, was tested to create precisely shaped blocks. Film studies show that virtually all primary radiation at 225 kVp can be attenuated by GMASS blocks of 0.5 cm thickness. State-of-the-art software, BlockGen, was used to create custom hippocampus-shaped blocks from medical image data, for any possible axial treatment field arrangement. A custom 3D printed bite block was developed to immobilize and position a supine rat for optimal hippocampal conformity. An immobilized rat CT with digitally inserted blocks was imported into the SmART-Plan Monte Carlo simulation software to determine the optimal beam arrangement. Protocols with 4 and 7 equally spaced fields were considered as viable treatment options, featuring improved hippocampal conformity and whole-brain coverage when compared to prior lateral-opposed protocols. Custom rodent-morphic PRESAGE dosimeters were developed to accurately reflect these treatment scenarios, and a 3D dosimetry study was performed to confirm the SmART-Plan simulations. Measured doses indicate significant hippocampal sparing and moderate whole-brain coverage.