1000 results for "Escoamento multifásico - Métodos de simulação" (Multiphase flow - Simulation methods)
Analysis of incompressible flows using large-eddy simulation and mesh adaptation
Abstract:
This study presents numerical solutions of engineering problems in Computational Fluid Dynamics involving viscous fluids in incompressible, isothermal and non-isothermal flows, in laminar and turbulent regimes, possibly involving mass transport. The main objectives of this work are the formulation and application of an automatic mesh-adaptation strategy and the inclusion of eddy-viscosity models, integrated with an algorithm used to simulate two- and three-dimensional viscous fluid flows on unstructured meshes. The study aims to increase knowledge about the structures of turbulent flows and to investigate the physical effects on the transport of scalar quantities, providing, through automatic mesh-adaptation techniques, accurate numerical solutions at an optimized computational cost. The filtered equations of mass conservation, momentum balance, and scalar transport are used to simulate the large scales of turbulent flows, and two eddy-viscosity models are used to represent the subgrid scales: the classical Smagorinsky model and the dynamic model. To obtain accurate numerical solutions, an automatic mesh-adaptation strategy is developed and implemented, carried out simultaneously and interactively with the solution procedure. The study of the behavior of the numerical solution is based on error indicators, with the purpose of mapping the regions where certain physical phenomena of the flow occur with greater intensity and of applying a mesh-adaptation scheme in these regions. The adaptation consists of refinement/coarsening processes and a Laplacian smoothing process. The procedures for implementing the eddy-viscosity models and the automatic mesh-adaptation strategy are incorporated into a three-dimensional finite element code, which uses linear tetrahedral elements. Applications to viscous, incompressible, isothermal and non-isothermal flows in laminar and turbulent regimes are simulated, and the results are presented and compared with those obtained numerically or experimentally by other authors.
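For context, the classical Smagorinsky closure cited in this abstract models the subgrid-scale eddy viscosity from the resolved strain rate. A standard textbook formulation (given here for reference, not reproduced from the thesis) is:

```latex
% Classical Smagorinsky subgrid-scale model (standard form; the constant C_s
% and the filter width \Delta are modeling choices, not values from the thesis):
\nu_t = (C_s \Delta)^2 \,\lvert \bar{S} \rvert, \qquad
\lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
\bar{S}_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j}
             + \frac{\partial \bar{u}_j}{\partial x_i}\right)
```

In the dynamic variant mentioned in the abstract, the coefficient C_s is not prescribed but computed locally from the resolved scales using a test filter (the Germano identity).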
Abstract:
The transition to turbulence in a stably stratified mixing layer is of great interest for a variety of geophysical and engineering problems. This transition is controlled by the competition between the shear of the base flow and the buoyancy forces due to the density stratification of the environment. Buoyancy acts on the flow by reducing the growth rate of perturbations and delaying the transition to turbulence, while the shear supplies kinetic energy to the flow. The present work investigates the nature of the transition to turbulence in a stably stratified temporal mixing layer through Direct Numerical Simulation (DNS) and Large-Eddy Simulation (LES). The purpose of the investigation is to analyze the effect of stable stratification on the development of the Kelvin-Helmholtz (K-H) instability and on the formation of the longitudinal vortices that appear after the saturation of the primary K-H billows. In addition, the development of secondary K-H instabilities in the baroclinic layer is examined using DNS and LES. The three-dimensional numerical tests are carried out with different types of initial conditions for the transverse velocity fluctuation, while a forced condition is used for the other two velocity fluctuation components. In particular, the effect of the transverse length of the computational domain is tested by employing different lengths, while the same dimensions are used for the longitudinal and vertical directions. The two-dimensional simulations show that increasing the stratification inhibits the pairing process, reduces the energy exchange between the K-H billows and the flow, attenuates the K-H instability, and decreases the vertical mass flux. The secondary K-H instability is identified in the baroclinic layer for Re ≥ 500 when pairing of the simulated vortices occurs. In the simulation at Re = 500, the secondary K-H instability appears both for Ri = 0.07 (weak stratification) and for Ri = 0.167 (strong stratification). The three-dimensional results show that the longitudinal vortices are clearly formed in the layer at Ri = 0. In the stratified cases, on the other hand, the vortices are weakened due to the longitudinal density gradient, which decreases the vorticity in the K-H billows while increasing it in the region between them.
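For orientation, the Richardson numbers quoted above compare stabilizing buoyancy with destabilizing shear. One common convention for a temporal mixing layer (definitions vary between studies, so this may differ from the thesis's exact choice by constant factors) is:

```latex
% Bulk Richardson number of a stratified mixing layer: g is gravity,
% \Delta\rho the density difference across the layer, \delta_\omega the
% vorticity thickness, \rho_0 a reference density, and \Delta U the
% velocity difference between the two streams.
Ri = \frac{g\,\Delta\rho\,\delta_{\omega}}{\rho_0\,(\Delta U)^2}
```

Ri = 0 recovers the unstratified layer; increasing Ri strengthens buoyancy relative to shear, which is why pairing and the K-H billows are progressively suppressed at higher stratification.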
Abstract:
Outliers are observations that appear to be inconsistent with the others. Also called atypical, extreme, or aberrant values, these inconsistencies can be caused by policy changes or economic crises, unexpected cold or heat waves, or measurement or typing errors, among others. Outliers are not necessarily incorrect values, but, when they come from measurement or typing errors, they can distort the results of an analysis and lead the researcher to mistaken conclusions. The objective of this work is to study and compare different methods for detecting abnormalities in price series of the Consumer Price Index (IPC), calculated by the Brazilian Institute of Economics (IBRE) of the Getulio Vargas Foundation (FGV). The IPC measures the price variation of a fixed basket of goods and services that make up the usual expenses of families with income between 1 and 33 monthly minimum wages, and it is used mainly as a reference index for assessing consumer purchasing power. Besides the method currently used at IBRE by the price analysts, the methods considered in this study are: variations of the IBRE Method, the Boxplot Method, the SIQR Boxplot Method, the Adjusted Boxplot Method, the Resistant Fences Method, the Quartile Method, the Modified Quartile Method, the Median Absolute Deviation Method, and Tukey's Algorithm. These methods were applied to data from the municipalities of Rio de Janeiro and São Paulo. In order to analyze the performance of each method, the true extreme values must be known in advance. Therefore, in this work, the analysis was carried out assuming that the prices discarded or altered by the analysts in the review process are the true outliers. The IBRE Method is highly correlated with the prices altered or discarded by the analysts, so the assumption that these prices are the true extreme values may influence the results, favoring this method over the others. Nevertheless, under this assumption it is possible to compute two measures by which the methods are evaluated. The first is the method's hit rate, which gives the proportion of true outliers detected. The second is the number of false positives produced by the method, which indicates how many values had to be flagged for one true outlier to be detected. The higher the hit rate and the smaller the number of false positives, the better the method's performance. It was thus possible to build a performance ranking of the methods, identifying the best among those analyzed. For the municipality of Rio de Janeiro, some of the variations of the IBRE Method performed as well as or better than the original method; for the municipality of São Paulo, the IBRE Method showed the best performance. In future work, we expect to test the methods on data obtained by simulation, or on benchmark datasets widely used in the literature, so that the assumption that the prices discarded or altered by the analysts in the review process are the true outliers does not interfere with the results.
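To make two of the generic detection rules listed above concrete, here is a minimal sketch of the Boxplot and Median Absolute Deviation criteria (a generic illustration; it is neither the IBRE Method nor the exact variants evaluated in the thesis):

```python
import numpy as np

def boxplot_outliers(prices, k=1.5):
    """Classical boxplot rule: flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(prices, [25, 75])
    iqr = q3 - q1
    return (prices < q1 - k * iqr) | (prices > q3 + k * iqr)

def mad_outliers(prices, threshold=3.5):
    """Median Absolute Deviation rule: flag values whose robust z-score exceeds
    the threshold. The 0.6745 factor makes the MAD consistent with the standard
    deviation under normality."""
    median = np.median(prices)
    mad = np.median(np.abs(prices - median))
    robust_z = 0.6745 * (prices - median) / mad
    return np.abs(robust_z) > threshold

prices = np.array([10.2, 10.5, 10.4, 10.3, 25.0, 10.6, 10.1])
print(boxplot_outliers(prices))  # only the 25.0 entry is flagged
print(mad_outliers(prices))      # same result for this toy series
```

Both rules are resistant to the outliers themselves because they are built from quartiles and medians rather than means, which is precisely why they are natural candidates for price-critique automation.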
Abstract:
Natural ventilation is an efficient bioclimatic strategy, one that provides thermal comfort, healthiness, and cooling to the building. However, disregard for environmental quality, the uncertainties involved in the phenomenon, and the popularization of artificial climate-control systems serve as an excuse for those who neglect the benefits of passive cooling. Unfamiliarity with the concept may be lessened if ventilation is considered at every step of the project, especially in the initial phase, in which decisions bear a great impact on the construction process. The tools available to quantify the impact of design decisions consist basically of air-change rate calculations or computational fluid dynamics simulations, commonly dubbed CFD, both somewhat removed from the project's execution and ill-suited to parametric studies. Thus, we chose to verify, through computer simulation, the representativeness of the results of a simplified air-change rate calculation method, as well as to make it more compatible with the questions relevant to the first phases of the design process. The case object consists of a model resulting from the recommendations of the Código de Obras de Natal/RN, customized according to NBR 15220. The study has shown the complexity of incorporating a CFD tool into the process and the need for a method capable of generating data at a rate compatible with the flow of ideas that are generated and discarded during the project's development. At the end of our study, we discuss the concessions necessary to carry out the simulations, the applicability and limitations of both the tools used and the method adopted, and the representativeness of the results obtained.
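For reference, the simplified air-change rate mentioned above is conventionally expressed in air changes per hour (ACH); the standard definition (not a formula quoted from the dissertation) is:

```latex
% Air changes per hour: Q is the ventilation airflow rate [m^3/h]
% and V is the ventilated room volume [m^3].
\mathrm{ACH} = \frac{Q}{V}
```

Its appeal for early design phases is exactly what the abstract argues: it can be recomputed as fast as design alternatives are proposed, unlike a full CFD run.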
Abstract:
This Master's degree dissertation makes a comparative study between internal air temperature data simulated with the thermal simulation application DesignBuilder 1.2 and data registered in loco with HOBO® Temp Data Loggers, in a Social Housing Prototype (HIS) located at the Central Campus of the Federal University of Rio Grande do Norte (UFRN). The prototype was designed and built following thermal comfort strategies recommended for the local climate where the study was carried out, and constructed with cellular concrete panels by Construtora DoisA, a collaborator of the research project REPESC - Rede de Pesquisa em Eficiência Energética de Sistemas Construtivos (Research Network on Energy Efficiency of Construction Systems), an integral part of the Habitare program. The methodology carefully examined the problem and reviewed the bibliography, analyzing the major aspects related to computer simulations of the thermal performance of buildings, such as the climate characterization of the region under study and the users' thermal comfort demands. The DesignBuilder 1.2 application was used as the simulation tool, and theoretical alterations were carried out in the prototype and then compared against the thermal comfort parameters adopted, based on the area's current technical literature. The comparative studies were analyzed through graphical outputs for a better understanding of air temperature amplitudes and thermal comfort conditions. The data used for the characterization of the external air temperature were obtained from the Test Reference Year (TRY) defined for the study area (Natal-RN). The author also performed comparative studies between the TRY and data recorded in the years 2006, 2007 and 2008 at the Davis Precision Station weather station, located at the Instituto Nacional de Pesquisas Espaciais (INPE-CRN, National Institute of Space Research), in an area neighboring UFRN's Central Campus. The conclusions drawn from the comparisons between the computer simulations and the local records obtained from the studied prototype point out that simulating naturally ventilated buildings is quite a complex task, owing mainly to the application's limitations, the complexity of airflow phenomena, the influence of the comfort conditions of the surrounding areas, and the climate records. Lastly, regarding the use of DesignBuilder 1.2 in the present study, one may conclude that it is a good tool for computer simulation, although it needs some adjustments to improve the reliability of its use. Continued research is needed, considering the occupants' use of the prototype as well as the thermal loads of the equipment, in order to check sensitivity.
Abstract:
The assessment of building thermal performance is often carried out using HVAC energy consumption data, when available, or measurements of thermal comfort variables for free-running buildings. Both types of data can be determined by monitoring or by computer simulation. The assessment based on thermal comfort variables is the most complex because it depends on the determination of the thermal comfort zone. For these reasons, this master's thesis explores methods of building thermal performance assessment using thermal comfort variables simulated with the DesignBuilder software. The main objective is to contribute to the development of methods to support architectural decisions during the design process, as well as energy and sustainability rating systems. The research method consists of selecting thermal comfort methods and modeling them in spreadsheets, with output charts developed to streamline the analyses, which are then used to assess the simulation results of low-cost house configurations. The house models consist of a base case, which has already been built, and variations in thermal transmittance, absorptance, and shading. The simulation results are assessed with each thermal comfort method in order to identify their sensitivity. The final results show the limitations of the methods, the importance of a method that considers thermal radiation and wind speed, and the contribution of the proposed chart.
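As one concrete example of the kind of comfort-zone determination such spreadsheet models can encode, here is a sketch using the ASHRAE 55 adaptive model for naturally ventilated buildings (chosen purely as an illustration; the thesis may evaluate different comfort methods):

```python
def adaptive_comfort_band(t_out_mean, acceptability=80):
    """ASHRAE 55 adaptive comfort model for naturally ventilated buildings.
    t_out_mean: prevailing mean outdoor air temperature (deg C), valid ~10-33.5.
    Returns (lower, upper) indoor operative temperature limits in deg C."""
    t_comf = 0.31 * t_out_mean + 17.8                # neutral temperature
    half_band = 3.5 if acceptability == 80 else 2.5  # 80% vs 90% acceptability
    return t_comf - half_band, t_comf + half_band

# e.g. a warm month with a 27 deg C prevailing mean outdoor temperature:
lo, hi = adaptive_comfort_band(27.0)
print(f"80% acceptability band: {lo:.1f} to {hi:.1f} deg C")  # ~22.7 to 29.7
```

A spreadsheet or script then simply counts the simulated hours falling inside the band, which is the sensitivity comparison the abstract describes.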
Abstract:
Due to reservoir complexity and significantly large reserves, heavy oil recovery has become one of the major challenges of the oil industry. Thus, thermal methods have been widely used as a strategic approach to improve heavy oil recovery. These methods improve oil displacement through viscosity reduction, enabling oil production in fields not considered commercial under conventional recovery methods. Among the thermal processes, steam flooding is the most used today. One consequence of this process is gravity segregation, caused by the density difference between the reservoir and injected fluids. This phenomenon may be influenced by the presence of reservoir heterogeneities. Since most studies are carried out in homogeneous reservoirs, and since most oil reservoirs are heterogeneous, more detailed studies of the effects of heterogeneities during steam flooding are necessary. This work presents a study of reservoir heterogeneities and their influence on gravity segregation during the steam flooding process. Heterogeneous reservoirs with physical characteristics similar to those found in the Brazilian Northeast basins were analyzed. The simulations were carried out with the commercial simulator STARS by CMG (Computer Modeling Group), version 2007.11. Heterogeneities were modeled as lower-permeability layers. The results showed that the presence of low-permeability barriers can improve oil recovery and reduce the effects of gravity segregation, depending on the location of the heterogeneities. The presence of these barriers also increased the recovered fraction even with a reduced injected steam rate.
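The viscosity reduction that drives these thermal methods is often described by an Arrhenius-type correlation; a common form (given here for context, not taken from the thesis) is:

```latex
% Andrade/Arrhenius-type viscosity-temperature correlation for crude oil,
% with A and B fitted per crude and T the absolute temperature:
\mu(T) = A\, e^{B/T}
```

Because B is large for heavy crudes, the modest temperature rise delivered by injected steam can reduce the viscosity by orders of magnitude, which is what makes the displacement commercially viable.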
Abstract:
In Brazil and around the world, oil companies are seeking, and expect the development of, new technologies and processes that can increase the oil recovery factor in mature reservoirs in a simple and inexpensive way. Recent research has developed a new process called Gas Assisted Gravity Drainage (GAGD), classified as a gas-injection IOR method. The process, which is undergoing pilot testing in the field, has been extensively studied through physical scaled models and laboratory core floods, due to its high oil recoveries relative to other gas-injection IOR methods. It consists of injecting gas at the top of a reservoir through horizontal or vertical injector wells and displacing the oil, taking advantage of the natural gravity segregation of the fluids, toward a horizontal producer well placed at the bottom of the reservoir. To study this process, a homogeneous reservoir and a multi-component fluid model with characteristics similar to light oils of Brazilian fields were modeled in a compositional simulator in order to optimize the operational parameters. The process was simulated in GEM (CMG, version 2009.10). The operational parameters studied were the gas injection rate, the type of injected gas, and the locations of the injector and producer wells. The presence of a water drive was also studied. The results showed that the maximum vertical spacing between the two wells yielded the maximum oil recovery in GAGD, and that the largest injection rates produced the largest recovery factors. This parameter controls the velocity of the injected gas front and determines whether or not the gravitational force dominates the oil recovery process. Natural gas performed better than CO2, and the presence of an aquifer in the reservoir had little influence on the process. The economic analysis found that injecting natural gas is more economically beneficial than injecting CO2.
Abstract:
This work presents the development of a model and computer simulation of a sucker-rod pumping system. The model takes into account the well geometry, the flow through the tubing, the dynamic behavior of the rod string, and the use of an induction motor model. The rod string was modeled using lumped parameters, allowing systems of ordinary differential equations to be used to simulate its behavior.
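A lumped-parameter rod string of this kind reduces to a chain of mass-spring-damper elements driven at the surface; a minimal sketch of how such a system can be integrated (illustrative values and a simplified sinusoidal drive, not the dissertation's actual model or parameters):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rod string discretized into n lumped masses connected by springs and dampers
# (illustrative values; not the parameters used in the dissertation).
n = 4          # number of lumped elements
m = 50.0       # mass per element [kg]
k = 2.0e5      # spring stiffness between elements [N/m]
c = 300.0      # viscous damping per element [N.s/m]

def surface_position(t):
    """Prescribed polished-rod motion at the surface: a simplified sinusoid."""
    return 0.5 * np.sin(2.0 * np.pi * 0.2 * t)  # 0.5 m amplitude at 0.2 Hz

def rhs(t, y):
    """State y = [x_1..x_n, v_1..v_n]. Each mass is pulled by the element
    above (the surface drives the first mass) and below (free at the pump)."""
    x, v = y[:n], y[n:]
    a = np.zeros(n)
    for i in range(n):
        upper = surface_position(t) if i == 0 else x[i - 1]
        force = k * (upper - x[i]) - c * v[i]     # spring from above + damping
        if i < n - 1:
            force += k * (x[i + 1] - x[i])        # spring from below
        a[i] = force / m
    return np.concatenate([v, a])

sol = solve_ivp(rhs, (0.0, 30.0), np.zeros(2 * n), max_step=0.01)
print("plunger displacement at t = 30 s:", sol.y[n - 1, -1])
```

The lumped approach trades the wave-equation description of the string for a small ODE system, which is what makes coupling it to tubing flow and a motor model tractable.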
Abstract:
The world has many types of oil, covering a range of density and viscosity values; these characteristics identify whether an oil is light, heavy, or even ultra-heavy. The occurrence of heavy oil has increased significantly, pointing to a need for greater investment in the exploitation of these deposits and, therefore, for new methods to recover this oil. Economic forecasts indicate that, by 2025, heavy oil will be the main source of fossil energy in the world. One such method is VAPEX (vapor extraction), a recovery method that consists of two horizontal wells parallel to each other, one injector and one producer, in which a vaporized solvent is injected in order to reduce the viscosity of the oil or bitumen, facilitating its flow to the producer well. This method was proposed by Dr. Roger Butler in 1991. The importance of this study lies in analyzing how some operational and reservoir parameters influence the VAPEX process, in terms of cumulative oil production, recovery factor, injection rate, and production rate. Parameters such as injection rate, well spacing, type of injected solvent, vertical permeability, and oil viscosity were addressed. The results showed that oil viscosity is the parameter with a statistically significant influence; that heptane, as the injected solvent, yielded greater oil recovery than the other solvents considered; and that, regarding well spacing, a greater distance between the wells produced more oil.
Abstract:
Petroleum production pipeline networks are inherently complex, usually decentralized systems. Strict operational constraints are applied in order to prevent serious problems such as environmental disasters or production losses. This work describes an intelligent system to support decisions in the operation of these networks, proposing a schedule for the pumps of the transfer stations that compose them. The intelligent system is formed by blocks that interconnect to process the information and generate suggestions to the operator. The main block of the system uses fuzzy logic to provide rule-based control incorporating knowledge from experts. Tests performed in the simulation environment provided good results, indicating the applicability of the system in a real oil production environment. The use of the schedule proposed by the system allows transfers in the network to be prioritized and flows to be programmed.
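To make the rule-based fuzzy block concrete, here is a minimal Mamdani-style sketch with hypothetical variables tank_level and pump_speed (the system's actual rule base and membership functions are not given in the abstract):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pump_speed(tank_level):
    """Toy Mamdani inference: two expert rules mapping a tank level (0-100 %)
    to a pump speed setpoint (0-100 %), defuzzified by weighted average.
    Rule 1: IF level is LOW  THEN speed is SLOW.
    Rule 2: IF level is HIGH THEN speed is FAST."""
    low = tri(tank_level, 0.0, 20.0, 60.0)     # membership of "level is LOW"
    high = tri(tank_level, 40.0, 80.0, 100.0)  # membership of "level is HIGH"
    slow_centroid, fast_centroid = 20.0, 90.0  # representative output values
    total = low + high
    if total == 0.0:
        return 0.0
    return (low * slow_centroid + high * fast_centroid) / total

for level in (10, 50, 90):
    print(level, "->", round(pump_speed(level), 1))  # 20.0, 55.0, 90.0
```

The appeal of this formulation for operator support is that each rule remains readable as a sentence an expert would actually state, while the inference still yields a smooth, continuous setpoint.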
Abstract:
The present study provides a methodology that gives a predictive character to computer simulations based on detailed models of the geometry of a porous medium. We use the FLUENT software to investigate the flow of a viscous Newtonian fluid through a random fractal medium, which simplifies a two-dimensional disordered porous medium representing a petroleum reservoir. This fractal model is formed by obstacles of various sizes, whose size-distribution function follows a power law whose exponent is defined as the fractal dimension of fractionation, Dff, of the model, characterizing the fragmentation process of these obstacles. The obstacles are randomly placed in a rectangular channel. The modeling process incorporates modern concepts, such as scaling laws, to analyze the influence of the heterogeneity found in the porosity and permeability fields, in such a way as to characterize the medium in terms of its fractal properties. This procedure allows us to numerically analyze measurements of the permeability k and the drag coefficient Cd, and to propose power-law relationships for these properties under various modeling schemes. The purpose of this research is to study the variability introduced by these heterogeneities; the velocity field and other details of the viscous fluid dynamics are obtained by numerically solving the continuity and Navier-Stokes equations at pore level, and we observe how the fractal dimension of fractionation of the model affects its hydrodynamic properties. Two classes of models were considered: models with constant porosity (MPC) and models with varying porosity (MPV). The results allowed us to find numerical relationships between the permeability, the drag coefficient, and the fractal dimension of fractionation of the medium. Based on these numerical results, we propose scaling relations and algebraic expressions involving the relevant parameters of the phenomenon. Analytical equations were determined for Dff as a function of the geometrical parameters of the models. We also found that the permeability and the drag coefficient are inversely proportional to each other. The difference in behavior is most striking in the MPV class of models; that is, the fact that the porosity varies in these models is an additional factor that plays a significant role in the flow analysis. Finally, the results proved satisfactory and consistent, demonstrating the effectiveness of the methodology for all the applications analyzed in this study.
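A power-law size distribution of the kind described can be reproduced by inverse-transform sampling; a small sketch under the assumption that the cumulative number of obstacles with radius at least r scales as r^(-Dff) (names and bounds here are illustrative, not the thesis's actual generator):

```python
import numpy as np

def powerlaw_radii(n, dff, r_min, r_max, seed=0):
    """Sample n obstacle radii whose cumulative distribution follows
    N(R >= r) ~ r**(-dff), truncated to [r_min, r_max], by inverting
    the truncated CDF (an illustrative generator, not the thesis's)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    a = r_min ** (-dff)   # survival-function value at r_min
    b = r_max ** (-dff)   # survival-function value at r_max
    # invert the truncated CDF F(r) = (a - r**(-dff)) / (a - b):
    return (a - u * (a - b)) ** (-1.0 / dff)

radii = powerlaw_radii(10_000, dff=1.6, r_min=0.01, r_max=1.0)
print(radii.min(), radii.max())
print("total obstacle area:", np.pi * np.sum(radii**2))  # 2-D solid fraction
```

Varying dff shifts mass between many small obstacles and a few large ones at fixed bounds, which is exactly the lever the study uses to probe how fragmentation affects k and Cd.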
Abstract:
Pumping through progressing cavity systems has been increasingly employed in the petroleum industry because of its capacity to lift highly viscous oils or fluids with high concentrations of sand or other solid particles. A Progressing Cavity Pump (PCP) consists, basically, of a rotor, a metallic device similar to an eccentric screw, and a stator, a steel tube internally covered by a double helix, which may be rigid or deformable/elastomeric. In general, it is subjected to a combination of the well pressure and the pressure generated by the pumping process itself. In elastomeric PCPs, this combined load compresses the stator and creates, or enlarges, the clearance between the rotor and the stator, thus reducing the sealing between their cavities. Such opening of the sealing region produces what is known as fluid slip, or slippage, reducing the efficiency of the PCP pumping system. This research therefore aims to develop a transient three-dimensional computational model that, based on single-lobe PCP kinematics, is able to simulate the fluid-structure interaction that occurs in the interior of metallic and elastomeric PCPs. The main goal is to evaluate the dynamic characteristics of PCP efficiency based on detailed, instantaneous information on the velocity, pressure, and deformation fields in their interior. To reach these goals (development and use of the model), it was also necessary to develop a methodology for generating dynamic, mobile, and deformable computational meshes representing the fluid and structural regions of a PCP. This intermediate step proved to be the biggest challenge in building and running the computational model, owing to the complex kinematics and critical geometry of this type of pump (different helix angles between rotor and stator, as well as large length-scale aspect ratios). The dynamic mesh generation and the simultaneous evaluation of the deformations suffered by the elastomer are carried out by subroutines written in Fortran 90 that interact dynamically with the CFX/ANSYS fluid dynamics software. Since a linear elastic structural model is employed to evaluate the elastomer deformations, no CAE package for structural analysis is required; however, an initial proposal for dynamic simulation using hyperelastic models through the ANSYS software is also presented. The results produced with the present methodology (mesh generation, flow simulation in metallic PCPs, and simulation of fluid-structure interaction in elastomeric PCPs) are validated by comparison with experimental results reported in the literature. It is expected that the development and application of such a computational model may provide better insight into the dynamics of the flow within metallic and elastomeric PCPs, so that better control systems may be implemented in the PCP artificial lift area.
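As an order-of-magnitude picture of the slippage mechanism described above, the leakage through a narrow rotor-stator clearance is often idealized as pressure-driven flow between parallel plates (a classical lubrication estimate, not the thesis's 3-D fluid-structure model):

```latex
% Poiseuille slit-flow estimate of slippage through a clearance of height h,
% width w, and sealing length L, under a pressure differential \Delta p,
% for a fluid of viscosity \mu:
Q_{slip} = \frac{w\,h^{3}\,\Delta p}{12\,\mu\,L}
```

The cubic dependence on h explains why even a small elastomer compression that enlarges the clearance sharply degrades volumetric efficiency, and why high-viscosity fluids (large mu) slip less.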