908 results for Flow injection analysis
Abstract:
Waterflooding is a technique widely applied in the oil industry. The injected water displaces oil toward the producer wells and avoids reservoir pressure decline. However, suspended particles in the injected water may plug pore throats, causing formation damage (permeability reduction) and injectivity decline during waterflooding. When injectivity declines, the injection pressure must be increased in order to maintain the water injection rate. Therefore, a reliable prediction of injectivity decline is essential in waterflooding projects. In this dissertation, a simulator based on the traditional porous-medium filtration model (including deep bed filtration and external filter cake formation) was developed and applied to predict injectivity decline in perforated wells from history data. Experimental modeling of injectivity decline in open-hole wells is also discussed. The injectivity model showed good agreement with field data and can be used to support the planning of injection-well stimulation.
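The deep-bed-filtration/external-cake model referenced in this abstract is often summarized by piecewise-linear growth of the dimensionless impedance (the inverse of normalized injectivity) with pore volumes injected. A minimal sketch of that idea follows; the slopes m1, m2 and the transition point pvi_tr are illustrative assumptions, not fitted field values:

```python
def impedance(pvi, m1=0.5, m2=5.0, pvi_tr=100.0):
    """Dimensionless impedance J = II(0)/II(t) versus pore volumes injected.

    During deep bed filtration J grows with slope m1; once the external
    filter cake starts to form (at pvi_tr), J grows with the steeper
    slope m2.  All parameter values here are illustrative only.
    """
    if pvi <= pvi_tr:
        return 1.0 + m1 * pvi
    return 1.0 + m1 * pvi_tr + m2 * (pvi - pvi_tr)

def injectivity_decline(pvi):
    """Normalized injectivity index II(t)/II(0) = 1/J."""
    return 1.0 / impedance(pvi)
```

The two slopes are what a history-match against field injection data would calibrate; the steeper post-transition slope is why cake formation dominates late-time injectivity loss.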
Abstract:
In the Brazilian Northeast there are heavy-oil reservoirs that use steam flooding as a recovery method. This process reduces oil viscosity, increasing the oil's mobility and consequently its recovery. Steam injection is a thermal method and can be applied in continuous or cyclic form. Cyclic steam stimulation (CSS) can be repeated several times, with each cycle consisting of three stages: steam injection, soaking time, and a production phase. CSS becomes less efficient as the number of cycles increases. Thus, this work aims to study the influence of compositional models in cyclic steam injection and the effects of parameters such as injection rate, steam quality, and temperature of the injected steam, analyzing the influence of the number of pseudocomponents on oil rate, cumulative oil, oil recovery, and simulation time. The compositional models were compared against the three-phase, three-component fluid model known as black oil. Simulations were performed with commercial software (CMG) for a homogeneous reservoir with characteristics similar to those found in the Brazilian Northeast. It was observed that increasing the number of components increases the simulation time. As for the analyzed parameters, the steam rate and steam quality influence cumulative oil and oil recovery. The number of components had little influence on oil recovery but did influence gas production.
Abstract:
The world has many types of oil spanning a range of density and viscosity values, characteristics used to classify an oil as light, heavy, or even ultraheavy. The occurrence of heavy oil has increased significantly, pointing to a need for greater investment in the exploitation of these deposits and therefore for new methods to recover this oil. Economic forecasts indicate that by 2025 heavy oil will be the main source of fossil energy in the world. One such method is vapor extraction (VAPEX), a recovery process that uses two horizontal wells parallel to each other, one injector and one producer, in which a vaporized solvent is injected in order to reduce the viscosity of the oil or bitumen, facilitating its flow to the producing well. This method was proposed by Dr. Roger Butler in 1991. The aim of this study is to analyze how operational and reservoir parameters influence the VAPEX process, as measured by cumulative oil produced, recovery factor, and production rate. Parameters such as injection rate, spacing between wells, type of injected solvent, vertical permeability, and oil viscosity were addressed. The results showed that oil viscosity is the parameter with a statistically significant influence; the choice of heptane as the injected solvent yielded a greater oil recovery compared with the other solvents studied; and a greater spacing between the wells produced more oil.
MINING AND VERIFICATION OF TEMPORAL EVENTS WITH APPLICATIONS IN COMPUTER MICRO-ARCHITECTURE RESEARCH
Abstract:
Computer simulation programs are essential tools for scientists and engineers to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that will be discovered by developers in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. Therefore, the main goal of this research work is to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is First-Order Logic Constraint Specification Language (FOLCSL) that enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part consists of mining the temporal flow of events using a newly developed representation called State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. 
This work improves the computer architecture research and verification processes as shown by the case studies and experiments that have been conducted.
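The invariant-checking half of the framework reads an event trace and signals whether the specified invariants hold. Purely as an illustrative analogue (this is not the actual FOLCSL syntax or its synthesized verifier; the event names and trace format are assumptions), a checker for the invariant "every issued instruction is eventually committed" over a micro-architecture trace might look like:

```python
def check_issue_commit(trace):
    """Check the invariant: every 'issue' event is eventually followed by
    a matching 'commit' event for the same instruction id.

    trace: list of (time, kind, instr_id) tuples in time order.
    Returns the sorted ids of instructions that violate the invariant.
    """
    pending = {}                      # instr_id -> time of issue
    for time, kind, iid in trace:
        if kind == "issue":
            pending[iid] = time
        elif kind == "commit":
            pending.pop(iid, None)    # invariant satisfied for this id
    # anything still pending at end of trace was never committed
    return sorted(pending)

trace = [(0, "issue", 1), (1, "issue", 2), (3, "commit", 1)]
violations = check_issue_commit(trace)   # instruction 2 never committed
```

The point of a specification language like FOLCSL is that such checkers are synthesized automatically from a first-order formula rather than hand-written as above.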
Abstract:
Climate change, intensive use, and population growth are threatening the availability of water resources. New sources of water, better knowledge of existing ones, and improved water management strategies are of paramount importance. Groundwater is often considered a primary water source due to its advantages in terms of quantity, spatial distribution, and natural quality. Remote sensing techniques afford scientists a unique opportunity to characterize landscapes in order to assess groundwater resources, particularly in tectonically influenced areas. Aquifers in volcanic basins are considered the most productive aquifers in Latin America. Although topography is considered the primary driving force for groundwater flow in mountainous terrains, tectonic activity increases the complexity of these groundwater systems by altering the integrity of sedimentary rock units and the overlying drainage networks. Structural controls affect the primary hydraulic properties of the rock formations, developing barriers to flow in some cases and zones of preferential infiltration and subterranean flow in others. The study area is the Quito Aquifer System (QAS) in Ecuador. The characterization of its hydrogeology started with a lineament analysis based on a combined remote sensing and digital terrain analysis approach. The application of classical tools for regional hydrogeological evaluation and shallow geophysical methods was useful to evaluate the impact of faulting and fracturing on the aquifer system. Given the spatial extension of the area and the complexity of the system, two levels of analysis were applied in this study. At the regional level, a lineament map was created for the QAS. Relationships between fractures, faults, and lineaments and the configuration of the groundwater flow on the QAS were determined.
At the local level, in the Plateaus region of the QAS, a detailed lineament map was obtained by using high-spatial-resolution satellite imagery and an aspect map derived from a digital elevation model (DEM). This map was complemented by the analysis of morphotectonic indicators and shallow geophysics that characterize fracture patterns. The development of the groundwater flow system was studied, drawing upon data pertaining to the aquifer system's physical characteristics and topography. Hydrochemistry was used to ascertain the groundwater evolution and verify the correspondence of the flow patterns proposed in the flow system analysis. Isotopic analysis was employed to verify the origin of the groundwater. The results of this study show that tectonism plays a very important role in the hydrology of the QAS. The results also demonstrate that faults strongly influence the topographic characteristics of the QAS and, subsequently, the configuration of the groundwater flow. Moreover, for the Plateaus region, the results demonstrate that the aquifer flow systems are affected by secondary porosity. This is a new conceptualization of the functioning of the aquifers of the QAS that will significantly contribute to the development of better strategies for the management of this important water resource.
Abstract:
Numerically computed engine performance of a nominally two-dimensional radical-farming scramjet with porous (permeable C/C ceramic) and porthole fuel injection is presented. Inflow conditions with Mach number, stagnation pressure, and enthalpy of 6.44, 40.2 MPa, and 4.31 MJ/kg, respectively, and a fuel/air equivalence ratio of 0.44 were maintained, along with the engine geometry. Hydrogen fuel was injected at an axial location 92.33 mm downstream of the leading edge for each investigated injection method. Results from this study show that porous fuel injection results in enhanced mixing and combustion compared to porthole fuel injection. This is particularly evident within the first half of the combustion chamber, where porous fuel injection resulted in mixing and combustion efficiencies of 76% and 63%, respectively. At the same location, porthole fuel injection resulted in efficiencies of 58% and 46%, respectively. Key mechanisms contributing to the observed improved performance were the formation of an attached oblique fuel-injection shock and the associated stronger shock-expansion train ingested by the engine, enhanced spreading of the fuel in all directions, and a more rapidly growing mixing layer.
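The abstract quotes mixing and combustion efficiencies without defining them; one common pair of definitions in the scramjet literature (assumed here, since conventions vary between studies) is

```latex
\eta_{\mathrm{mix}}(x) \;=\; \frac{\int_A Y_R\,\rho u \, dA}{\int_A Y_F\,\rho u \, dA},
\qquad
Y_R \;=\;
\begin{cases}
Y_F, & Y_F \le Y_{F,\mathrm{st}},\\[4pt]
Y_{F,\mathrm{st}}\,\dfrac{1-Y_F}{1-Y_{F,\mathrm{st}}}, & Y_F > Y_{F,\mathrm{st}},
\end{cases}
\qquad
\eta_{\mathrm{comb}}(x) \;=\; 1 - \frac{\dot{m}_{\mathrm{H_2}}(x)}{\dot{m}_{\mathrm{H_2,inj}}},
```

where $Y_F$ is the fuel mass fraction, $Y_{F,\mathrm{st}}$ its stoichiometric value, and the integrals are taken over a cross-flow plane at station $x$: $\eta_{\mathrm{mix}}$ is the fraction of fuel mixed to flammable proportions, and $\eta_{\mathrm{comb}}$ the fraction of injected hydrogen consumed.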
Abstract:
An analysis of inviscid incompressible flow in a tube of sinusoidally perturbed circular cross section with wall injection has been made. The velocity and pressure fields have been obtained. Measurements of axial velocity profiles and pressure distribution have been made in a simulated star shaped tube with wall injection. The static pressure at the star recess is found to be more than that at the star point, this feature being in conformity with the analytical result. Flow visualisation by photography of injected smoke seems to show simple diffusion rather than strong vortices in the recess.
Abstract:
The complex three-dimensional flowfield produced by secondary injection of hot gases in a rocket nozzle for thrust vector control is analyzed by solving unsteady three-dimensional Euler equations with appropriate boundary conditions. Various system performance parameters like secondary jet amplification factor and axial thrust augmentation are deduced by integrating the nozzle wall pressure distributions obtained as part of the flowfield solution and compared with measurements taken in actual static tests. The agreement is good within the practical range of secondary injectant flow rates for thrust vector control applications.
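The secondary jet amplification factor mentioned above is commonly defined (one standard convention, assumed here since the abstract does not state it) as the ratio of specific side force to specific axial thrust:

```latex
K \;=\; \frac{F_s/\dot{m}_s}{F_a/\dot{m}_p},
```

where $F_s$ is the side force produced by secondary injection, $\dot{m}_s$ the secondary injectant flow rate, $F_a$ the axial thrust, and $\dot{m}_p$ the primary flow rate. A value $K > 1$ indicates that, per unit mass flow, the injectant generates side force more effectively than the primary flow generates thrust, which is what makes secondary injection attractive for thrust vector control.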
Abstract:
Heat and mass transfer studies in a calandria-based reactor are quite complex, both due to the geometry and due to the complex mixing flow. It is challenging to devise optimum operating conditions with an efficient but safe working range for such a complex configuration. A numerical study, known to be very effective for such problems, is therefore taken up for investigation. In the present study a 3D RANS code with a turbulence model has been used to compute the flow fields and to obtain the heat transfer characteristics, in order to understand certain design parameters of engineering importance. The angle of injection of the coolant liquid has a large effect on the heat transfer within the reactor.
Abstract:
Inexpensive and permanently modified poly(methyl methacrylate) (PMMA) microchips were fabricated by an injection-molding process. A novel sealing method for plastic microchips at room temperature was introduced. Run-to-run and chip-to-chip reproducibility was good, with relative standard deviation values between 1 and 3% for run-to-run and less than 2.1% for chip-to-chip comparisons. Acrylonitrile-butadiene-styrene (ABS) was used as an additive in the PMMA substrates, and the proportions of PMMA and ABS were optimized. ABS acts as a modifier that markedly improved several characteristics of the microchip, such as hydrophilicity and electro-osmotic flow (EOF). The detection limit of Rhodamine 6G dye for the modified microchip on the home-made microchip analyzer showed a dramatic 100-fold improvement over that for the unmodified PMMA chip. A detection limit on the order of 10^-20 mol was achieved for each injected φX-174/HaeIII DNA fragment, with baseline separation between the 271 and 281 bp fragments and fast separation of 11 DNA restriction fragments within 180 seconds. Analysis of a PCR product from the tobacco ACT gene was performed on the modified microchip as an application example.
Abstract:
This paper describes the automation of a fully electrochemical system for preconcentration, cleanup, separation, and detection, comprising the hyphenation of a thin-layer electrochemical flow cell with CE coupled with contactless conductivity detection (CE-C4D). Traces of heavy metal ions were extracted from the pulsed-flowing sample and accumulated on a glassy carbon working electrode by electroreduction for several minutes. Anodic stripping of the accumulated metals was synchronized with hydrodynamic injection into the capillary. The effects of the angle of the slant-polished tip of the CE capillary and its orientation against the working electrode in the electrochemical preconcentration (EPC) flow cell, and of the accumulation time, were studied, aiming at maximum CE-C4D signal enhancement. After 6 min of EPC, enhancement factors close to 50 were obtained for thallium, lead, cadmium, and copper ions, and about 16 for zinc ions. Limits of detection below 25 nmol/L were estimated for all target analytes except zinc. A second separation dimension was added to the CE separation capabilities by staircase scanning of the potentiostatic deposition and/or stripping potentials of the metal ions, as implemented with the EPC-CE-C4D flow system. A matrix exchange between the deposition and stripping steps, highly valuable for sample cleanup, can be straightforwardly programmed with the multi-pumping flow management system. The automated simultaneous determination of traces of five accumulable heavy metals together with four non-accumulated alkali and alkaline earth metals in a single run was demonstrated, highlighting the potential of the system.
Abstract:
The three-dimensional wall-bounded open cavity may be considered a simplified geometry found in industrial applications such as landing gear bays or slotted flaps on an airplane. Understanding the complex three-dimensional flow structure that surrounds this particular geometry is therefore of major industrial interest. In light of the remarkable earlier investigations of this kind of flow, there is sufficient evidence that the lateral walls have a great influence on the flow features and hence on the instability modes. Nevertheless, even though there is a large body of literature on cavity flows, most of it is based on the assumption that the flow is two-dimensional and spanwise-periodic. The flow over a realistic open cavity should therefore be considered. This thesis presents an investigation of a three-dimensional wall-bounded open cavity with geometric ratio 6:2:1. To this end, three-dimensional Direct Numerical Simulation (DNS) and global linear instability analysis have been performed. Linear instability analysis reveals that the onset of the first instability in this open cavity occurs around Re_cr ≈ 1080. A three-dimensional shear-layer mode with a complex structure is shown to be the most unstable mode. It is noteworthy that the flow pattern of this high-frequency shear-layer mode is similar to the unstable oscillations observed in the supercritical case. DNS of the cavity flow was carried out at different Reynolds numbers, from the steady state until a nonlinear saturated state was obtained. The comparison of the time histories of kinetic energy reveals a clearly dominant energetic mode which shifts between low-frequency and high-frequency oscillations. A complete picture of the flow patterns from subcritical to supercritical cases has been established. The flow structure in the supercritical case Re = 1100 resembles typical wake-shedding instability oscillations, with a lateral motion already present in the subcritical cases. This flow pattern is also similar to observations in experiments.
In order to validate the linear instability analysis results, the topology of the composite flow fields reconstructed by linear superposition of a three-dimensional base flow and its leading three-dimensional global eigenmodes has been studied. The instantaneous wall streamlines of these composite flows display the distinct influence region of each eigenmode. Attention has been focused on the leading high-frequency shear-layer mode; the composite flow fields have been fully characterized with respect to the downstream wake shedding. The three-dimensional shear-layer mode is shown to give rise to a typical wake-shedding instability with lateral motions occurring downstream, in good agreement with the experimental results. Moreover, a spanwise-periodic open cavity with the same length-to-depth ratio has also been studied. Its most unstable linear mode differs from that of the real three-dimensional cavity flow because of the existence of the side walls. The structural sensitivity of the unstable global mode is analyzed in the flow-control context. Adjoint-based sensitivity analysis has been employed to localize the receptivity region, where the flow is most sensitive to momentum forcing and mass injection. Because of the non-normality of the linearized Navier-Stokes equations, the direct and adjoint fields have a large spatial separation. The strongest sensitivity region is located at the upstream lip of the three-dimensional cavity. This numerical finding is in agreement with experimental observations. Finally, a prototype of a passive flow control strategy is applied.
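The structure-sensitivity argument above can be made concrete with the standard structural-sensitivity measure; the Giannetti-Luchini form is assumed here, since the abstract does not give the formula:

```latex
S(\mathbf{x}) \;=\; \frac{\lVert \hat{\mathbf{u}}(\mathbf{x}) \rVert \, \lVert \hat{\mathbf{u}}^{\dagger}(\mathbf{x}) \rVert}{\left| \int_{\Omega} \hat{\mathbf{u}}^{\dagger} \cdot \hat{\mathbf{u}} \; d\Omega \right|},
```

where $\hat{\mathbf{u}}$ and $\hat{\mathbf{u}}^{\dagger}$ are the direct and adjoint global eigenmodes. Because $S$ is large only where the two modes overlap, it localizes the "wavemaker" region where a localized feedback (momentum forcing or mass injection) most strongly shifts the eigenvalue, consistent with the upstream-lip finding reported above.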
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is an NP-hard combinatorial problem to which developers devote between 25 and 50% of the design-cycle time.
Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators shows only a 0.04% deviation with respect to the simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems.
To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms for each group independently, and then combines the results at the end. In this way, the number of noise sources in the system at any given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution times in two different ways. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn implies a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible, and modular framework for quantization that includes the implementation of the previous techniques and is provided for public access. The aim is to offer common ground for developers and researchers to easily prototype and verify new techniques for system modelling and word-length optimization.
We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through an example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
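The incremental idea described above (relaxed accuracy early in the search, full accuracy near convergence) can be illustrated with a toy greedy word-length descent guided by Monte-Carlo error estimates. This is not HOPLITE's actual algorithm or API; the two-signal system, the sample budgets, and the error target are all assumptions of this sketch:

```python
import random

def quantize(x, bits):
    """Round x to a fixed-point grid with the given fractional bits."""
    step = 2.0 ** -bits
    return round(x / step) * step

def mc_error(wordlengths, n_samples, rng):
    """Monte-Carlo estimate of the mean |error| of y = 0.9*a + 0.4*b
    when each product is quantized to the given word-lengths."""
    total = 0.0
    for _ in range(n_samples):
        a, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
        exact = 0.9 * a + 0.4 * b
        approx = (quantize(0.9 * a, wordlengths[0])
                  + quantize(0.4 * b, wordlengths[1]))
        total += abs(exact - approx)
    return total / n_samples

def greedy_descent(max_error, rng, start=16, floor=2):
    """Greedily shrink word-lengths while the error budget is respected.
    A cheap 200-sample estimate screens each move; only promising moves
    are re-checked with the full 2000-sample budget (the incremental idea)."""
    wl = [start, start]
    while True:
        candidates = []
        for i in range(len(wl)):
            if wl[i] > floor:
                trial = wl[:]
                trial[i] -= 1
                if (mc_error(trial, 200, rng) <= max_error
                        and mc_error(trial, 2000, rng) <= max_error):
                    candidates.append(trial)
        if not candidates:
            return wl
        wl = min(candidates, key=sum)

wordlengths = greedy_descent(1e-3, random.Random(0))
```

The screening pass spends few samples on moves that will clearly fail or clearly pass, reserving the expensive high-confidence simulation for decisions that matter, which is the source of the speed-ups the thesis reports.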
Abstract:
Two-dimensional flow of a micropolar fluid in a porous channel is investigated. The flow is driven by suction or injection at the channel walls, and the micropolar model due to Eringen is used to describe the working fluid. An extension of Berman's similarity transform is used to reduce the governing equations to a set of non-linear coupled ordinary differential equations. The latter are solved for large mass transfer via a perturbation analysis in which the inverse of the cross-flow Reynolds number is used as the perturbing parameter. Complementary numerical solutions for strong injection are also obtained using a quasilinearisation scheme, and good agreement is observed between the perturbation and numerical solutions.
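For reference, the classical Newtonian form of the Berman reduction that the abstract extends reads, up to sign conventions for suction versus injection (the micropolar case adds a coupled microrotation equation, omitted here):

```latex
u = \frac{xV}{h}\,f'(\eta), \qquad v = -V f(\eta), \qquad \eta = \frac{y}{h},
```

```latex
f'''' + R\left(f\,f''' - f'\,f''\right) = 0, \qquad R = \frac{Vh}{\nu},
```

with $f(0) = f''(0) = 0$, $f(1) = 1$, $f'(1) = 0$ for uniform wall velocity $V$ through a channel of half-width $h$. The large-mass-transfer perturbation mentioned above then expands $f$ in powers of $1/R$, the inverse cross-flow Reynolds number.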