998 results for Flow graphs
Abstract:
The mathematical model for two-dimensional unsteady sonic flow, based on the classical diffusion equation with an imaginary coefficient, is presented and discussed. The main purpose is to develop a rigorous formulation in order to bring to light the correspondence between the sonic, supersonic and subsonic panel method theories. Source and doublet integrals are obtained, and Laplace transformation demonstrates that, in fact, the source integral is the solution of the doublet integral equation. It is shown that the doublet-only formulation reduces to a Volterra integral equation of the first kind, and a numerical method is proposed in order to solve it. To the authors' knowledge this is the first reported solution to the unsteady sonic thin airfoil problem through the use of doublet singularities. Comparisons with the source-only formulation are shown for the problem of a flat plate in combined harmonic heaving and pitching motion.
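The abstract does not state the numerical scheme used for the Volterra equation of the first kind. A common generic approach for equations of the form ∫₀ᵗ K(t, s) g(s) ds = f(t) is a product-integration (rectangle-rule) discretization solved by forward substitution; the sketch below is such a generic scheme, with an illustrative kernel and right-hand side, not the paper's actual formulation.

```python
import numpy as np

def solve_volterra_first_kind(kernel, f, t_end, n):
    """Solve the first-kind Volterra equation  int_0^t K(t,s) g(s) ds = f(t)
    for g on a uniform grid, using the rectangle (product-integration) rule:
    for each collocation point t_i, sum_{j<=i} h*K(t_i, s_j)*g_j = f(t_i),
    which is lower-triangular and can be solved forward for g_i."""
    h = t_end / n
    t = h * (np.arange(n) + 1.0)   # collocation points t_i
    s = h * (np.arange(n) + 0.5)   # midpoints s_j of each subinterval
    g = np.zeros(n)
    for i in range(n):
        acc = sum(h * kernel(t[i], s[j]) * g[j] for j in range(i))
        g[i] = (f(t[i]) - acc) / (h * kernel(t[i], s[i]))
    return s, g

# Illustrative test problem: K = 1, f(t) = t  =>  exact solution g = 1
s, g = solve_volterra_first_kind(lambda t, s: 1.0, lambda t: t, 1.0, 100)
```

First-kind equations are ill-posed in general, so in practice the step size and any smoothing must be chosen with care; this sketch only shows the basic forward-substitution structure.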
Abstract:
Modelling of the slug structure requires a new effort in fundamental research. To clarify some aspects of horizontal slug flow, an experimental study of the behaviour of two isolated bubbles in a single-phase liquid flow was performed. This procedure was adopted to avoid the overlap of the different phenomena induced by a train of long bubbles. The experimental facility consists of a 90-m horizontal PVC pipe with an internal diameter of 0.053 m. The behaviour of two single air bubbles travelling in a water flow was studied. Focus was placed on the influence of the distance between the bubbles on the velocity of the second bubble. This study allows an understanding of the overtaking mechanism that takes place right after slug formation and that precedes the coalescence of bubbles in a slug flow. The results show that bubbles placed behind a liquid slug smaller than a critical value move faster than the leading one; otherwise, they move slower than the leading one.
Abstract:
Knowledge of the slug flow characteristics is very important when designing pipelines and process equipment. When the intermittence typical of slug flow occurs, the fluctuations of the flow variables bring additional concern to the designer. Focusing on this subject, the present work discloses experimental data on slug flow characteristics occurring in a large-size, large-scale facility. The results were compared with data provided by mechanistic slug flow models in order to verify their reliability when modelling actual flow conditions. Experiments were done with natural gas and with oil or water as the liquid phase. To compute the frequency and velocity of the slug cell and to calculate the lengths of the elongated bubble and the liquid slug, two pressure transducers were used, measuring the pressure drop across the pipe diameter at different axial locations. A third pressure transducer measured the pressure drop between two axial locations 200 m apart. The experimental data were compared with results of Camargo's algorithm (1991, 1993), which uses the basics of Dukler & Hubbard's (1975) slug flow model, and with those calculated by the transient two-phase flow simulator OLGA.
Abstract:
An experimental apparatus for the study of core annular flows of heavy oil and water at room temperature has been set up and tested at laboratory scale. The test section consists of a 2.75 cm ID galvanized steel pipe. Tap water and a heavy oil (17.6 Pa.s; 963 kg/m³) were used. Pressure drop in a vertical upward test section was accurately measured for oil flow rates in the range 0.297 - 1.045 l/s and water flow rates ranging from 0.063 to 0.315 l/s. The oil-water input ratio was in the range 1-14. The measured pressure drop comprises gravitational and frictional parts. The gravitational pressure drop was expressed in terms of the volumetric fraction of the core, which was determined from a correlation developed by Bannwart (1998b). The existence of an optimum water-oil input ratio for each oil flow rate was observed in the range 0.07 - 0.5. The frictional pressure drop was modeled to account for both hydrodynamic and net buoyancy effects on the core. The model was adjusted to fit our data and shows excellent agreement with data from another source (Bai, 1995).
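The decomposition described above, measured pressure drop = gravitational part + frictional part, with the gravitational part expressed through the core volumetric fraction, can be sketched as follows. The core-fraction correlation (Bannwart, 1998b) is not reproduced here; the oil (core) fraction is taken as a given input, and the tap-water density is an assumed value.

```python
# Hedged sketch: splitting a measured vertical pressure gradient into
# gravitational and frictional parts, as described in the abstract.
G = 9.81           # m/s^2
RHO_OIL = 963.0    # kg/m^3 (oil density quoted in the abstract)
RHO_WATER = 998.0  # kg/m^3 (assumed tap-water density)

def frictional_gradient(dp_dz_measured, eps_oil):
    """Frictional pressure gradient (Pa/m) = measured gradient minus the
    hydrostatic head of the oil-water mixture, where the mixture density
    is weighted by the core volumetric fraction eps_oil."""
    rho_mix = eps_oil * RHO_OIL + (1.0 - eps_oil) * RHO_WATER
    return dp_dz_measured - rho_mix * G
```

This isolates the frictional part that the abstract's model (hydrodynamic plus net buoyancy effects on the core) is fitted against.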
Abstract:
One of the main problems related to the transport and manipulation of multiphase fluids concerns the existence of characteristic flow patterns and their strong influence on important operation parameters. A good example of this occurs in gas-liquid chemical reactors, in which maximum efficiencies can be achieved by maintaining a finely dispersed bubbly flow to maximize the total interfacial area. Thus, the ability to automatically detect flow patterns is of crucial importance, especially for the adequate operation of multiphase systems. This work describes the application of a neural model to process the signals delivered by a direct imaging probe to produce a diagnostic of the corresponding flow pattern. The neural model consists of six independent neural modules, each of which is trained to detect one of the main horizontal flow patterns, and a final winner-take-all layer responsible for resolving cases in which two or more patterns are simultaneously detected. Experimental signals representing different bubbly, intermittent, annular and stratified flow patterns were used to validate the neural model.
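The winner-take-all resolution stage described above can be sketched minimally: each of the six modules emits an activation, and the final layer selects the strongest one as the diagnosis. The pattern names and scores below are illustrative assumptions, not the paper's actual classes or data.

```python
import numpy as np

# Hypothetical names for the six horizontal flow-pattern modules.
PATTERNS = ["dispersed bubbly", "bubbly", "plug", "slug", "annular", "stratified"]

def winner_take_all(scores):
    """Resolve simultaneous detections: return the pattern whose module
    produced the highest activation."""
    return PATTERNS[int(np.argmax(scores))]

# Two modules respond strongly (slug and annular); slug wins.
diagnosis = winner_take_all([0.1, 0.2, 0.1, 0.9, 0.6, 0.0])
```

The design choice here matches the abstract's architecture: the per-pattern modules stay independent (and independently trainable), while conflict resolution is confined to one final layer.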
Abstract:
The unsteady, viscous, supersonic flow over a spike-nosed body of revolution is numerically investigated by solving the Navier-Stokes equations. The time-accurate computations are performed employing an implicit algorithm based on the second-order time-accurate LU-SGS scheme with the incorporation of a subiteration procedure to maintain time accuracy. The characteristics of the flow field for a Mach number of 3.0, a Reynolds number of 7.87 × 10^6 /m, and angles of attack of 5 and 10 degrees are described. Self-sustained asymmetric shock wave oscillations were observed in the numerical computations for these angles of attack. The main characteristic of the flow field, as well as its influence on the drag coefficient, is discussed.
Abstract:
Hydraulic head is distributed through a porous medium. The variation of hydraulic head from one point to another is analyzed using Richards' equation, which is equivalent to the groundwater flow equation and predicts the volumetric water content. COMSOL 3.5 is used for the computation, applying Richards' equation. A rectangle 100 meters long and 10 meters deep, with an inlet flux of 0.1 m/s as the fluid source, is simulated. The domain uses the Richards' equation model in two dimensions (2D). Hydraulic head increases in proportion to moisture content.
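For reference, a standard form of the equation the abstract refers to, relating the volumetric water content θ to the pressure head h (with hydraulic head H = h + z), is:

```latex
\frac{\partial \theta}{\partial t}
  = \nabla \cdot \left[ K(h)\, \nabla (h + z) \right]
```

where K(h) is the unsaturated hydraulic conductivity and z is elevation. This is the mixed form of Richards' equation; the abstract does not state which of the equivalent forms (head-based, moisture-based, or mixed) the COMSOL model used.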
Abstract:
In the present work, liquid-solid flow at industrial scale is modeled using the commercial Computational Fluid Dynamics (CFD) software ANSYS Fluent 14.5. In the literature, there are few studies on liquid-solid flow at industrial scale, and no information about the particular case with modified geometry can be found. The aim of this thesis is to describe the strengths and weaknesses of the multiphase models when a large-scale liquid-solid flow application is studied, including the boundary-layer characteristics. The results indicate that the selection of the most appropriate multiphase model depends on the flow regime. Thus, a careful estimation of the flow regime is recommended before modeling; a computational tool is developed for this purpose during this thesis. The homogeneous multiphase model is valid only for homogeneous suspension, the discrete phase model (DPM) is recommended for homogeneous and heterogeneous suspensions where the pipe Froude number is greater than 1.0, while the mixture and Eulerian models are also able to predict flow regimes where the pipe Froude number is smaller than 1.0 and particles tend to settle. With increasing material density ratio and decreasing pipe Froude number, the Eulerian model gives the most accurate results, because it does not include simplifications of the Navier-Stokes equations like the other models. In addition, the results indicate that the potential location of erosion in the pipe depends on the material density ratio. Possible sedimentation of particles can cause erosion and increase the pressure drop as well. In the pipe bend, especially secondary flows, perpendicular to the main flow, affect the location of erosion.
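The Fr > 1.0 / Fr < 1.0 threshold above can be made concrete with one common definition of the densimetric pipe Froude number for slurry flow, Fr = V / √(g·D·(ρₛ/ρₗ − 1)). The abstract does not give its exact definition, so both the formula and the selection helper below are a hedged paraphrase, not the thesis's actual tool.

```python
import math

def pipe_froude(velocity, diameter, rho_solid, rho_liquid, g=9.81):
    """Densimetric pipe Froude number (one common slurry-flow definition):
    Fr = V / sqrt(g * D * (rho_s/rho_l - 1)). High Fr means inertia
    dominates settling; low Fr means particles tend to settle."""
    return velocity / math.sqrt(g * diameter * (rho_solid / rho_liquid - 1.0))

def suggest_model(fr):
    """Rule of thumb paraphrased from the abstract: DPM for Fr > 1.0
    (homogeneous/heterogeneous suspension); mixture or Eulerian models
    also cover Fr < 1.0, where particles tend to settle."""
    return "DPM (or mixture/Eulerian)" if fr > 1.0 else "mixture or Eulerian"

# Example: sand (2650 kg/m^3) in water at 3 m/s in a 0.1 m pipe.
fr = pipe_froude(3.0, 0.1, 2650.0, 1000.0)
```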
Abstract:
Plot-scale overland flow experiments were conducted to evaluate the efficiency of streamside management zones (SMZs) in retaining herbicides in runoff generated from silvicultural activities. Herbicide retention was evaluated for five different slopes (2, 5, 10, 15, and 20%), two cover conditions (undisturbed O horizon and raked surface), and two periods under contrasting soil moisture conditions (summer dry and winter wet season), and was correlated to O horizon and site conditions. Picloram (highly soluble in water) and atrazine (moderately sorbed onto soil particles), at concentrations of 55 and 35 µg L-1, respectively, and kaolin clay (approximately 5 g L-1) were mixed with 13,000 liters of water and dispersed over the top of 5 x 10 m forested plots. Surface flow was collected 2, 4, 6, and 10 m below the disperser to evaluate the changes in concentration as it moved through the O horizon and the surface soil horizon-mixing zone. Results showed that, on average, a 10 m long forested SMZ removed around 25% of the initial concentration of atrazine and was generally ineffective in reducing the more soluble picloram. Retention of picloram was only 6% of the applied quantity. Percentages of mass reduction by infiltration were 36% for atrazine and 20% for picloram. Stronger relationships existed between O horizon depth and atrazine retention than with any other measured variable, suggesting that the better solid-solution contact associated with flow through deeper O horizons is more important than either velocity or soil moisture as a determinant of sorption.
Abstract:
This work was done for Kone Industrial Oy, for the quality department of the Major Projects unit. The Kone Major Projects unit focuses on special and large elevator and escalator projects. The goal of the work was to create a harmonized process for the quality control of elevator components, and to examine and compare the cost savings that can be achieved with this new process. The target was to achieve 80% savings in quality costs with the new quality process. The background and research problem of the work are the increasing number of special projects and the consequently increased need for quality control. The main problem in quality control was the lack of a harmonized and clear process for the manufacturing of C-process components. In addition, during the development process a central quality-control tool, the CTQ tool, was created on the basis of existing tools. The work first discusses Kone as a company and explains Kone's key processes as background. The theory section covers theories related to process development as well as general quality concepts, and presents theories on the role of quality today. Finally, the theory of COQ (cost of quality) is discussed, and the PAF analysis framework is presented, which is used in the work to compare quality costs by means of a case example. The work describes the creation of the CTQ process from start to finish, and the new CTQ process is tested in a pilot project through a case example. In this case example, a project's bracket (a guide-rail fastening clip) is produced using the new quality process, and a cost comparison is made with another bracket from the same project, produced before the implementation of the new quality process. As a result of the work, the CTQ process was created and could be tested in practice through the case example. Based on the results, it can be said that using the CTQ process considerably reduces quality costs and facilitates quality management in the production of C-process components.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent a data dependency in the form of a queue. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, the node can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized, in the context of design space exploration, by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
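The node-and-queue model this abstract describes can be illustrated with a minimal sketch. This is a hypothetical Python analogue, not RVC-CAL: each actor fires only when its firing rule is satisfied (enough tokens on every input queue), consuming inputs and producing outputs, and a naive dynamic scheduler repeatedly fires any ready actor.

```python
from collections import deque

class Actor:
    """Minimal dataflow node: fires only when every input queue holds
    enough tokens (the firing rule), then consumes those tokens and
    produces one output token on each output queue."""
    def __init__(self, fn, inputs, outputs, tokens_needed=1):
        self.fn, self.inputs, self.outputs = fn, inputs, outputs
        self.tokens_needed = tokens_needed

    def can_fire(self):
        return all(len(q) >= self.tokens_needed for q in self.inputs)

    def fire(self):
        args = [q.popleft() for q in self.inputs
                for _ in range(self.tokens_needed)]
        result = self.fn(*args)
        for q in self.outputs:
            q.append(result)

# Two-node pipeline: double each token, then add one.
source, mid, sink = deque([1, 2, 3]), deque(), deque()
double = Actor(lambda x: 2 * x, [source], [mid])
plus_one = Actor(lambda x: x + 1, [mid], [sink])

# Naive fully dynamic scheduler: evaluate firing rules until quiescent.
while double.can_fire() or plus_one.can_fire():
    for actor in (double, plus_one):
        if actor.can_fire():
            actor.fire()
# sink now holds [3, 5, 7]
```

The thesis's point is precisely that a scheduler like the `while` loop above, which re-evaluates every firing rule at run-time, is expensive; quasi-static scheduling pre-computes most of these decisions, leaving only the genuinely dynamic ones for run-time.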
Abstract:
Pumping systems account for up to 22% of the energy consumed by electrical motors in European industry. Many studies have shown that there is also a lot of potential for energy savings in these systems through improvements to the devices, the flow control, or the surrounding system. The best method for more energy-efficient pumping has to be found for each system separately. This thesis studies how the energy saving potential of a reservoir pumping system is affected by surrounding variables, such as the static head variation and the friction factor. The objective is to create generally applicable graphs to quickly compare methods for reducing a pumping system's energy costs. The results are several graphs showing how the chosen variables affect the energy saving potential of the pumping system in one specific case. To judge whether these graphs are generally applicable, more testing with different pumps and environments is required.
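The two variables the thesis studies, static head and friction factor, enter through the standard system curve H(Q) = H_static + k·Q², and the pump's power draw follows from the hydraulic power P = ρ·g·Q·H / η. The sketch below shows this relationship under assumed values; it is a generic textbook formulation, not the thesis's actual model or data.

```python
# Hedged sketch of the system curve and pump power that the thesis's
# variables (static head, friction factor) feed into.
RHO, G = 1000.0, 9.81   # water density (kg/m^3), gravity (m/s^2)

def system_head(q, h_static, k):
    """Total head (m) at flow rate q (m^3/s): static head plus
    friction losses, which grow quadratically with flow."""
    return h_static + k * q**2

def pump_power(q, h_static, k, efficiency=0.7):
    """Shaft power (W) to drive flow q through the system:
    P = rho * g * q * H(q) / eta. Efficiency of 0.7 is an assumption."""
    return RHO * G * q * system_head(q, h_static, k) / efficiency
```

Comparing this power at different static heads and friction factors is what makes a curve family like the thesis's graphs possible: a friction-dominated system rewards speed control, while a static-head-dominated one limits what any flow-control method can save.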
Abstract:
Systemic blood flow (Q) was measured by echo-Doppler cardiography in 5 normal young adult males during apnea, eupnea and tachypnea. Measurements were made in a recumbent posture at 3-min intervals. Tachypnea was attained by doubling the respiratory frequency of eupnea at a constant tidal volume. An open glottis was maintained during apnea at the resting respiratory level. The Q values were positively correlated with the respiratory frequency, decreasing from eupnea to apnea and increasing from eupnea to tachypnea (P<0.05). These data demonstrate that echo-Doppler cardiography, a better qualified tool for this purpose, confirms the positive and progressive effects of ventilation on systemic blood flow, as suggested by previous studies based on diverse technical approaches.
Abstract:
The objective of the present study was to validate the transit-time technique for long-term measurements of iliac and renal blood flow in rats. Flow measured with ultrasonic probes was confirmed ex vivo using excised arteries perfused at varying flow rates. An implanted 1-mm probe reproduced with accuracy different patterns of flow relative to pressure in freely moving rats and accurately quantitated the resting iliac flow value (on average 10.43 ± 0.99 ml/min, or 2.78 ± 0.3 ml min-1 100 g body weight-1). The measurements were stable over an experimental period of one week, but were affected by probe size (resting flows were underestimated by 57% with a 2-mm probe when compared with a 1-mm probe) and by anesthesia (in the same rats, iliac flow was reduced by 50-60% when compared to the conscious state). Instantaneous changes of iliac and renal flow during exercise and recovery were accurately measured by the transit-time technique. Iliac flow increased instantaneously at the beginning of mild exercise (from 12.03 ± 1.06 to 25.55 ± 3.89 ml/min at 15 s) and showed a smaller increase when exercise intensity increased further, reaching a plateau of 38.43 ± 1.92 ml/min at the 4th min of moderate exercise intensity. In contrast, the exercise-induced reduction of renal flow was smaller and slower, with 18% and 25% decreases at mild and moderate exercise intensities. Our data indicate that transit-time flowmetry is a reliable method for long-term and continuous measurements of regional blood flow at rest and can be used to quantitate the dynamic flow changes that characterize exercise and recovery.