968 results for Flow simulation


Relevance: 20.00%

Publisher:

Abstract:

This study investigated the energy system contributions of rowers in three different conditions: rowing on an ergometer without the slide, rowing on an ergometer with the slide, and rowing on the water. For this purpose, eight rowers performed 2,000 m race simulations in each of the situations defined above. The fractions of the aerobic (WAER), anaerobic alactic (WPCR) and anaerobic lactic (W[La-]) systems were calculated from the oxygen uptake, the fast component of excess post-exercise oxygen uptake and the change in net blood lactate, respectively. On the water, the metabolic work was significantly higher [851 (82) kJ] than on both the ergometer [674 (60) kJ] and the ergometer with slide [663 (65) kJ] (P ≤ 0.05). The time on the water [515 (11) s] was longer (P < 0.001) than on the ergometers with [398 (10) s] and without the slide [402 (15) s], resulting in no difference when relative energy expenditure was considered: water [99 (9) kJ min⁻¹], ergometer without the slide [99.6 (9) kJ min⁻¹] and ergometer with the slide [100.2 (9.6) kJ min⁻¹]. The respective contributions of the WAER, WPCR and W[La-] systems were: water = 87 (2), 7 (2) and 6 (2)%; ergometer = 84 (2), 7 (2) and 9 (2)%; and ergometer with the slide = 84 (2), 7 (2) and 9 (1)%. V̇O2, HR and lactate were not different among conditions. These results suggest that the ergometer braking system simulates the conditions of a bigger and faster boat rather than a single scull. A 2,500 m test should probably be used to properly simulate an on-water single-scull race.
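
As a quick numerical illustration of how the reported percentage splits follow from the component energies, here is a minimal sketch; the kJ values below are invented to roughly match the on-water totals, and the component energies are assumed to have already been derived from oxygen uptake, EPOC and lactate.

```python
# Hypothetical illustration of combining the three energy-system components into
# percentage contributions; the input values are made up, not the study's data.

def energy_contributions(w_aer_kj, w_pcr_kj, w_la_kj):
    """Return the percentage contribution of each energy system."""
    total = w_aer_kj + w_pcr_kj + w_la_kj
    return {
        "aerobic_%": 100.0 * w_aer_kj / total,
        "alactic_%": 100.0 * w_pcr_kj / total,
        "lactic_%": 100.0 * w_la_kj / total,
    }

# Example with made-up values roughly matching the on-water condition (~851 kJ total).
print(energy_contributions(w_aer_kj=740.0, w_pcr_kj=60.0, w_la_kj=51.0))
```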

Relevance: 20.00%

Publisher:

Abstract:

The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is even harder computationally, since it additionally requires a solution in real time. Both DS problems are computationally complex: for large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. In turn, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with NDE (MEAN) results in the proposed approach for solving DS problems in large-scale networks. Simulation results have shown that MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, MEAN has shown a sublinear running time as a function of system size. Tests with networks ranging from 632 to 5166 switches indicate that MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively little running time.
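
The selection scheme described above can be pictured with a very loose sketch of subpopulation tables, one table per objective; the encoding, mutation and evaluation below are placeholders, not the actual NDE/MEAN implementation.

```python
# A loose sketch of an evolutionary loop with one subpopulation table per
# objective; everything here is a toy stand-in for illustration only.
import random

def evaluate(individual):
    # Placeholder: in the real method these values would come from a power-flow
    # analysis of a node-depth-encoded network configuration.
    rng = random.Random(sum(individual))
    return {"loss_kw": rng.uniform(50, 100), "switch_ops": rng.randint(1, 20)}

def evolve(pop_size=20, generations=200, table_size=10):
    population = [[random.random() for _ in range(8)] for _ in range(pop_size)]
    tables = {"loss_kw": [], "switch_ops": []}        # best individuals per objective
    for _ in range(generations):
        parent = random.choice(population)
        child = [g + random.gauss(0.0, 0.1) for g in parent]   # naive mutation
        objectives = evaluate(child)
        for name, table in tables.items():
            table.append((objectives[name], child))
            table.sort(key=lambda entry: entry[0])             # lower is better
            del table[table_size:]
        population[random.randrange(pop_size)] = child         # simple replacement
    return tables

best = {name: table[0][0] for name, table in evolve().items()}
print(best)
```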

Relevance: 20.00%

Publisher:

Abstract:

This paper proposes an optimal sensitivity approach applied to the tertiary loop of automatic generation control. The approach is based on the non-linear perturbation theorem. From an optimal operating point obtained by an optimal power flow, a new optimal operating point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operating point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the generators' automatic voltage regulators (AVR) are determined by the optimal sensitivity technique, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of the automatic generation control, referred to as the power sensitivity mode. Test results are presented to show the good performance of this approach. (C) 2008 Elsevier B.V. All rights reserved.
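
The core idea, obtaining a new optimum directly from the sensitivity of the base-case optimum, can be illustrated on a toy equality-constrained quadratic problem standing in for the optimal power flow; this is a minimal sketch, not the paper's formulation.

```python
# Toy illustration of sensitivity-based re-optimization: once x*(load) is known,
# the optimum after a small load perturbation is predicted as x* + (dx/dload)*dload
# instead of re-running the solver.
import numpy as np

H = np.array([[4.0, 1.0], [1.0, 3.0]])      # toy Hessian of the cost
A = np.array([[1.0, 1.0]])                  # equality constraint: total generation = load

def solve_opt(load):
    # KKT system for min 0.5 x'Hx  s.t.  A x = load
    kkt = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([np.zeros(2), [load]])
    return np.linalg.solve(kkt, rhs)[:2]

x_base = solve_opt(load=10.0)

# Sensitivity dx/dload from the same KKT matrix (only the right-hand side varies).
kkt = np.block([[H, A.T], [A, np.zeros((1, 1))]])
dx_dload = np.linalg.solve(kkt, np.concatenate([np.zeros(2), [1.0]]))[:2]

x_pred = x_base + dx_dload * 0.5            # predicted optimum after a +0.5 load change
print(x_pred, solve_opt(10.5))              # exact here because the toy problem is linear-quadratic
```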

Relevance: 20.00%

Publisher:

Abstract:

This paper presents a new approach to the transmission loss allocation problem in a deregulated system. The approach belongs to the family of incremental methods and treats all the constraints of the network, i.e., control, state and functional constraints. It is based on the perturbation of optimum theorem. From a given optimal operating point obtained by the optimal power flow, the loads are perturbed and a new optimal operating point that satisfies the constraints is determined by sensitivity analysis. This solution is used to obtain the loss allocation coefficients for the generators and loads of the network. Numerical results compare the proposed approach with other methods on the well-known IEEE 14-bus transmission network. Another test emphasizes the importance of considering the operational constraints of the network. Finally, the approach is applied to an actual Brazilian equivalent network composed of 787 buses and is compared with the technique currently used by the Brazilian Control Center. (c) 2007 Elsevier Ltd. All rights reserved.
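
For readers unfamiliar with incremental allocation, the sketch below shows a generic way of turning loss sensitivities into allocation coefficients; the sensitivities and injections are made-up numbers, and this is not the paper's specific scheme.

```python
# Illustrative incremental loss allocation: each bus receives a share of the
# total loss proportional to (dLoss/dP_i) * P_i, rescaled to add up to the loss.
loss_total = 12.0                                                    # MW to allocate
injections = {"G1": 80.0, "G2": 40.0, "L5": -60.0, "L9": -55.0}      # MW, loads negative
sensitivities = {"G1": 0.03, "G2": 0.05, "L5": -0.04, "L9": -0.06}   # dLoss/dP (assumed)

raw = {bus: sensitivities[bus] * p for bus, p in injections.items()}
scale = loss_total / sum(raw.values())
allocation = {bus: scale * share for bus, share in raw.items()}
print(allocation, sum(allocation.values()))                          # shares sum to 12 MW
```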

Relevance: 20.00%

Publisher:

Abstract:

The crossflow filtration process differs from conventional filtration in that the circulation flow runs tangentially to the filtration surface. The conventional mathematical models used to represent the process have limitations regarding the identification and generalization of the system behaviour. In this paper, a system based on artificial neural networks is developed to overcome the problems usually found in the conventional mathematical models. More specifically, the developed system uses an artificial neural network that simulates the behaviour of the crossflow filtration process in a robust way. Imprecisions and uncertainties associated with the measurements made on the system are automatically incorporated into the neural approach. Simulation results are presented to demonstrate the validity of the proposed approach. (C) 2007 Elsevier B.V. All rights reserved.
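
A minimal sketch of the kind of neural model described above: a small MLP mapping assumed operating conditions (transmembrane pressure, crossflow velocity, time) to permeate flux. The training data are synthetic placeholders, not the authors' measurements or architecture.

```python
# Small MLP regression as a stand-in for the crossflow filtration model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform([0.5, 0.1, 0.0], [3.0, 2.0, 120.0], size=(200, 3))   # bar, m/s, min (assumed ranges)
# Synthetic flux: rises with pressure and velocity, decays with time (fouling-like).
y = 40.0 * X[:, 0] * X[:, 1] / (1.0 + 0.02 * X[:, 2]) + rng.normal(0.0, 1.0, 200)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[2.0, 1.0, 30.0]]))     # predicted flux at one operating point
```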

Relevance: 20.00%

Publisher:

Abstract:

In this paper, a novel wire-mesh sensor based on permittivity (capacitance) measurements is applied to generate images of the phase fraction distribution and investigate the flow of viscous oil and water in a horizontal pipe. Phase fraction values were calculated from the raw data delivered by the wire-mesh sensor using different mixture permittivity models, and these data were validated against quick-closing valve measurements. The investigated flow patterns were dispersion of oil in water (Do/w) and dispersion of oil in water combined with water in oil (Do/w&w/o). The Maxwell-Garnett mixing model is better suited to the Do/w pattern and the logarithmic model to the Do/w&w/o pattern. Images of the time-averaged cross-sectional oil fraction distribution, along with axial slice images, were used to visualize and disclose some details of the flow.
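
The two mixture models mentioned above have standard closed forms, shown below in a generic sketch that converts a measured mixture permittivity into a dispersed-phase fraction; the permittivity values in the example are typical textbook numbers, not the sensor's calibration.

```python
# Standard permittivity mixing models for dispersed-phase fraction estimation.
# eps_c: continuous phase, eps_d: dispersed phase, eps_mix: measured mixture.
import math

def fraction_maxwell_garnett(eps_mix, eps_c, eps_d):
    """Dispersed-phase fraction from the Maxwell-Garnett model."""
    return ((eps_mix - eps_c) / (eps_mix + 2.0 * eps_c)) * ((eps_d + 2.0 * eps_c) / (eps_d - eps_c))

def fraction_logarithmic(eps_mix, eps_c, eps_d):
    """Dispersed-phase fraction from the logarithmic (Lichtenecker) model."""
    return (math.log(eps_mix) - math.log(eps_c)) / (math.log(eps_d) - math.log(eps_c))

# Example: oil (eps ~ 2.2) dispersed in water (eps ~ 80), measured mixture eps = 55.
print(fraction_maxwell_garnett(55.0, 80.0, 2.2), fraction_logarithmic(55.0, 80.0, 2.2))
```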

Relevance: 20.00%

Publisher:

Abstract:

In the present study, quasi-diabatic two-phase flow pattern visualizations and measurements of elongated bubble velocity, frequency and length were performed. The tests were run with R134a and R245fa evaporating in a stainless steel tube with a diameter of 2.32 mm, mass velocities ranging from 50 to 600 kg m⁻² s⁻¹ and saturation temperatures of 22 °C, 31 °C and 41 °C. The tube was heated by applying a DC current directly to its surface. Images from a high-speed video camera (8000 frames/s), obtained through a transparent tube just downstream of the heated section, were used to identify the following flow patterns: bubbly, elongated bubbles, churn and annular flow. The visualized flow patterns were compared against the predictions provided by Barnea et al. (1983) [1], Felcar et al. (2007) [10], Revellin and Thome (2007) [3] and Ong and Thome (2009) [11]. From this comparison, it was found that the methods proposed by Felcar et al. (2007) [10] and Ong and Thome (2009) [11] predicted the present database relatively well. Additionally, elongated bubble velocities, frequencies and lengths were determined based on the analysis of the high-speed videos. The results suggest that the elongated bubble velocity depends on mass velocity, vapor quality and saturation temperature: it increases with increasing mass velocity and vapor quality and decreases with increasing saturation temperature. Finally, the bubble velocity was correlated as a linear function of the two-phase superficial velocity. (C) 2010 Elsevier Inc. All rights reserved.
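
The linear correlation mentioned in the last sentence is commonly written in drift-flux form, U_b = C0·j + U_drift; the sketch below fits that form to synthetic placeholder points, not the study's data.

```python
# Fitting elongated bubble velocity against the two-phase superficial velocity
# (drift-flux form U_b = C0 * j + U_drift); the data points are invented.
import numpy as np

j = np.array([0.4, 0.8, 1.2, 1.6, 2.0])              # two-phase superficial velocity, m/s
u_bubble = np.array([0.55, 1.05, 1.50, 2.02, 2.45])  # bubble velocity, m/s (synthetic)

C0, u_drift = np.polyfit(j, u_bubble, 1)
print(f"U_b ~ {C0:.2f} * j + {u_drift:.2f} m/s")
```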

Relevance: 20.00%

Publisher:

Abstract:

The purpose of this paper is to propose a multiobjective optimization approach for solving the manufacturing cell formation problem, explicitly considering the performance of the manufacturing system. Cells are formed so as to simultaneously minimize three conflicting objectives, namely the level of work-in-process, the intercell moves and the total machinery investment. A genetic algorithm performs a search in the design space in order to approximate the Pareto-optimal set. The objective values of each candidate solution in the population are obtained by running a discrete-event simulation, whose model is automatically generated according to the number of machines and their distribution among cells implied by that solution. The potential of this approach is evaluated through its application to an illustrative example and to a case from the literature. The obtained results are analyzed, and it is concluded that the approach is capable of generating a set of alternative manufacturing cell configurations that optimize multiple performance measures, greatly improving the decision-making process involved in planning and designing cellular systems. (C) 2010 Elsevier Ltd. All rights reserved.
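
The evaluation-and-selection idea can be sketched generically: each candidate configuration is scored by a (here faked) discrete-event simulation on the three objectives, and the non-dominated candidates approximate the Pareto set; this is not the authors' genetic algorithm.

```python
# Non-dominated filtering of simulated objective vectors (all objectives minimized).
import random

def simulate(config_seed):
    # Placeholder for the discrete-event simulation run; returns
    # (work-in-process, intercell moves, machinery investment).
    rng = random.Random(config_seed)
    return (rng.uniform(10, 50), rng.randint(0, 30), rng.uniform(1e5, 5e5))

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = {seed: simulate(seed) for seed in range(50)}
pareto = [s for s, obj in candidates.items()
          if not any(dominates(other, obj) for o, other in candidates.items() if o != s)]
print(len(pareto), "non-dominated configurations")
```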

Relevance: 20.00%

Publisher:

Abstract:

This paper presents new experimental flow boiling heat transfer results in micro-scale tubes. The experimental data were obtained in a horizontal 2.3 mm I.D. stainless steel tube with a heated length of 464 mm, using R134a and R245fa as working fluids, mass velocities ranging from 50 to 700 kg m⁻² s⁻¹, heat fluxes from 5 to 55 kW m⁻², exit saturation temperatures of 22, 31 and 41 °C, and vapor qualities ranging from 0.05 to 0.99. Flow pattern characterization was also performed from images obtained by high-speed filming. Heat transfer coefficients from 1 to 14 kW m⁻² K⁻¹ were measured. The heat transfer coefficient was found to be a strong function of heat flux, mass velocity and vapor quality. The experimental data were compared against ten flow boiling predictive methods from the literature. The methods of Liu and Winterton [3], Zhang et al. [5] and Saitoh et al. [6] worked best for both fluids, capturing most of the experimental heat transfer trends. (C) 2010 Elsevier Ltd. All rights reserved.
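
As a generic data-reduction reminder (not necessarily the authors' exact procedure), the local flow boiling heat transfer coefficient follows from the imposed heat flux and the wall-to-saturation temperature difference; the numbers below are illustrative only.

```python
# Generic flow boiling heat transfer coefficient from heat flux and temperatures.
def boiling_htc(q_w_m2, t_wall_c, t_sat_c):
    """Heat transfer coefficient in W/(m^2 K)."""
    return q_w_m2 / (t_wall_c - t_sat_c)

# e.g. 25 kW/m2 heat flux, inner wall at 34.5 C, saturation at 31 C
print(boiling_htc(25_000.0, 34.5, 31.0))   # ~7.1 kW/(m2 K), within the measured range
```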

Relevance: 20.00%

Publisher:

Abstract:

Semi-empirical models to estimate the flow boiling heat transfer coefficient, saturated CHF and pressure drop in micro-scale channels have recently been proposed. Most of these models were developed based on elongated bubble and annular flows, in view of the fact that these flow patterns are predominant in smaller channels. In these models the liquid film thickness plays an important role, which emphasizes that accurate measurement of the liquid film thickness is a key point for their validation. Several techniques have been successfully applied to measure liquid film thickness during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, there seems to be no conclusive result describing a successful technique capable of measuring the dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described and the main difficulties related to their use in micro-scale systems are identified. Based on this discussion, the most promising methods to measure the dynamic liquid film thickness in micro-scale channels are identified. (C) 2009 Elsevier Inc. All rights reserved.
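
To make the sensitivity to film thickness concrete, a textbook estimate for a thin, purely conductive liquid film gives h ≈ k_liquid/δ; the sketch below uses an assumed liquid conductivity and is only an illustration of why δ matters, not a result from the review.

```python
# Thin-film conduction estimate: heat transfer coefficient scales as k_liquid / thickness.
k_liquid = 0.081   # W/(m K), roughly liquid R134a near room temperature (assumed)
for delta_um in (10, 30, 100):
    h = k_liquid / (delta_um * 1e-6)
    print(f"film {delta_um:4d} um  ->  h ~ {h / 1000:.1f} kW/(m2 K)")
```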

Relevance: 20.00%

Publisher:

Abstract:

The objective of the present paper is to thermally characterize a cross-flow heat exchanger featuring a new cross-flow arrangement, which may find application in the contemporary refrigeration and automotive industries. The new flow arrangement is peculiar in the sense that it possesses two fluid circuits extending in the form of two tube rows, each with two tube lines. To assess its performance, the new heat exchanger is compared against the standard two-pass counter-cross-flow arrangement. The two-part comparison is based on the thermal effectiveness and the heat exchanger efficiency for several combinations of the heat capacity rate ratio, C*, and the number of transfer units, NTU. In addition, a third comparison is made in terms of the so-called "heat exchanger reversibility norm" (HERN), through the influence of parameters such as the inlet temperature ratio, T, and the heat capacity rate ratio, C*, for several fixed NTU values. Relative to the standard two-pass counter-cross-flow heat exchanger, widely used in the refrigeration industry due to its high effectiveness, the proposed flow arrangement delivers higher thermal effectiveness and higher heat exchanger efficiency, resulting in lower entropy generation over a wide range of C* and NTU values. The new flow arrangement therefore seems to be a promising option in situations where cross-flow heat exchangers for single-phase fluids have to be used in refrigeration units. (c) 2009 Elsevier Masson SAS. All rights reserved.
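
The quantities compared above can be illustrated with the standard counterflow ε-NTU relation used purely as a stand-in (the relation for the paper's two-circuit arrangement is not reproduced here); capacity rates and inlet temperatures in the example are assumed values.

```python
# Effectiveness and entropy generation for a generic two-stream heat exchanger.
import math

def eff_counterflow(ntu, c_star):
    """Standard counterflow effectiveness-NTU relation (stand-in arrangement)."""
    if abs(c_star - 1.0) < 1e-9:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - c_star))
    return (1.0 - e) / (1.0 - c_star * e)

def entropy_generation(eff, c_star, t_hot_in_k, t_cold_in_k, c_min_w_k=1000.0):
    """Entropy generation rate in W/K; assumes C_min is the cold stream and absolute temperatures."""
    q = eff * c_min_w_k * (t_hot_in_k - t_cold_in_k)
    c_max_w_k = c_min_w_k / c_star
    t_cold_out = t_cold_in_k + q / c_min_w_k
    t_hot_out = t_hot_in_k - q / c_max_w_k
    return c_min_w_k * math.log(t_cold_out / t_cold_in_k) + c_max_w_k * math.log(t_hot_out / t_hot_in_k)

eff = eff_counterflow(ntu=2.0, c_star=0.5)
print(eff, entropy_generation(eff, 0.5, t_hot_in_k=320.0, t_cold_in_k=280.0))
```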

Relevance: 20.00%

Publisher:

Abstract:

This paper describes the manufacture of tubular ceramic membranes and the study of their performance in the demulsification of soybean oil/water emulsions. The membranes were made by the isostatic pressing method and were micro- and macrostructurally characterized by SEM, mercury intrusion porosimetry, and determination of apparent density and porosity. The microfiltration tests were carried out on an experimental bench, and fluid-dynamic parameters such as transmembrane flux and pressure were used to evaluate the process with respect to the oil phase concentration in the permeate (analysed by TOC measurements). The results showed that the membrane with an average pore diameter of 1.36 μm achieved a higher transmembrane flux than the membrane with an average pore diameter of 0.8 μm. The volume of open pores (responsible for the permeation) was predominant in the total porosity, which was higher than 50% for all tested membranes. Concerning demulsification, the monolayer membranes were effective, as the rejection coefficient was higher than 99%.
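
The two performance figures discussed above have standard definitions, sketched below with purely illustrative numbers rather than the paper's measurements.

```python
# Standard microfiltration performance figures: permeate flux and oil rejection.
def transmembrane_flux(permeate_volume_l, membrane_area_m2, time_h):
    """Permeate flux in L/(m^2 h)."""
    return permeate_volume_l / (membrane_area_m2 * time_h)

def rejection_coefficient(feed_oil_mg_l, permeate_oil_mg_l):
    """Oil rejection coefficient; 1 means total retention."""
    return 1.0 - permeate_oil_mg_l / feed_oil_mg_l

print(transmembrane_flux(1.5, 0.005, 1.0))    # 300 L/(m2 h) for the assumed values
print(rejection_coefficient(1000.0, 8.0))     # 0.992 -> rejection above 99 %
```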

Relevance: 20.00%

Publisher:

Abstract:

A general transition criterion is proposed to locate the core-annular flow pattern in horizontal and vertical oil-water flows. It is based on a rigorous one-dimensional two-fluid model of liquid-liquid two-phase flow and considers the existence of critical interfacial wave numbers, related to a non-negligible interfacial tension term, to which linear stability theory still applies. The viscous laminar-laminar flow problem is fully resolved, and turbulence effects on stability are analyzed through experimentally obtained shape factors. The proposed general transition criterion includes in its formulation the inviscid Kelvin-Helmholtz discriminator. If a theoretical maximum wavelength is considered as a necessary condition for stability, a stability criterion in terms of the Eötvös number is obtained. The effects of interfacial tension, viscosity ratio, density difference and shape factors on the stability of core-annular flow are analyzed in detail. The more complete modeling allowed the analysis of the neutral-stability wave number, and the results strongly suggest that the interfacial tension term plays an indispensable role in the correct prediction of the stable region of the core-annular flow pattern. The incorporation of a theoretical minimum wavelength into the transition model produced significantly better results. The criterion's predictions were compared with recent data from the literature and the agreement is encouraging. (C) 2007 American Institute of Chemical Engineers.
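
The Eötvös number appearing in the stability criterion has the standard definition Eo = Δρ g D² / σ; the short sketch below evaluates it for illustrative oil-water properties and is not the full transition criterion of the paper.

```python
# Eotvos (Bond) number: ratio of gravitational to interfacial-tension effects.
def eotvos_number(delta_rho_kg_m3, diameter_m, sigma_n_m, g=9.81):
    return delta_rho_kg_m3 * g * diameter_m**2 / sigma_n_m

# Water (1000 kg/m3) and a heavy oil (900 kg/m3) in a 25 mm pipe,
# interfacial tension ~0.03 N/m (assumed values).
print(eotvos_number(delta_rho_kg_m3=100.0, diameter_m=0.025, sigma_n_m=0.03))
```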

Relevance: 20.00%

Publisher:

Abstract:

The machining of hardened steels has always been a great challenge in metal cutting, particularly for drilling operations. Drilling is generally the machining process that is most difficult to cool, owing to the tool's geometry. The aim of this work is to determine the heat flux and the convection coefficient in drilling using the inverse heat conduction method. Temperature was measured during the drilling of hardened AISI H13 steel using the embedded thermocouple technique. Dry machining and two cooling/lubrication systems were used, with thermocouples fixed at distances very close to the hole's wall. Tests were replicated for each condition and were carried out with new and worn drills. An analytical heat conduction model was used to calculate the temperature at the tool-workpiece interface and to determine the heat flux and the convection coefficient. In all tests, with both new and worn drills, the lowest temperatures and a decrease in heat flux were observed with the flooded system, followed by MQL, taking the dry condition as the reference. The decrease in temperature was directly proportional to the amount of lubricant applied and was significant for the MQL system compared with dry cutting. (C) 2011 Elsevier Ltd. All rights reserved.
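
As a deliberately simplified stand-in for the inverse procedure, if two thermocouples at known depths from the hole wall reach a quasi-steady state, Fourier's law gives a first estimate of the heat flux through the workpiece; the paper itself uses a transient inverse heat conduction model, and the numbers below are illustrative only.

```python
# 1-D steady conduction estimate of heat flux from two embedded thermocouples.
def heat_flux_fourier(k_w_mk, t_near_c, t_far_c, spacing_m):
    """Fourier's law estimate of heat flux in W/m^2."""
    return k_w_mk * (t_near_c - t_far_c) / spacing_m

# AISI H13: k ~ 24 W/(m K); thermocouples 0.5 mm apart reading 180 C and 150 C (assumed).
print(heat_flux_fourier(24.0, 180.0, 150.0, 0.0005))   # about 1.44e6 W/m2
```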

Relevance: 20.00%

Publisher:

Abstract:

This paper deals with the traditional permutation flow shop scheduling problem with the objective of minimizing mean flowtime, thereby reducing in-process inventory. A new heuristic method is proposed for solving the scheduling problem. The proposed heuristic is compared with the best-performing heuristic reported in the literature. Experimental results show that the new heuristic provides better solutions in terms of both solution quality and computational effort.
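
For context, the objective being minimized can be computed for any job permutation with the standard completion-time recursion; the processing-time matrix below is a made-up example, not an instance from the paper, and the proposed heuristic itself is not reproduced.

```python
# Mean flowtime of a job permutation in a permutation flow shop.
def mean_flowtime(perm, p):            # p[job][machine] = processing time
    m = len(p[0])
    completion = [0.0] * m             # previous job's completion time on each machine
    total = 0.0
    for job in perm:
        for k in range(m):
            # C[j][k] = max(C[j][k-1], C[j-1][k]) + p[j][k]
            start = max(completion[k], completion[k - 1] if k > 0 else 0.0)
            completion[k] = start + p[job][k]
        total += completion[-1]        # flowtime of this job (all release times zero)
    return total / len(perm)

p = [[3, 6, 2], [5, 2, 4], [2, 4, 3]]  # 3 jobs x 3 machines (made-up times)
print(mean_flowtime([2, 0, 1], p))
```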