282 results for "Potential methods"
Abstract:
An updated flow pattern map was developed for CO2 on the basis of the previous Cheng-Ribatski-Wojtan-Thome CO2 flow pattern map [1,2] to extend the flow pattern map to a wider range of conditions. A new annular flow to dryout transition (A-D) and a new dryout to mist flow transition (D-M) were proposed here. In addition, a bubbly flow region, which generally occurs at high mass velocities and low vapor qualities, was added to the updated flow pattern map. The updated flow pattern map is applicable to a much wider range of conditions: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/m² s, heat fluxes from 1.8 to 46 kW/m² and saturation temperatures from -28 to +25 °C (reduced pressures from 0.21 to 0.87). The updated flow pattern map was compared to independent experimental data of flow patterns for CO2 in the literature and predicts the flow patterns well. Then, a database of CO2 two-phase flow pressure drop results from the literature was set up and compared to the leading empirical pressure drop models: the correlations by Chisholm [3], Friedel [4], Grönnerud [5] and Müller-Steinhagen and Heck [6], a modified Chisholm correlation by Yoon et al. [7] and the flow pattern based model of Moreno Quibén and Thome [8-10]. None of these models was able to predict the CO2 pressure drop data well. Therefore, a new flow pattern based phenomenological model of two-phase flow frictional pressure drop for CO2 was developed by modifying the model of Moreno Quibén and Thome using the updated flow pattern map in this study, and it predicts the CO2 pressure drop database quite well overall.
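As a concrete illustration of the empirical baselines listed above, the following is a minimal Python sketch of the Müller-Steinhagen and Heck correlation [6]; the Blasius-type friction factor and the fluid properties in the example are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the Mueller-Steinhagen and Heck (1986) two-phase
# frictional pressure-gradient correlation, one of the empirical models
# the abstract compares against the CO2 database. Properties below are
# generic placeholders, not CO2 values from the paper.

def darcy_friction_factor(Re):
    """Blasius-type Darcy friction factor (laminar below Re = 2000)."""
    return 64.0 / Re if Re < 2000.0 else 0.316 * Re**-0.25

def single_phase_gradient(G, D, rho, mu):
    """Frictional pressure gradient (Pa/m) with all flow as one phase."""
    Re = G * D / mu
    f = darcy_friction_factor(Re)
    return f * G**2 / (2.0 * rho * D)

def msh_gradient(x, G, D, rho_l, mu_l, rho_v, mu_v):
    """MSH: dp/dz = F*(1-x)**(1/3) + B*x**3, with F = A + 2*(B - A)*x,
    A = all-liquid gradient, B = all-vapor gradient, x = vapor quality."""
    A = single_phase_gradient(G, D, rho_l, mu_l)
    B = single_phase_gradient(G, D, rho_v, mu_v)
    F = A + 2.0 * (B - A) * x
    return F * (1.0 - x)**(1.0 / 3.0) + B * x**3

# Example with generic refrigerant-like properties (placeholders)
print(msh_gradient(x=0.4, G=300.0, D=0.006,
                   rho_l=900.0, mu_l=1.2e-4, rho_v=80.0, mu_v=1.5e-5))
```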
Abstract:
Corresponding to the updated flow pattern map presented in Part I of this study, an updated general flow pattern based flow boiling heat transfer model was developed for CO2 using the Cheng-Ribatski-Wojtan-Thome [L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside horizontal tubes, Int. J. Heat Mass Transfer 49 (2006) 4082-4094; L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, Erratum to: "New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside tubes" [Heat Mass Transfer 49 (21-22) (2006) 4082-4094], Int. J. Heat Mass Transfer 50 (2007) 391] flow boiling heat transfer model as the starting basis. The flow boiling heat transfer correlation in the dryout region was updated. In addition, a new mist flow heat transfer correlation for CO2 was developed based on the CO2 data, and a heat transfer method for bubbly flow was proposed for completeness' sake. The updated general flow boiling heat transfer model for CO2 covers all flow regimes and is applicable to a wider range of conditions for horizontal tubes: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/m² s, heat fluxes from 1.8 to 46 kW/m² and saturation temperatures from -28 to 25 °C (reduced pressures from 0.21 to 0.87). The updated general flow boiling heat transfer model was compared in this study to a new experimental database containing 1124 data points (790 more than in the previous model [Cheng et al., 2006, 2007]). Good agreement between the predicted and experimental data was found in general, with 71.4% of the entire database and 83.2% of the database without the dryout and mist flow data predicted within ±30%. However, the predictions for the dryout and mist flow regions were less satisfactory due to the limited number of data points, the higher inaccuracy in such data, scatter in some data sets ranging up to 40%, significant discrepancies from one experimental study to another and the difficulties associated with predicting the inception and completion of dryout around the perimeter of horizontal tubes.
Exploring the potential of functionally graded materials concept for the development of fiber cement
Abstract:
In this study we establish the concept of functionally graded fiber cement. We discuss the use of statistical mixture designs to choose formulations and present ideas for the production of functionally graded fiber cement components on Hatschek machines. The feasibility of producing functionally graded fiber cement by grading PVA fiber content was evaluated experimentally. Thermogravimetric analysis (TG) was employed to assess fiber distribution profiles, and four-point bending tests were applied to evaluate the mechanical performance of both conventional and graded composites. The results show that grading PVA fiber content is an effective way to produce functionally graded fiber cement, allowing a reduction of the total fiber volume without a significant reduction in the modulus of rupture of the composite. TG tests were found adequate to assess the fiber content at different points in functionally graded fiber cements.
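For reference, the following is a minimal sketch of the modulus-of-rupture calculation that typically underlies four-point bending tests, assuming third-point loading; the specimen dimensions in the example are illustrative, not taken from the study.

```python
# Minimal sketch of the modulus of rupture (MOR) behind four-point
# bending, assuming third-point loading (load span = one third of the
# support span). Dimensions below are illustrative placeholders.

def modulus_of_rupture(F, L, b, d):
    """MOR (Pa) for four-point bending with third-point loading:
    sigma = F * L / (b * d**2), with F = total applied load (N),
    L = support span (m), b = specimen width (m), d = depth (m)."""
    return F * L / (b * d**2)

# Example: 200 N total load, 135 mm span, 40 mm wide, 8 mm thick plate
print(modulus_of_rupture(F=200.0, L=0.135, b=0.040, d=0.008) / 1e6, "MPa")
```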
Abstract:
For the last decade, elliptic curve cryptography has gained increasing interest in industry and in the academic community. This is especially due to the high level of security it provides with relatively small keys and to its ability to create very efficient and multifunctional cryptographic schemes by means of bilinear pairings. Pairings require pairing-friendly elliptic curves and, among the possible choices, Barreto-Naehrig (BN) curves arguably constitute one of the most versatile families. In this paper, we further expand the potential of the BN curve family. We describe BN curves that are not only computationally very simple to generate, but also especially suitable for efficient implementation in a very broad range of scenarios. We also present implementation results of the optimal ate pairing using such a curve defined over a 254-bit prime field.
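For background, the following is a minimal sketch of the standard BN parameterization, which underlies the "simple to generate" claim: the field characteristic and group order are quartic polynomials in an integer u, and one searches for u making both prime. The search routine and starting value of u are illustrative, not the specific 254-bit curve from the paper (which corresponds to u near 2^62).

```python
# Minimal sketch of Barreto-Naehrig curve parameter generation:
# p(u) = 36u^4 + 36u^3 + 24u^2 + 6u + 1, trace t(u) = 6u^2 + 1,
# group order n(u) = p(u) + 1 - t(u). A BN curve exists whenever both
# p and n are prime. The search range here is a small toy example.

from sympy import isprime  # assumed available for primality testing

def bn_params(u):
    p = 36*u**4 + 36*u**3 + 24*u**2 + 6*u + 1   # field characteristic
    t = 6*u**2 + 1                               # Frobenius trace
    n = p + 1 - t                                # prime group order
    return p, n, t

def find_bn_u(start):
    """Scan upward from `start` for a u giving prime p(u) and n(u)."""
    u = start
    while True:
        p, n, _ = bn_params(u)
        if isprime(p) and isprime(n):
            return u, p, n
        u += 1

u, p, n = find_bn_u(2**14)  # toy range; real curves use much larger u
print(u, p.bit_length(), n.bit_length())
```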
Abstract:
The roots of swarm intelligence are deeply embedded in the biological study of self-organized behaviors in social insects. Particle swarm optimization (PSO) is one of the modern metaheuristics of swarm intelligence, which can be effectively used to solve nonlinear and non-continuous optimization problems. The basic principle of the PSO algorithm rests on the assumption that potential solutions (particles) are flown through hyperspace with acceleration towards more optimal solutions. Each particle adjusts its flight according to the flying experience of both itself and its companions, using position and velocity update equations. During the process, the coordinates in hyperspace associated with a particle's previous best fitness and the overall best value attained so far by any particle in the group are tracked and recorded in memory. In recent years, PSO approaches have been successfully applied to different problem domains with multiple objectives. In this paper, a multiobjective PSO approach, based on the concepts of Pareto optimality, dominance, external archiving of elite particles and a truncated Cauchy distribution, is proposed and applied to the constrained design of a brushless DC (direct current) wheel motor. Promising results in terms of convergence and spacing performance metrics indicate that the proposed multiobjective PSO scheme is capable of producing good solutions.
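As a concrete illustration of the position and velocity update equations described above, the following is a minimal single-objective PSO sketch; the multiobjective machinery (Pareto archive, truncated Cauchy distribution) proposed in the paper is not reproduced, and the coefficients are common textbook values.

```python
# Minimal single-objective PSO: each particle's velocity is pulled
# toward its own best position (pbest) and the swarm's best (gbest):
# v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x <- x + v.

import random

def sphere(x):
    """Toy objective: minimize the sum of squares."""
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]
    return gbest

print(pso(sphere))  # converges near the origin
```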
Abstract:
This paper proposes an alternative configuration to conventional reverse osmosis (RO) desalination systems by incorporating the use of gravitational potential energy. The proposal suggests a model that can be viewed as the energy station of an RO desalination plant. Conventionally, RO plants use a high-pressure pump powered by electricity or fossil fuel. The function of the pump is to send a flux of saline water to a group of semi-permeable membrane modules capable of "filtering" the dissolved salts. In the proposed model, we intend to achieve a flux at the inlet of the membrane modules with a pressure high enough for the desalination process without using either electricity or fossil fuels. To do this, we devised a hybrid system that uses both gravitational potential energy and wind energy. The technical viability of the alternative was demonstrated theoretically by deductions based on physics and mathematics.
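For a sense of scale, the following is a minimal sketch of the hydrostatic relation P = ρgh behind the gravitational part of the proposal; the 6 MPa operating pressure used in the example is a typical order of magnitude for seawater RO, not a figure from the paper.

```python
# Minimal sketch: the static water-column height needed to supply a
# given RO operating pressure purely from gravitational potential
# energy, via P = rho * g * h. The 6 MPa figure is illustrative.

RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def head_for_pressure(P):
    """Water column height (m) producing hydrostatic pressure P (Pa)."""
    return P / (RHO_SEAWATER * G)

print(head_for_pressure(6.0e6))  # ~600 m of elevation for ~6 MPa
```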
Abstract:
This paper presents both the theoretical and the experimental approaches of the development of a mathematical model to be used in multi-variable control system designs of an active suspension for a sport utility vehicle (SUV), in this case a light pickup truck. A complete seven-degree-of-freedom model is identified quickly and successfully, with very satisfactory results in simulations and in real experiments conducted with the pickup truck. The novelty of the proposed methodology is the use of commercial software in the early stages of the identification to speed up the process and to minimize the need for a large number of costly experiments. The paper also presents major contributions to the identification of uncertainties in vehicle suspension models and to the development of identification methods using sequential quadratic programming, where an innovation regarding the calculation of the objective function is proposed and implemented. Results from simulations and from practical experiments with the real SUV are presented, analysed, and compared, showing the potential of the method.
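As an illustration of identification by sequential quadratic programming, the following minimal sketch fits a one-degree-of-freedom spring-damper model instead of the paper's seven-degree-of-freedom model; SciPy's SLSQP routine and the synthetic "measured" data are stand-ins, not the authors' implementation.

```python
# Minimal sketch of SQP-based parameter identification: fit (m, c, k)
# of a simplified underdamped free-decay model to noisy "measured"
# data by minimizing the sum of squared residuals with SLSQP.

import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 2.0, 200)
rng = np.random.default_rng(0)

def free_response(params, t):
    """Simplified underdamped free-decay model (unit initial displacement)."""
    m, c, k = params
    wn = np.sqrt(k / m)
    zeta = np.clip(c / (2.0 * np.sqrt(k * m)), 0.0, 0.99)  # stay underdamped
    wd = wn * np.sqrt(1.0 - zeta**2)
    return np.exp(-zeta * wn * t) * np.cos(wd * t)

true = (250.0, 400.0, 16000.0)                 # "unknown" m, c, k
measured = free_response(true, t) + 0.01 * rng.standard_normal(t.size)

def objective(params):
    return np.sum((free_response(params, t) - measured) ** 2)

res = minimize(objective, x0=(200.0, 300.0, 10000.0), method="SLSQP",
               bounds=[(50, 1000), (50, 2000), (1000, 50000)])
print(res.x)  # recovered parameters, close to `true`
```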
Abstract:
One objective of electrical impedance tomography is to estimate the electrical resistivity distribution in a domain based only on electrical potential measurements at its boundary, generated by an electrical current distribution imposed on the boundary. One of the methods used for dynamic estimation is the Kalman filter. In biomedical applications, the random walk model is frequently used as the evolution model and, under these conditions, the extended Kalman filter (EKF) achieves poor tracking ability. An analytically derived evolution model is not feasible at this moment. This paper investigates identifying the evolution model in parallel with the EKF and updating the evolution model with a certain periodicity. The transition matrix of the evolution model is identified using the history of the estimated resistivity distribution obtained by a sensitivity-matrix-based algorithm and a Newton-Raphson algorithm. To numerically identify the linear evolution model, the Ibrahim time-domain method is used. The investigation is performed by numerical simulations of a domain with time-varying resistivity and with experimental data collected from the boundary of a human chest during normal breathing. The obtained dynamic resistivity values lie within the expected values for the tissues of a human chest. The EKF results suggest that the tracking ability is significantly improved with this approach.
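As a concrete illustration, the following is a minimal (linear) Kalman filter sketch contrasting the random-walk evolution model (F = I) with an identified transition matrix F; the measurement matrix stands in for the linearized EIT forward operator, and all numerical values are illustrative.

```python
# Minimal Kalman filter predict/update cycle. With F = I the state
# evolution is the random-walk model the abstract describes; replacing
# F with an identified transition matrix changes only the prediction.

import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle for x' = F x + w, z = H x + v."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

n_state, n_meas = 4, 3
rng = np.random.default_rng(0)
H = rng.standard_normal((n_meas, n_state))  # stand-in for linearized forward model
F_rw = np.eye(n_state)                      # random-walk evolution model
x, P = np.zeros(n_state), np.eye(n_state)
Q, R = 0.01 * np.eye(n_state), 0.1 * np.eye(n_meas)

z = rng.standard_normal(n_meas)             # one synthetic boundary measurement
x, P = kf_step(x, P, z, F_rw, H, Q, R)
print(x)
```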
Abstract:
Safety Instrumented Systems (SIS) are designed to prevent and/or mitigate accidents, avoiding undesirable high-potential-risk scenarios, assuring protection of people's health, protecting the environment and saving costs of industrial equipment. The design of these systems requires formal methods to ensure the safety requirements, but the material published in this area does not identify a consolidated procedure for the task. In this sense, this article introduces a formal method for the diagnosis and treatment of critical faults based on Bayesian networks (BN) and Petri nets (PN). This approach considers diagnosis and treatment for each safety instrumented function (SIF), including a hazard and operability (HAZOP) study of the equipment or system under control. It also uses BN and Behavioral Petri nets (BPN) for diagnosis and decision-making, and PN for the synthesis, modeling and control to be implemented by a safety programmable logic controller (PLC). An application example considering the diagnosis and treatment of critical faults is presented and illustrates the proposed methodology.
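As a toy illustration of the Bayesian-network diagnosis step, the following sketch updates the probability of a critical fault from a single alarm observation via Bayes' rule; the probabilities are illustrative, not derived from a HAZOP study.

```python
# Minimal sketch of Bayesian updating over a two-state fault node,
# the elementary inference step inside a BN-based diagnosis.
# All probabilities below are illustrative placeholders.

def posterior_fault_prob(prior, p_alarm_given_fault, p_alarm_given_ok):
    """P(fault | alarm) by Bayes' rule."""
    evidence = (p_alarm_given_fault * prior
                + p_alarm_given_ok * (1.0 - prior))
    return p_alarm_given_fault * prior / evidence

# Example: rare fault, sensitive but imperfect alarm
print(posterior_fault_prob(prior=0.01, p_alarm_given_fault=0.95,
                           p_alarm_given_ok=0.05))  # ~0.16
```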
Abstract:
In this paper, processing methods of Fourier optics implemented in a digital holographic microscopy system are presented. The proposed methodology is based on the ability of digital holography to carry out the full reconstruction of the recorded wave front and, consequently, the determination of the phase and intensity distribution in any arbitrary plane located between the object and the recording plane. In this way, in digital holographic microscopy the field produced by the objective lens can be reconstructed along its propagation, allowing the reconstruction of the back focal plane of the lens, so that the complex amplitudes of the Fraunhofer diffraction, or equivalently the Fourier transform, of the light distribution across the object can be known. The manipulation of the Fourier-transform plane makes possible the design of digital methods of optical processing and image analysis. The proposed method has great practical utility and represents a powerful tool in image analysis and data processing. The theoretical aspects of the method are presented, and its validity has been demonstrated using computer-generated holograms and simulated images of microscopic objects.
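As a concrete illustration of the numerical propagation on which such reconstructions rely, the following is a minimal sketch of the angular-spectrum method for propagating a complex field between planes; the grid, wavelength and distance are illustrative values, not the paper's configuration.

```python
# Minimal angular-spectrum propagation: the field is Fourier-transformed,
# multiplied by the free-space transfer function
# H = exp(i*k*z*sqrt(1 - (lam*fx)^2 - (lam*fy)^2)), and transformed back,
# giving the complex field in any plane along the propagation axis.

import numpy as np

def angular_spectrum(U0, wavelength, dx, z):
    """Propagate complex field U0 (N x N, pixel pitch dx) a distance z."""
    N = U0.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(U0) * H)

# Example: propagate a plane wave through a circular aperture by 5 mm
N, dx, lam = 512, 2e-6, 633e-9
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
U0 = (X**2 + Y**2 < (100e-6)**2).astype(complex)
Uz = angular_spectrum(U0, lam, dx, 5e-3)
print(np.abs(Uz).max())
```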
Abstract:
A brief look at the history of fractography shows a recent trend towards the quantification of topographic parameters through the use of three-dimensional reconstruction techniques, which combine SEM stereoscopy and stereophotogrammetry software, allowing elevation to be calculated at numerous points of the topography from the parallax that arises when the sample is tilted about the microscope's eucentric plane. Several investigators have used reconstruction techniques to correlate fractographic parameters, such as the fractal dimension and the fractured-to-projected area ratio, with the mechanical properties of materials, such as fracture toughness and tensile strength. So far, the search for a clear relationship between fracture topography and mechanical properties has produced ambiguous results. The present work applied surface metrology software to reconstruct fracture surfaces (transgranular cleavage, intergranular and dimple fracture), corrosion pits and tribo-surfaces three-dimensionally in order to explore the potential of this stereophotogrammetry technique. The variation of the calculated topographic parameters with the conditions of SEM image acquisition reinforces the importance of both good image acquisition and accurate calibration methods in order to validate this 3D reconstruction technique in metrological terms. Preliminary results did not indicate a clear relationship between either the true-to-projected area ratio and CVN absorbed energy or the fractal dimension and CVN absorbed energy. It is likely that each fracture mechanism presents its own relationship between fractographic parameters and mechanical properties.
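For reference, the following is a minimal sketch of the parallax-to-height relation commonly used in eucentric-tilt SEM stereophotogrammetry, z = p / (2 sin(θ/2)); the assumption that the parallax p is already expressed at specimen scale, and the numbers in the example, are illustrative.

```python
# Minimal sketch of elevation recovery from stereo-pair parallax,
# assuming a eucentric tilt by a total angle theta between the two
# images and parallax already converted to specimen-scale length.

import math

def height_from_parallax(p, theta_deg):
    """Elevation difference (same units as p) from stereo parallax p."""
    return p / (2.0 * math.sin(math.radians(theta_deg) / 2.0))

# Example: 0.8 um parallax measured with a 10-degree total tilt
print(height_from_parallax(0.8e-6, 10.0))  # ~4.6 um elevation
```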
Abstract:
Conventional procedures used to assess the integrity of corroded piping systems with axial defects generally employ simplified failure criteria based upon a plastic collapse failure mechanism incorporating the tensile properties of the pipe material. These methods establish acceptance criteria for defects based on limited experimental data for low strength structural steels, which do not necessarily address specific requirements of the high grade steels currently used. In such cases, failure assessments may be overly conservative or show significant scatter in their predictions, leading to unnecessary repair or replacement of in-service pipelines. Motivated by these observations, this study examines the applicability of a stress-based criterion founded upon plastic instability analysis to predict the failure pressure of corroded pipelines with axial defects. A central focus is to gain additional insight into the effects of defect geometry and material properties on the attainment of a local limit load, to support the development of stress-based burst strength criteria. The work provides an extensive body of results which lend further support to adopting failure criteria for corroded pipelines based upon ligament instability analyses. A verification study conducted on burst tests of large-diameter pipe specimens with different defect lengths shows the effectiveness of a stress-based criterion using local ligament instability in burst pressure predictions, even though the adopted burst criterion exhibits a potential dependence on defect geometry and possibly on the material's strain-hardening capacity. Overall, the results presented here suggest that the use of stress-based criteria based upon plastic instability analysis of the defect ligament is a valid engineering tool for integrity assessments of pipelines with axial corrosion defects.
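As a concrete example of the conventional plastic-collapse criteria this study contrasts with its stress-based approach, the following is a minimal sketch of the original ASME B31G failure-pressure formula; the pipe and defect values are illustrative, and the paper's local-ligament-instability criterion is not reproduced here.

```python
# Minimal sketch of the original ASME B31G plastic-collapse (flow-stress)
# failure criterion for an axial corrosion defect, valid for short
# defects (L**2 / (D*t) <= 20). Inputs below are illustrative.

import math

def b31g_failure_pressure(smys, D, t, d, L):
    """B31G failure pressure (same units as smys) for a defect of
    depth d and length L in a pipe of diameter D, wall thickness t."""
    M = math.sqrt(1.0 + 0.8 * L**2 / (D * t))            # Folias factor
    flow = 1.1 * smys                                     # flow stress
    ratio = (1.0 - (2.0 / 3.0) * (d / t)) / (1.0 - (2.0 / 3.0) * (d / t) / M)
    return flow * (2.0 * t / D) * ratio

# Example: X60-grade pipe (SMYS ~414 MPa), 50% deep, 200 mm long defect
print(b31g_failure_pressure(smys=414e6, D=0.508, t=0.0095,
                            d=0.00475, L=0.2) / 1e6, "MPa")
```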
Abstract:
The purpose of this article is to present a quantitative analysis of the human failure contribution to the collision and/or grounding of oil tankers, considering the recommendations of the "Guidelines for Formal Safety Assessment" of the International Maritime Organization. Initially, the employed methodology is presented, emphasizing the use of the technique for human error prediction to reach the desired objective. Later, this methodology is applied to a ship operating on the Brazilian coast and, thereafter, the procedure to isolate the human actions with the greatest potential to reduce the risk of an accident is described. Finally, the management and organizational factors presented in the "International Safety Management Code" are associated with these selected actions. An operator will therefore be able to decide where to act in order to obtain an effective reduction in the probability of accidents. Even though this study does not present a new methodology, it can be considered a reference for human reliability analysis in the maritime industry, which, in spite of having some guides for risk analysis, has few studies on human reliability effectively applied to the sector.
Abstract:
Crude petroleum oils are complex mixtures of different compounds (mainly organic) obtained from an extensive range of geological sources. The fluorescence of crude petroleum oils derives largely from the aromatic hydrocarbon fraction, and this fluorescence emission is strongly influenced by the chemical composition (e.g., fluorophore and quencher concentrations) and physical characteristics (e.g., viscosity and optical density) of the oil. Fluorescence spectroscopy (FS) is increasingly used in petroleum technology due to the availability of better optical detection techniques and because FS offers high sensitivity, good diagnostic potential, and relatively simple instrumentation. In this work we analyzed crude petroleum at different dilutions in Nujol, a transparent mineral oil. The main objective of this work was to verify the possibility of measuring crude oil emission spectra without the use of volatile solvents. The mixtures of Nujol with different crude oil concentrations were measured in a cuvette with a 10 mm optical path, thus simplifying the detection of the fluorescence spectroscopy signal. The emission spectra were obtained by exciting the samples with a 400 W xenon lamp at 350 nm, 450 nm and 532 nm. The emission of the samples was collected perpendicular to the excitation axis.
Abstract:
As is well known, Hessian-based adaptive filters (such as the recursive-least squares algorithm (RLS) for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms [such as the least-mean-squares algorithm (LMS) or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms of different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error for convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
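As a concrete illustration of the scheme, the following is a minimal sketch of a convex combination of an LMS filter (gradient family) and an RLS filter (Hessian family) tracking a random-walk plant, with the mixing parameter λ = σ(a) adapted by stochastic gradient descent on the overall error; step sizes and the synthetic scenario are illustrative.

```python
# Minimal convex combination of adaptive filters: the overall output is
# y = lam*y_lms + (1-lam)*y_rls with lam = sigmoid(a), and a is adapted
# by stochastic gradient descent on the combined squared error while
# each component filter runs its own standard update.

import numpy as np

rng = np.random.default_rng(1)
N, M = 5000, 4
w_true = rng.standard_normal(M)               # time-variant plant

w_lms, mu = np.zeros(M), 0.01                 # LMS (gradient family)
w_rls, lam_rls = np.zeros(M), 0.995           # RLS (Hessian family)
Pmat = 100.0 * np.eye(M)
a, mu_a = 0.0, 50.0                           # mixing state and step size

for n in range(N):
    w_true += 1e-3 * rng.standard_normal(M)   # random-walk parameter drift
    u = rng.standard_normal(M)
    d = w_true @ u + 0.01 * rng.standard_normal()

    y1, y2 = w_lms @ u, w_rls @ u
    lam = 1.0 / (1.0 + np.exp(-a))            # mixing parameter in (0, 1)
    e = d - (lam * y1 + (1.0 - lam) * y2)     # combined error

    w_lms += mu * (d - y1) * u                # LMS update
    k = Pmat @ u / (lam_rls + u @ Pmat @ u)   # RLS gain
    w_rls += k * (d - y2)
    Pmat = (Pmat - np.outer(k, u @ Pmat)) / lam_rls

    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)   # adapt the mixture
    a = np.clip(a, -4.0, 4.0)                 # keep lam away from 0 and 1

print(lam)  # which family the combination currently favors
```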