993 results for Simultaneous Methods
Abstract:
Molybdenum and tungsten bimetallic oxides were synthesized by three methods: Pechini, coprecipitation and solid-state reaction (SSR). After characterization, the solids were carburized at programmed temperature. The carburization process was monitored by tracking the consumption of the carburizing hydrocarbon and the CO produced; this monitoring makes it possible to avoid, or at least diminish, the formation of pyrolytic carbon.
Abstract:
Response surface methodology was used to evaluate the optimal time, temperature and oxalic acid concentration for simultaneous saccharification and fermentation (SSF) of corncob particles by Pichia stipitis CBS 6054. Fifteen different pretreatment conditions were examined in a 2³ full factorial design with six axial points. Temperatures ranged from 132 to 180 °C, times from 10 to 90 min and oxalic acid loadings from 0.01 to 0.038 g/g solids. Separate maxima were found for enzymatic saccharification and hemicellulose fermentation, with the condition for maximum saccharification being significantly more severe. Over the ranges examined, ethanol production was affected more by reaction temperature than by oxalic acid loading or reaction time: the effect of temperature on ethanol production was significant at the 95% confidence level, while oxalic acid and reaction time were significant at the 90% level. The highest ethanol concentration (20 g/l) was obtained after 48 h, with an ethanol volumetric production rate of 0.42 g ethanol l⁻¹ h⁻¹. The ethanol yield after SSF with P. stipitis was significantly higher than predicted by sequential saccharification and fermentation of substrate pretreated under the same condition. This was attributed to the secretion of beta-glucosidase by P. stipitis: during SSF, free extracellular beta-glucosidase activity was 1.30 pNPG U/g with P. stipitis, versus 0.66 pNPG U/g during saccharification without the yeast. Published by Elsevier Ltd.
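The 2³ full factorial with six axial points described above is a central composite design; a minimal sketch of how such coded design points can be generated (the axial distance alpha and the coded levels are illustrative assumptions, not values taken from the paper):

```python
from itertools import product

# Central composite design for k = 3 factors: a 2^3 full factorial
# (8 corner runs at coded levels ±1), 6 axial runs at ±alpha, and a
# center run -- 15 runs total, matching the 15 pretreatment conditions.
alpha = 1.68  # rotatable axial distance for 3 factors (illustrative)

factorial_points = list(product([-1, 1], repeat=3))           # 8 corner runs
axial_points = [tuple(a if i == j else 0 for i in range(3))   # 6 axial runs
                for j in range(3) for a in (-alpha, alpha)]
center_point = [(0, 0, 0)]                                    # 1 center run

design = factorial_points + axial_points + center_point
print(len(design))  # 15
```

In practice each coded level is then decoded to an actual setting, e.g. a coded temperature of 0 maps to the midpoint of the 132 to 180 °C range.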
Abstract:
Understanding a product's 'end-of-life' is important for reducing the environmental impact of its final disposal. When the initial stages of product development consider end-of-life aspects, which can be established by ecodesign (a proactive approach to environmental management that aims to reduce the total environmental impact of products), it becomes easier to close the loop of materials. End-of-life ecodesign methods generally include more than one end-of-life strategy. Since product complexity varies substantially, some components, systems or sub-systems are easier to recycle, reuse or remanufacture than others. Remanufacturing is an effective way to keep products in a closed loop, reducing both the environmental impacts and the costs of the manufacturing processes. This paper presents ecodesign methods focused on the integration of different end-of-life strategies, with special attention to remanufacturing, given its increasing importance in the international scenario for reducing the life-cycle impacts of products. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
We assess the performance of three unconditionally stable finite-difference time-domain (FDTD) methods for the modeling of doubly dispersive metamaterials: (1) locally one-dimensional FDTD; (2) locally one-dimensional FDTD with Strang splitting; and (3) alternating-direction implicit FDTD. We use both double-negative media and zero-index media as benchmarks.
Abstract:
The airflow velocities and pressures are calculated from a three-dimensional model of the human larynx using the finite element method. The laryngeal airflow is assumed to be incompressible, isothermal, and steady, and to be driven by fixed pressure drops. The influence of different laryngeal profiles (convergent, parallel, and divergent), glottal area, and dimensions of the false vocal folds on the airflow is investigated. The results indicate that vertical and horizontal phase differences in the laryngeal tissue movements are influenced by the nonlinear pressure distribution across the glottal channel, and that the glottal entrance shape influences the air pressure distribution inside the glottis. Additionally, the false vocal folds increase the glottal duct pressure drop by creating a new constricted channel in the larynx, and alter the airflow vortices formed after the true vocal folds. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
On-line leak detection is a major concern for the safe operation of pipelines. Acoustic and mass balance methods are the most important and most extensively applied technologies in field problems. The objective of this work is to compare these leak detection methods with respect to a common reference situation, i.e., the same pipeline and monitoring signals acquired at the inlet and outlet ends. Experimental tests were conducted in a 749 m long laboratory pipeline transporting water as the working fluid. The instrumentation included pressure transducers and electromagnetic flowmeters. Leaks were simulated by opening solenoid valves placed at known positions and previously calibrated to produce known average leak flow rates. The results clearly show the limitations and advantages of each method, and make it quite clear that acoustic and mass balance technologies are, in fact, complementary. In general, an acoustic leak detection system sends out an alarm more rapidly and locates the leak more precisely, provided that the rupture of the pipeline occurs abruptly enough. A mass balance leak detection method, on the other hand, is capable of quantifying the leak flow rate very accurately and of detecting progressive leaks.
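A mass balance detector of the kind compared above can be sketched as follows: it flags a leak when the average inlet-outlet flow imbalance over a sliding window exceeds a threshold. The threshold, window length, and flow values below are illustrative assumptions, not data from the study:

```python
def detect_leak(inlet_flow, outlet_flow, threshold=0.5, window=10):
    """Flag a leak when the mean inlet-outlet imbalance over the last
    `window` samples exceeds `threshold` (same units as the flows).
    Returns the estimated leak flow rate, or None if no leak."""
    n = min(len(inlet_flow), len(outlet_flow), window)
    imbalance = [qi - qo for qi, qo in zip(inlet_flow[-n:], outlet_flow[-n:])]
    mean_imbalance = sum(imbalance) / n
    return mean_imbalance if mean_imbalance > threshold else None

# Steady readings with a simulated leak of ~1.0 unit after sample 10
inlet = [10.0] * 20
outlet = [10.0] * 10 + [9.0] * 10
print(detect_leak(inlet, outlet))  # 1.0
```

The averaging window illustrates the trade-off the abstract reports: a longer window quantifies the leak rate more accurately but delays the alarm relative to an acoustic system.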
Abstract:
Neodymium-doped and undoped aluminum oxide samples were obtained using two different techniques: Pechini and sol-gel. Fine-grained powders were produced by both procedures and analyzed using Scanning Electron Microscopy (SEM) and Thermally Stimulated Luminescence (TSL). Results showed that neodymium ion incorporation is responsible for the creation of two new TSL peaks (125 and 265 °C) and also for the enhancement of the intrinsic TSL peak at 190 °C. An explanation is proposed for these observations. SEM gave the dimensions of the clusters produced by each method, showing that those obtained by Pechini are smaller than those produced by sol-gel, which may also explain the higher emission from the former. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
A polyurethane packed-bed biofilm sequential batch reactor was fed with synthetic substrate simulating the composition of UASB reactor effluents. Two distinct ammonia nitrogen concentrations (125 and 250 mg l⁻¹) were supplied during two sequential long-term experiments of 160 days each (320 days total). Cycles of 24 h with intermittent aeration in 1 h periods were applied, and ethanol was added as a carbon source at the beginning of each anoxic period. Nitrite was the main oxidized nitrogen compound and accumulated only during the aerated phases of the batch cycle. A consistent decrease in nitrite concentration always started immediately after the interruption of the oxygen supply and the addition of the electron donor. Removal of all soluble nitrogen forms to below detection limits was always observed at the end of the 24 h cycles for both initial concentrations. The polyurethane packed-bed matrices and ethanol amendments conferred high process stability. Microbial investigation by cloning suggested that nitrification was carried out by Nitrosomonas-like species whereas denitrification was mediated by unclassified species commonly observed in denitrifying environments. The packed-bed batch bioreactor favored the simultaneous colonization of distinct microbial groups within the immobilized biomass: the biofilm was capable of actively oxidizing ammonium and denitrifying at high rates in intermittent intervals within the 24 h cycles. (c) 2008 Elsevier Ltd. All rights reserved.
Abstract:
This paper deals with the use of simplified methods to predict methane generation in tropical landfills. Methane recovery data obtained on site, as part of a research program being carried out at the Metropolitan Landfill, Salvador, Brazil, are analyzed and used to obtain field methane generation over time. Laboratory data from MSW samples of different ages are presented and discussed, and simplified procedures are applied to estimate the methane generation potential, L₀, and the constant related to the biodegradation rate, k. The first-order decay method is used to fit field and laboratory results. It is demonstrated that, despite the assumptions and the simplicity of the adopted laboratory procedures, the values of L₀ and k obtained are very close to those measured in the field, making this kind of analysis very attractive for first-approach purposes. (C) 2008 Elsevier Ltd. All rights reserved.
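The first-order decay method referred to above is commonly written as Q(t) = L0 k exp(-kt) per unit mass of waste; a minimal sketch with illustrative parameter values (not the values estimated for the Metropolitan Landfill):

```python
import math

def methane_generation_rate(t_years, L0=70.0, k=0.2):
    """First-order decay model: methane generation rate (m3 CH4 per Mg
    of waste per year) t_years after disposal. L0 is the methane
    generation potential (m3/Mg) and k the decay constant (1/yr).
    The default values here are illustrative only."""
    return L0 * k * math.exp(-k * t_years)

# The rate peaks at disposal (L0 * k) and declines exponentially with age
print(round(methane_generation_rate(0), 2))   # 14.0
print(round(methane_generation_rate(10), 2))  # 1.89
```

Fitting the model to measured recovery data, as done in the study, amounts to choosing the L0 and k that best reproduce such a decay curve.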
Abstract:
An updated flow pattern map was developed for CO2 on the basis of the previous Cheng-Ribatski-Wojtan-Thome CO2 flow pattern map [1,2], extending it to a wider range of conditions. A new annular flow to dryout transition (A-D) and a new dryout to mist flow transition (D-M) were proposed. In addition, a bubbly flow region, which generally occurs at high mass velocities and low vapor qualities, was added to the updated map. The updated flow pattern map is applicable to a much wider range of conditions: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/(m² s), heat fluxes from 1.8 to 46 kW/m² and saturation temperatures from -28 to +25 °C (reduced pressures from 0.21 to 0.87). The updated flow pattern map was compared to independent experimental data of flow patterns for CO2 in the literature and predicts the flow patterns well. A database of CO2 two-phase flow pressure drop results from the literature was then set up and compared to the leading empirical pressure drop models: the correlations by Chisholm [3], Friedel [4], Gronnerud [5] and Muller-Steinhagen and Heck [6], a modified Chisholm correlation by Yoon et al. [7] and the flow pattern based model of Moreno Quiben and Thome [8-10]. None of these models was able to predict the CO2 pressure drop data well. Therefore, a new flow pattern based phenomenological model of two-phase frictional pressure drop for CO2 was developed by modifying the model of Moreno Quiben and Thome using the updated flow pattern map, and it predicts the CO2 pressure drop database quite well overall. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Corresponding to the updated flow pattern map presented in Part I of this study, an updated general flow pattern based flow boiling heat transfer model was developed for CO2, using the Cheng-Ribatski-Wojtan-Thome flow boiling heat transfer model [L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside horizontal tubes, Int. J. Heat Mass Transfer 49 (2006) 4082-4094; L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, Erratum to: "New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside tubes" [Heat Mass Transfer 49 (21-22) (2006) 4082-4094], Int. J. Heat Mass Transfer 50 (2007) 391] as the starting basis. The flow boiling heat transfer correlation in the dryout region was updated. In addition, a new mist flow heat transfer correlation for CO2 was developed based on the CO2 data, and a heat transfer method for bubbly flow was proposed for completeness. The updated general flow boiling heat transfer model for CO2 covers all flow regimes and is applicable to a wider range of conditions for horizontal tubes: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/(m² s), heat fluxes from 1.8 to 46 kW/m² and saturation temperatures from -28 to 25 °C (reduced pressures from 0.21 to 0.87). The updated model was compared to a new experimental database containing 1124 data points (790 more than in the previous model [Cheng et al., 2006, 2007]). Good agreement between predictions and experimental data was found in general, with 71.4% of the entire database, and 83.2% of the database excluding the dryout and mist flow data, predicted within ±30%. However, the predictions for the dryout and mist flow regions were less satisfactory, due to the limited number of data points, the higher inaccuracy of such data, scatter of up to 40% in some data sets, significant discrepancies from one experimental study to another, and the difficulties associated with predicting the inception and completion of dryout around the perimeter of horizontal tubes. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Magnetic Barkhausen noise (MBN) is a phenomenon sensitive to several kinds of changes in the microstructure of magnetic materials, as well as to variations in plastic deformation and stress. This fact stimulates the development of MBN-based non-destructive testing (NDT) techniques for analyzing magnetic materials, and the proposal of such a method is the main objective of the present study. The behavior of the MBN signal envelope under simultaneous variations of carbon content and plastic deformation is explained by domain wall dynamics. Additionally, a non-destructive parameter for the characterization of each of these factors is proposed and validated against the experimental results. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Electrical impedance tomography (EIT) captures images of internal features of a body. Electrodes are attached to the boundary of the body, low intensity alternating currents are applied, and the resulting electric potentials are measured. Then, based on the measurements, an estimation algorithm obtains the three-dimensional internal admittivity distribution that corresponds to the image. One of the main goals of medical EIT is to achieve high resolution and an accurate result at low computational cost. However, when the finite element method (FEM) is employed and the corresponding mesh is refined to increase resolution and accuracy, the computational cost increases substantially, especially in the estimation of absolute admittivity distributions. Therefore, we consider in this work a fast iterative solver for the forward problem, which was previously reported in the context of structural optimization. We propose several improvements to this solver to increase its performance in the EIT context. The solver is based on the recycling of approximate invariant subspaces, and it is applied to reduce the EIT computation time for a constant and high resolution finite element mesh. In addition, we consider a powerful preconditioner and provide a detailed pseudocode for the improved iterative solver. The numerical results show the effectiveness of our approach: the proposed algorithm is faster than the preconditioned conjugate gradient (CG) algorithm. The results also show that even on a standard PC without parallelization, a high mesh resolution (more than 150,000 degrees of freedom) can be used for image estimation at a relatively low computational cost. (C) 2010 Elsevier B.V. All rights reserved.
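The preconditioned conjugate gradient algorithm that the proposed solver is benchmarked against can be sketched as follows. This is a plain Jacobi-preconditioned CG baseline on a small symmetric positive-definite system, not the recycled-subspace solver or the preconditioner of the paper:

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=500):
    """Jacobi (diagonal) preconditioned conjugate gradients for A x = b,
    with A symmetric positive definite. Baseline solver only; the
    paper's method additionally recycles approximate invariant
    subspaces between successive forward-problem solves."""
    M_inv = 1.0 / np.diag(A)        # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                   # initial residual
    z = M_inv * r                   # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b)
print(np.allclose(A @ x, b))  # True
```

In EIT the role of A is played by the FEM stiffness matrix, which is re-solved for each current pattern and each update of the admittivity estimate, which is why forward-solver speed dominates the overall cost.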
Abstract:
In this paper, Fourier optics processing methods implemented in a digital holographic microscopy system are presented. The proposed methodology is based on the ability of digital holography to carry out the complete reconstruction of the recorded wave front and, consequently, to determine the phase and intensity distribution in any arbitrary plane located between the object and the recording plane. In digital holographic microscopy, the field produced by the objective lens can thus be reconstructed along its propagation, allowing the reconstruction of the back focal plane of the lens, so that the complex amplitudes of the Fraunhofer diffraction pattern, or equivalently the Fourier transform, of the light distribution across the object can be known. Manipulation of the Fourier transform plane makes possible the design of digital methods of optical processing and image analysis. The proposed method has great practical utility and represents a powerful tool for image analysis and data processing. The theoretical aspects of the method are presented, and its validity is demonstrated using computer-generated holograms and simulated images of microscopic objects. (c) 2007 Elsevier B.V. All rights reserved.
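Numerical reconstruction of the field at an arbitrary plane, as described above, is often implemented with the angular spectrum method: FFT of the field, multiplication by the free-space transfer function, inverse FFT. A minimal sketch, in which wavelength, pixel pitch, and propagation distance are illustrative assumptions:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (angular spectrum
    method). dx is the pixel pitch; evanescent components (for which
    the transfer-function argument goes negative) are suppressed."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)           # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagating forward and then backward recovers the original field
field = np.zeros((64, 64), dtype=complex)
field[28:36, 28:36] = 1.0                         # small square aperture
out = angular_spectrum_propagate(field, 0.5e-6, 1e-6, 100e-6)
back = angular_spectrum_propagate(out, 0.5e-6, 1e-6, -100e-6)
print(np.allclose(field, back))                   # True
```

Because z can take any value, the same routine reconstructs the back focal plane of the objective, where the Fourier transform of the object field becomes directly accessible for filtering.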
Abstract:
Grass reference evapotranspiration (ETo) is an important agrometeorological parameter for climatological and hydrological studies, as well as for irrigation planning and management. There are several methods to estimate ETo, but their performance in different environments is diverse, since all of them have some empirical background. The FAO Penman-Monteith (FAO PM) method has been considered a universal standard for estimating ETo for more than a decade. This method considers many parameters related to the evapotranspiration process: net radiation (Rn), air temperature (T), vapor pressure deficit (Δe), and wind speed (U); and it has presented very good results when compared to data from lysimeters populated with short grass or alfalfa. In some conditions, the use of the FAO PM method is restricted by the lack of input variables. In these cases, when data are missing, the option is to calculate ETo by the FAO PM method using estimated input variables, as recommended by FAO Irrigation and Drainage Paper 56. Based on that, the objective of this study was to evaluate the performance of the FAO PM method for estimating ETo when Rn, Δe, and U data are missing, in Southern Ontario, Canada. Other alternative methods were also tested for the region: Priestley-Taylor, Hargreaves, and Thornthwaite. Data from 12 locations across Southern Ontario, Canada, were used to compare ETo estimated by the FAO PM method with a complete data set and with missing data. The alternative ETo equations were also tested and calibrated for each location. When relative humidity (RH) and U data were missing, the FAO PM method was still a very good option for estimating ETo for Southern Ontario, with RMSE smaller than 0.53 mm day⁻¹. For these cases, U data were replaced by the normal values for the region and Δe was estimated from temperature data. The Priestley-Taylor method was also a good option for estimating ETo when U and Δe data were missing, mainly when calibrated locally (RMSE = 0.40 mm day⁻¹). When Rn was missing, the FAO PM method was not good enough for estimating ETo, with RMSE increasing to 0.79 mm day⁻¹. When only T data were available, the adjusted Hargreaves and modified Thornthwaite methods were better options for estimating ETo than the FAO PM method, since the RMSEs of these methods, respectively 0.79 and 0.83 mm day⁻¹, were significantly smaller than that obtained by FAO PM (RMSE = 1.12 mm day⁻¹). (C) 2009 Elsevier B.V. All rights reserved.
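The Hargreaves method evaluated above, in its standard (uncalibrated) form, estimates ETo from the daily temperature range and extraterrestrial radiation; a minimal sketch using the standard 0.0023 coefficient (the study used a locally adjusted version, whose coefficients are not reproduced here), with illustrative input values:

```python
import math

def hargreaves_eto(t_mean, t_max, t_min, ra):
    """Standard Hargreaves equation:
        ETo = 0.0023 * (Tmean + 17.8) * sqrt(Tmax - Tmin) * Ra
    with temperatures in °C and Ra, the extraterrestrial radiation,
    expressed in mm/day of equivalent evaporation."""
    return 0.0023 * (t_mean + 17.8) * math.sqrt(t_max - t_min) * ra

# A summer day with illustrative (not measured) values
eto = hargreaves_eto(t_mean=22.0, t_max=28.0, t_min=16.0, ra=16.5)
print(round(eto, 2))  # 5.23 (mm/day)
```

Because only temperature is required as measured input (Ra depends just on latitude and day of year), methods of this form are the natural fallback when Rn, Δe, and U are all missing.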