992 results for Gradient methods


Relevance: 20.00%

Publisher:

Abstract:

On-line leak detection is a major concern for the safe operation of pipelines. Acoustic and mass-balance methods are the most important and most widely applied technologies in field applications. The objective of this work is to compare these leak detection methods with respect to a given reference situation, i.e., the same pipeline and monitoring signals acquired at the inlet and outlet ends. Experimental tests were conducted in a 749 m long laboratory pipeline transporting water as the working fluid. The instrumentation included pressure transducers and electromagnetic flowmeters. Leaks were simulated by opening solenoid valves placed at known positions and previously calibrated to produce known average leak flow rates. Results have clearly shown the limitations and advantages of each method. It is also quite clear that the acoustic and mass-balance technologies are, in fact, complementary. In general, an acoustic leak detection system sends out an alarm more rapidly and locates the leak more precisely, provided that the rupture of the pipeline occurs abruptly enough. On the other hand, a mass-balance leak detection method is capable of quantifying the leak flow rate very accurately and of detecting progressive leaks.
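As a rough illustration of the mass-balance principle mentioned above (a sketch, not the authors' detection system), the snippet below flags a leak when the moving-average difference between inlet and outlet flow rates exceeds a threshold; the signal values, window length and threshold are hypothetical.

```python
import numpy as np

def mass_balance_leak_check(q_in, q_out, window=60, threshold=0.05):
    """Flag a leak when the moving-average inlet/outlet flow imbalance
    exceeds `threshold` (same units as the flow signals).
    q_in, q_out: 1-D arrays of flow-rate samples (e.g. from flowmeters).
    Returns (leak_detected, estimated_leak_rate)."""
    imbalance = np.asarray(q_in, float) - np.asarray(q_out, float)
    # simple moving average to filter measurement noise
    kernel = np.ones(window) / window
    smoothed = np.convolve(imbalance, kernel, mode="valid")
    leak_rate = smoothed[-1]              # latest averaged imbalance
    return leak_rate > threshold, leak_rate

# hypothetical usage with synthetic signals
t = np.arange(600)
q_in = 10.0 + 0.05 * np.random.randn(600)        # inlet flow (L/s)
q_out = q_in - np.where(t > 300, 0.2, 0.0)       # leak of 0.2 L/s after t = 300
print(mass_balance_leak_check(q_in, q_out))
```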

Relevance: 20.00%

Publisher:

Abstract:

The Generalized Finite Element Method (GFEM) is employed in this paper for the numerical analysis of three-dimensional solids under nonlinear behavior. A brief summary of the GFEM as well as a description of the formulation of the hexahedral element based on the proposed enrichment strategy are initially presented. Next, in order to introduce the nonlinear analysis of solids, two constitutive models are briefly reviewed: Lemaitre's model, in which damage and plasticity are coupled, and Mazars's damage model, suitable for concrete under increasing loading. Both models are employed in the framework of a nonlocal approach to ensure solution objectivity. In the numerical analyses carried out, a selective enrichment of the approximation at regions of concern in the domain (mainly those with high strain and damage gradients) is exploited. Such a possibility makes the three-dimensional analysis less expensive and more practicable, since re-meshing resources, characteristic of h-adaptivity, can be minimized. Moreover, the combination of three-dimensional analysis and selective enrichment provides a valuable tool for a better description of the distributions of both damage and plastic strain.
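For reference, the partition-of-unity approximation that underlies the GFEM enrichment discussed above can be written, in generic notation (the symbols are not taken from the paper), as

```latex
\[
  u^h(\mathbf{x}) \;=\; \sum_{i=1}^{n} N_i(\mathbf{x})\, u_i
  \;+\; \sum_{i \in I_e} N_i(\mathbf{x}) \sum_{j=1}^{m_i} \phi_{ij}(\mathbf{x})\, b_{ij},
\]
```

where the N_i are the standard (hexahedral) shape functions forming the partition of unity, I_e is the set of enriched nodes, the phi_ij are enrichment functions, and u_i, b_ij are the ordinary and additional nodal unknowns. Selective enrichment simply restricts I_e to the regions with high strain and damage gradients.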

Relevance: 20.00%

Publisher:

Abstract:

Neodymium-doped and undoped aluminum oxide samples were obtained using two different techniques: Pechini and sol-gel. Fine-grained powders were produced by both procedures and were analyzed using Scanning Electron Microscopy (SEM) and Thermo-Stimulated Luminescence (TSL). Results showed that the incorporation of neodymium ions is responsible for the creation of two new TSL peaks (125 and 265 degrees C) and also for the enhancement of the intrinsic TSL peak at 190 degrees C. An explanation was proposed for these observations. SEM provided the dimensions of the clusters produced by each method, showing that those obtained by the Pechini method are smaller than those produced by sol-gel, which can also explain the higher emission obtained with the former. (C) 2010 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Publisher:

Abstract:

This paper deals with the use of simplified methods to predict methane generation in tropical landfills. Methane recovery data obtained on site as part of a research program being carried out at the Metropolitan Landfill, Salvador, Brazil, are analyzed and used to obtain the field methane generation over time. Laboratory data from MSW samples of different ages are presented and discussed, and simplified procedures to estimate the methane generation potential, L0, and the first-order biodegradation rate constant, k, are applied. The first-order decay method is used to fit field and laboratory results. It is demonstrated that, despite the assumptions and the simplicity of the adopted laboratory procedures, the values of L0 and k obtained are very close to those measured in the field, thus making this kind of analysis very attractive for first-approach purposes. (C) 2008 Elsevier Ltd. All rights reserved.
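A minimal sketch of the first-order decay fit described above is shown below, assuming the common cumulative form Q_cum(t) = L0 * (1 - exp(-k t)); the data points, units and initial guesses are placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def cumulative_methane(t, L0, k):
    """First-order decay model: cumulative CH4 yield per unit mass of waste.
    L0: methane generation potential (e.g. m^3 CH4 / Mg waste)
    k : first-order decay constant (1/yr)"""
    return L0 * (1.0 - np.exp(-k * t))

# hypothetical data: time (yr) vs. cumulative CH4 yield (m^3/Mg)
t_obs = np.array([0.5, 1, 2, 3, 5, 8])
y_obs = np.array([12., 22., 38., 50., 64., 75.])

(L0_fit, k_fit), _ = curve_fit(cumulative_methane, t_obs, y_obs, p0=(80.0, 0.2))
print(f"L0 = {L0_fit:.1f} m^3/Mg, k = {k_fit:.2f} 1/yr")
```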

Relevance: 20.00%

Publisher:

Abstract:

An updated flow pattern map was developed for CO2 on the basis of the previous Cheng-Ribatski-Wojtan-Thome CO2 flow pattern map [1,2] to extend the flow pattern map to a wider range of conditions. A new annular flow to dryout transition (A-D) and a new dryout to mist flow transition (D-M) were proposed here. In addition, a bubbly flow region which generally occurs at high mass velocities and low vapor qualities was added to the updated flow pattern map. The updated flow pattern map is applicable to a much wider range of conditions: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/(m² s), heat fluxes from 1.8 to 46 kW/m² and saturation temperatures from -28 to +25 degrees C (reduced pressures from 0.21 to 0.87). The updated flow pattern map was compared to independent experimental data of flow patterns for CO2 in the literature and it predicts the flow patterns well. Then, a database of CO2 two-phase flow pressure drop results from the literature was set up and the database was compared to the leading empirical pressure drop models: the correlations by Chisholm [3], Friedel [4], Gronnerud [5] and Muller-Steinhagen and Heck [6], a modified Chisholm correlation by Yoon et al. [7] and the flow pattern based model of Moreno Quiben and Thome [8-10]. None of these models was able to predict the CO2 pressure drop data well. Therefore, a new flow pattern based phenomenological model of two-phase flow frictional pressure drop for CO2 was developed by modifying the model of Moreno Quiben and Thome using the updated flow pattern map in this study and it predicts the CO2 pressure drop database quite well overall. (C) 2007 Elsevier Ltd. All rights reserved.
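As a trivial helper (not part of the flow pattern map itself), one might check whether an operating point lies inside the stated applicability range of the updated map; the limits below are the ones quoted above.

```python
def within_map_range(d_mm, G, q_kW, T_sat_C):
    """Check an operating point against the stated applicability range of the
    updated CO2 flow pattern map (d: tube diameter in mm, G: mass velocity in
    kg/(m^2 s), q: heat flux in kW/m^2, T_sat: saturation temperature in C)."""
    return (0.6 <= d_mm <= 10.0 and
            50.0 <= G <= 1500.0 and
            1.8 <= q_kW <= 46.0 and
            -28.0 <= T_sat_C <= 25.0)

print(within_map_range(d_mm=3.0, G=400.0, q_kW=10.0, T_sat_C=-10.0))  # True
```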

Relevance: 20.00%

Publisher:

Abstract:

Corresponding to the updated flow pattern map presented in Part I of this study, an updated general flow pattern based flow boiling heat transfer model was developed for CO2 using the Cheng-Ribatski-Wojtan-Thome [L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside horizontal tubes, Int. J. Heat Mass Transfer 49 (2006) 4082-4094; L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, Erratum to: "New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside tubes" [Heat Mass Transfer 49 (21-22) (2006) 4082-4094], Int. J. Heat Mass Transfer 50 (2007) 391] flow boiling heat transfer model as the starting basis. The flow boiling heat transfer correlation in the dryout region was updated. In addition, a new mist flow heat transfer correlation for CO2 was developed based on the CO2 data, and a heat transfer method for bubbly flow was proposed for completeness' sake. The updated general flow boiling heat transfer model for CO2 covers all flow regimes and is applicable to a wider range of conditions for horizontal tubes: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/(m² s), heat fluxes from 1.8 to 46 kW/m² and saturation temperatures from -28 to 25 degrees C (reduced pressures from 0.21 to 0.87). The updated general flow boiling heat transfer model was compared to a new experimental database which contains 1124 data points (790 more than in the previous model [Cheng et al., 2006, 2007]) in this study. Good agreement between the predicted and experimental data was found in general, with 71.4% of the entire database and 83.2% of the database without the dryout and mist flow data predicted within +/-30%. However, the predictions for the dryout and mist flow regions were less satisfactory due to the limited number of data points, the higher inaccuracy in such data, scatter in some data sets ranging up to 40%, significant discrepancies from one experimental study to another and the difficulties associated with predicting the inception and completion of dryout around the perimeter of the horizontal tubes. (C) 2007 Elsevier Ltd. All rights reserved.
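The agreement statistic quoted above, the fraction of points predicted within ±30%, can be computed as in this small sketch; the arrays are placeholders, not the actual database.

```python
import numpy as np

def fraction_within(pred, meas, tol=0.30):
    """Fraction of points whose relative prediction error is within +/- tol."""
    pred, meas = np.asarray(pred, float), np.asarray(meas, float)
    rel_err = np.abs(pred - meas) / np.abs(meas)
    return np.mean(rel_err <= tol)

# placeholder heat transfer coefficients (W/m^2 K), not the actual database
h_meas = np.array([8000., 9500., 12000., 15000., 7000.])
h_pred = np.array([8600., 9100., 14000., 13500., 9500.])
print(f"{100 * fraction_within(h_pred, h_meas):.1f}% within +/-30%")
```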

Relevance: 20.00%

Publisher:

Abstract:

Electrical impedance tomography (EIT) captures images of internal features of a body. Electrodes are attached to the boundary of the body, low intensity alternating currents are applied, and the resulting electric potentials are measured. Then, based on the measurements, an estimation algorithm obtains the three-dimensional internal admittivity distribution that corresponds to the image. One of the main goals of medical EIT is to achieve high resolution and an accurate result at low computational cost. However, when the finite element method (FEM) is employed and the corresponding mesh is refined to increase resolution and accuracy, the computational cost increases substantially, especially in the estimation of absolute admittivity distributions. Therefore, we consider in this work a fast iterative solver for the forward problem, which was previously reported in the context of structural optimization. We propose several improvements to this solver to increase its performance in the EIT context. The solver is based on the recycling of approximate invariant subspaces, and it is applied to reduce the EIT computation time for a constant and high resolution finite element mesh. In addition, we consider a powerful preconditioner and provide a detailed pseudocode for the improved iterative solver. The numerical results show the effectiveness of our approach: the proposed algorithm is faster than the preconditioned conjugate gradient (CG) algorithm. The results also show that even on a standard PC without parallelization, a high mesh resolution (more than 150,000 degrees of freedom) can be used for image estimation at a relatively low computational cost. (C) 2010 Elsevier B.V. All rights reserved.
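For context only, a generic preconditioned conjugate gradient solver (the baseline against which the recycling solver is compared) could be sketched as follows; this is not the authors' improved algorithm, and the Jacobi preconditioner and test matrix are assumptions.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=1000):
    """Generic preconditioned conjugate gradient for SPD systems A x = b.
    M_inv: callable applying the preconditioner inverse to a vector."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small SPD test system with a Jacobi (diagonal) preconditioner
A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
b = np.array([1., 2., 3.])
x = pcg(A, b, M_inv=lambda v: v / np.diag(A))
print(np.allclose(A @ x, b))
```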

Relevance: 20.00%

Publisher:

Abstract:

In this paper, processing methods of Fourier optics implemented in a digital holographic microscopy system are presented. The proposed methodology is based on the ability of digital holography to carry out the full reconstruction of the recorded wave front and, consequently, to determine the phase and intensity distribution in any arbitrary plane located between the object and the recording plane. In this way, in digital holographic microscopy the field produced by the objective lens can be reconstructed along its propagation, allowing the reconstruction of the back focal plane of the lens, so that the complex amplitudes of the Fraunhofer diffraction pattern, or equivalently the Fourier transform, of the light distribution across the object can be obtained. Manipulation of the Fourier-transform plane makes possible the design of digital methods of optical processing and image analysis. The proposed method has great practical utility and represents a powerful tool in image analysis and data processing. The theoretical aspects of the method are presented, and its validity is demonstrated using computer-generated holograms and simulated images of microscopic objects. (c) 2007 Elsevier B.V. All rights reserved.
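As an illustrative sketch only (not the paper's reconstruction chain), numerically accessing and manipulating a Fourier plane of a sampled complex field reduces to an FFT, a filter applied in the transformed domain, and an inverse FFT; the field below is synthetic.

```python
import numpy as np

# synthetic complex object field (amplitude + phase), a stand-in for a
# numerically reconstructed wave front from a digital hologram
N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
field = np.exp(-(x**2 + y**2) / (2 * 40.0**2)) * np.exp(1j * 0.05 * x)

# Fourier (back-focal-plane-like) domain of the field
spectrum = np.fft.fftshift(np.fft.fft2(field))

# example of Fourier-plane processing: a circular low-pass filter
lowpass = (x**2 + y**2) < 30**2
filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * lowpass))

print(filtered.shape, np.abs(filtered).max())
```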

Relevance: 20.00%

Publisher:

Abstract:

As is well known, Hessian-based adaptive filters (such as the recursive-least squares algorithm (RLS) for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms [such as the least-mean-squares algorithm (LMS) or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms of different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error for convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
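To make the combination idea concrete, here is a minimal sketch of a convex combination of an LMS and an RLS filter, with the mixing parameter adapted through a sigmoid as is common in the convex-combination literature; the step sizes, the clipping of the auxiliary parameter and the test signals are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

def convex_combo_lms_rls(x, d, L=8, mu_lms=0.01, lam=0.99, mu_a=100.0):
    """Convex combination y = eta*y_lms + (1-eta)*y_rls, with eta = sigmoid(a).
    x: input signal, d: desired signal, L: filter length."""
    w1 = np.zeros(L); w2 = np.zeros(L)            # LMS and RLS weights
    P = 100.0 * np.eye(L)                         # RLS inverse correlation matrix
    a = 0.0                                       # combination parameter
    y_out = np.zeros(len(d))
    for n in range(L, len(d)):
        u = x[n - L + 1:n + 1][::-1]              # regressor (most recent first)
        y1, y2 = w1 @ u, w2 @ u
        eta = 1.0 / (1.0 + np.exp(-a))
        y = eta * y1 + (1.0 - eta) * y2
        e, e1, e2 = d[n] - y, d[n] - y1, d[n] - y2
        w1 += mu_lms * e1 * u                     # LMS update
        k = P @ u / (lam + u @ P @ u)             # RLS gain
        w2 += k * e2
        P = (P - np.outer(k, u @ P)) / lam
        a += mu_a * e * (y1 - y2) * eta * (1 - eta)   # sigmoid-gradient update
        a = np.clip(a, -4.0, 4.0)                 # keep eta away from 0 and 1
        y_out[n] = y
    return y_out

# hypothetical system-identification run
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = rng.standard_normal(8)
d = np.convolve(x, h)[:5000] + 0.01 * rng.standard_normal(5000)
y = convex_combo_lms_rls(x, d)
print(np.mean((d[4000:] - y[4000:]) ** 2))        # steady-state MSE
```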

Relevance: 20.00%

Publisher:

Abstract:

This work proposes the use of evolutionary computation to jointly solve the multiuser channel estimation (MuChE) and maximum-likelihood detection problems, both related to direct sequence code division multiple access (DS/CDMA). The effectiveness of the proposed heuristic approach is proven by comparing performance and complexity figures of merit with those obtained by traditional methods found in the literature. Simulation results considering a genetic algorithm (GA) applied to multipath DS/CDMA MuChE and multi-user detection (MuD) show that the proposed genetic algorithm multi-user channel estimation (GAMuChE) yields a normalized mean square error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both maximum-likelihood multi-user channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multi-user detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity when compared to the optimum multi-user detector (OMuD). In addition, the complexity of the GAMuChE and GAMuD algorithms was (jointly) analyzed in terms of the number of operations necessary to reach convergence, and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
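For orientation only, the core loop of a real-coded genetic algorithm of the general kind used for GAMuChE/GAMuD might look like this generic sketch; the fitness function, population size and operators are placeholders and do not reproduce the paper's configuration.

```python
import numpy as np

def genetic_search(fitness, dim, pop_size=40, generations=200,
                   mutation_scale=0.1, rng=np.random.default_rng(1)):
    """Minimal real-coded GA: tournament selection, arithmetic crossover,
    Gaussian mutation, elitism. Maximizes `fitness`."""
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argmax(scores)].copy()
        # tournament selection: keep the better of two random individuals
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # arithmetic crossover between consecutive parents
        alpha = rng.uniform(size=(pop_size, 1))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation plus elitism
        children += mutation_scale * rng.standard_normal(children.shape)
        children[0] = elite
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

# toy example: recover a hypothetical channel vector by maximizing -MSE
true_h = np.array([0.8, -0.5, 0.3, 0.1])
best = genetic_search(lambda h: -np.sum((h - true_h) ** 2), dim=4)
print(np.round(best, 2))
```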

Relevance: 20.00%

Publisher:

Abstract:

Grass reference evapotranspiration (ETo) is an important agrometeorological parameter for climatological and hydrological studies, as well as for irrigation planning and management. There are several methods to estimate ETo, but their performance in different environments is diverse, since all of them have some empirical background. The FAO Penman-Monteith (FAO PM) method has been considered a universal standard to estimate ETo for more than a decade. This method considers many parameters related to the evapotranspiration process: net radiation (Rn), air temperature (T), vapor pressure deficit (Delta e), and wind speed (U); and has presented very good results when compared to data from lysimeters populated with short grass or alfalfa. In some conditions, the use of the FAO PM method is restricted by the lack of input variables. In these cases, when data are missing, the option is to calculate ETo by the FAO PM method using estimated input variables, as recommended by FAO Irrigation and Drainage Paper 56. Based on that, the objective of this study was to evaluate the performance of the FAO PM method to estimate ETo when Rn, Delta e, and U data are missing, in Southern Ontario, Canada. Other alternative methods were also tested for the region: Priestley-Taylor, Hargreaves, and Thornthwaite. Data from 12 locations across Southern Ontario, Canada, were used to compare ETo estimated by the FAO PM method with a complete data set and with missing data. The alternative ETo equations were also tested and calibrated for each location. When relative humidity (RH) and U data were missing, the FAO PM method was still a very good option for estimating ETo for Southern Ontario, with RMSE smaller than 0.53 mm day⁻¹. For these cases, U data were replaced by the normal values for the region and Delta e was estimated from temperature data. The Priestley-Taylor method was also a good option for estimating ETo when U and Delta e data were missing, mainly when calibrated locally (RMSE = 0.40 mm day⁻¹). When Rn was missing, the FAO PM method was not good enough for estimating ETo, with RMSE increasing to 0.79 mm day⁻¹. When only T data were available, the adjusted Hargreaves and modified Thornthwaite methods were better options for estimating ETo than the FAO PM method, since the RMSEs from these methods, respectively 0.79 and 0.83 mm day⁻¹, were significantly smaller than that obtained by FAO PM (RMSE = 1.12 mm day⁻¹). (C) 2009 Elsevier B.V. All rights reserved.
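As one concrete example of a temperature-only alternative mentioned above, the commonly cited Hargreaves-Samani form of ETo is sketched below (the textbook equation, not the locally adjusted version developed in the study); Ra is extraterrestrial radiation expressed as equivalent evaporation in mm day⁻¹, and the sample inputs are hypothetical.

```python
def eto_hargreaves(t_max, t_min, ra_mm_day):
    """Hargreaves-Samani reference evapotranspiration (mm/day).
    t_max, t_min: daily maximum/minimum air temperature (deg C)
    ra_mm_day: extraterrestrial radiation as equivalent evaporation (mm/day)"""
    t_mean = (t_max + t_min) / 2.0
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * (t_max - t_min) ** 0.5

# hypothetical summer day in Southern Ontario
print(round(eto_hargreaves(t_max=27.0, t_min=15.0, ra_mm_day=16.5), 2))
```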

Relevance: 20.00%

Publisher:

Abstract:

Causal inference methods - mainly path analysis and structural equation modeling - offer plant physiologists information about cause-and-effect relationships among plant traits. Recently, an unusual approach to causal inference through stepwise variable selection has been proposed and used in various works on plant physiology. This approach should not be considered correct from a biological point of view. Here, it is explained why stepwise variable selection should not be used for causal inference, and it is shown what strange conclusions can be drawn from such an analysis when one aims to interpret cause-and-effect relationships among plant traits.

Relevance: 20.00%

Publisher:

Abstract:

The effects of thermal treatment on phenolic compounds, on type 2 diabetes-relevant alpha-glucosidase and alpha-amylase inhibition, and on hypertension-relevant angiotensin I-converting enzyme (ACE) inhibition were investigated in selected bean (Phaseolus vulgaris L.) cultivars from Peru and Brazil using in vitro models. Thermal processing by autoclaving decreased the total phenolic content in all cultivars, whereas the 1,1-diphenyl-2-picrylhydrazyl radical scavenging-linked antioxidant activity increased among Peruvian cultivars. Alpha-amylase and alpha-glucosidase inhibitory activities were reduced significantly after heat treatment (73-94% and 8-52%, respectively), whereas ACE inhibitory activity was enhanced (9-15%). Specific phenolic acids such as chlorogenic and caffeic acid increased moderately following thermal treatment (2-16% and 5-35%, respectively). No correlation was found between phenolic content and the functionality associated with antidiabetic and antihypertensive potential, indicating that non-phenolic compounds may be involved. Thermally processed bean cultivars are interesting sources of phenolic acids linked to high antioxidant activity and show potential for hypertension prevention.

Relevance: 20.00%

Publisher:

Abstract:

The antioxidant capacity of striped sunflower seed cotyledon extracts, obtained by sequential extraction with solvents of different polarities, was evaluated by three different in vitro methods: the ferric reducing/antioxidant power (FRAP), 2,2-diphenyl-1-picrylhydrazyl radical (DPPH) and oxygen radical absorbance capacity (ORAC) assays. In the three methods, the aqueous extract at 30 mu g/ml showed a higher antioxidant capacity (FRAP, 45.27 mu mol; DPPH, 50.18%; ORAC, 1.5 Trolox equivalents) than the ethanolic extract (FRAP, 32.17 mu mol; DPPH, 15.21%; ORAC, 0.50 Trolox equivalents). When compared with the synthetic antioxidant butylated hydroxytoluene, the antioxidant capacity of the aqueous extract varied from 45% to 66%, depending on the method used. The high antioxidant capacity observed for the aqueous extract of the studied sunflower seed suggests that intake of this seed may prevent in vivo oxidative reactions responsible for the development of several diseases, such as cancer.

Relevance: 20.00%

Publisher:

Abstract:

Anatoxin-a(s) is a potent irreversible inhibitor of the enzyme acetylcholinesterase, with a unique N-hydroxyguanidine methylphosphate ester chemical structure. Determination of this toxin in environmental samples is hampered by the lack of specific methods for its detection. Using the toxic strain of Anabaena lemmermani PH-160 B as a positive control, the fragmentation characteristics of anatoxin-a(s) under collision-induced dissociation conditions have been investigated and new LC-MS/MS methods are proposed. Recommended ion transitions for correct detection of this toxin are 253 > 58, 253 > 159, 235 > 98 and 235 > 96. Chromatographic separation is better achieved under HILIC conditions employing a ZIC-HILIC column. This method was used to confirm for the first time the production of anatoxin-a(s) by strains of Anabaena oumiana ITEP-025 and ITEP-026. Considering that no standard solutions are commercially available, our results will be of significant use for the correct identification of this toxin by LC-MS/MS. (C) 2009 Elsevier Ltd. All rights reserved.