5 results for Optimization parameters
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N(s) elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the very traditional Astronomical Image Processing System package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized).
As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
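The fitting scheme described in this abstract can be sketched in code: a model image built as a sum of elliptical Gaussians, a squared-difference cost, and a cross-entropy loop that repeatedly samples candidate parameter sets and refits a Gaussian sampling distribution to the elite candidates. This is a minimal illustrative sketch, not the authors' implementation; the image size, sample size, elite fraction, and iteration count below are assumed values.

```python
import numpy as np

def elliptical_gaussian(x, y, x0, y0, amp, a, ecc, theta):
    """Elliptical Gaussian with peak (x0, y0), amplitude amp,
    semimajor axis a, eccentricity ecc, orientation angle theta."""
    b = a * np.sqrt(1.0 - ecc**2)  # semiminor axis from eccentricity
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return amp * np.exp(-0.5 * ((xr / a)**2 + (yr / b)**2))

def model_image(params, x, y, n_src):
    """Sum of n_src elliptical Gaussians; params holds 6 entries per source."""
    img = np.zeros_like(x)
    for k in range(n_src):
        img += elliptical_gaussian(x, y, *params[6 * k:6 * k + 6])
    return img

def cross_entropy_fit(observed, x, y, n_src, lo, hi,
                      n_samples=200, elite_frac=0.1, iters=60, seed=0):
    """Minimize sum((model - observed)**2) over the 6*n_src parameters
    by cross-entropy: sample, rank by cost, refit mean/std to the elite."""
    rng = np.random.default_rng(seed)
    mu = (lo + hi) / 2.0
    sigma = (hi - lo) / 2.0
    n_elite = max(2, int(elite_frac * n_samples))
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(n_samples, mu.size))
        samples = np.clip(samples, lo, hi)
        costs = np.array([np.sum((model_image(s, x, y, n_src) - observed)**2)
                          for s in samples])
        elite = samples[np.argsort(costs)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-9
    return mu
```

With a synthetic single-source image the loop recovers the peak position to within a fraction of a pixel; real interferometric maps would add the noise model and multi-source bookkeeping the abstract discusses.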
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well-known that the usual large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost-function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
Abstract:
The final contents of total and individual trans-fatty acids of sunflower oil produced during the deacidification step of physical refining were obtained using a computational simulation program that considered cis-trans isomerization reaction features for oleic, linoleic, and linolenic acids attached to the glycerol part of triacylglycerols. The impact of process variables, such as temperature and liquid flow rate, and of equipment configuration parameters, such as liquid height, diameter, and number of stages, that influence the retention time of the oil in the equipment was analyzed using the response-surface methodology (RSM). The computational simulation and the RSM results were used in two different optimization methods, aiming to minimize final levels of total and individual trans-fatty acids (trans-FA) while keeping neutral oil loss and final oil acidity at low values. The main goal of this work was to show that computational simulation, based on a careful modeling of the reaction system and combined with optimization, could be an important tool for indicating better processing conditions in industrial physical refining plants of vegetable oils, concerning trans-FA formation.
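The RSM step used in this abstract amounts to fitting a second-order polynomial to observed responses and reading off the conditions at its stationary point. A generic two-factor sketch (the factor names and data are illustrative, not the paper's process variables):

```python
import numpy as np

def quadratic_design(X):
    """Design matrix for a two-factor second-order RSM model:
    1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

def fit_rsm(X, y):
    """Least-squares coefficients of the quadratic response surface."""
    beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
    return beta

def stationary_point(beta):
    """Solve grad = 0 for the fitted surface
    y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    b0, b1, b2, b3, b4, b5 = beta
    A = np.array([[2 * b3, b5], [b5, 2 * b4]])
    return np.linalg.solve(A, -np.array([b1, b2]))
```

For a response generated by a known quadratic, the recovered stationary point matches its minimum exactly; in the paper the responses come from the deacidification simulation, and the stationary point is screened against the neutral-oil-loss and acidity constraints.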
Abstract:
Optimization of the photo-Fenton degradation of copper phthalocyanine blue was achieved by response surface methodology (RSM), constructed with the aid of a sequential injection analysis (SIA) system coupled to a homemade photo-reactor. The highest degradation percentage was obtained under the following conditions: [H2O2]/[phthalocyanine] = 7, [H2O2]/[FeSO4] = 10, pH = 2.5, and stopped-flow time in the photo-reactor = 30 s. The SIA system was designed to prepare a monosegment containing the reagents and sample, to pump it toward the photo-reactor for the specified time, and to send the products to a flow-through spectrophotometer for monitoring the color reduction of the dye. Changes in parameters such as reagent molar ratios, residence time, and pH were made through modifications in the software commanding the SIA system, without the need for physical reconfiguration of reagents around the selection valve. The proposed procedure and system fed the statistical program with degradation data for fast construction of response surface plots. After optimization, 97% of the dye was degraded.
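The optimized molar ratios reported above fix the reagent concentrations once a dye concentration is chosen; a trivial helper makes the arithmetic explicit (the function name and the example dye concentration are illustrative, not from the paper):

```python
def fenton_reagent_concentrations(dye_molar, r_h2o2_dye=7.0, r_h2o2_fe=10.0):
    """Reagent concentrations (mol/L) from the optimized molar ratios
    [H2O2]/[dye] = 7 and [H2O2]/[FeSO4] = 10 reported in the abstract."""
    h2o2 = r_h2o2_dye * dye_molar   # peroxide scales with the dye
    fe = h2o2 / r_h2o2_fe           # iron(II) scales with the peroxide
    return h2o2, fe
```

For example, a 1e-4 mol/L dye solution would call for 7e-4 mol/L H2O2 and 7e-5 mol/L FeSO4 at the optimum found by the RSM.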
Abstract:
This study aimed to optimize the rheological properties of probiotic yoghurts supplemented with skimmed milk powder (SMP), whey protein concentrate (WPC), and sodium caseinate (Na-Cn), using a simplex-centroid experimental design for mixture modeling. It included seven batches/trials: three supplemented with each type of dairy protein used, three corresponding to the binary mixtures, and one to the ternary mixture, in order to increase the protein concentration by 1 g 100 g(-1) of final product. A control experiment was prepared without supplementing the milk base. Processed milk bases were fermented at 42 °C until pH 4.5 using a starter culture blend consisting of Streptococcus thermophilus, Lactobacillus delbrueckii subsp. bulgaricus, and Bifidobacterium animalis subsp. lactis. The kinetics of acidification was followed during the fermentation period, as well as the physico-chemical analyses, enumeration of viable bacteria, and rheological characteristics of the yoghurts. Models were adjusted to the results (kinetic responses, counts of viable bacteria, and rheological parameters) through three regression models (linear, quadratic, and special cubic) applied to mixtures. The results showed that the addition of milk proteins slightly affected the acidification profile and the counts of S. thermophilus and B. animalis subsp. lactis, but the effect was significant for L. delbrueckii subsp. bulgaricus. Partially replacing SMP (45 g/100 g) with WPC or Na-Cn simultaneously enhanced the rheological properties of probiotic yoghurts, taking into account the kinetics of acidification and the enumeration of viable bacteria.
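The simplex-centroid layout described above (three pure blends, three binary 50:50 blends, and one ternary centroid) and the special cubic mixture model can be sketched generically; the code below is a minimal illustration of the design geometry, not the study's statistical software:

```python
import numpy as np
from itertools import combinations

def simplex_centroid(n):
    """Simplex-centroid design: every nonempty subset of the n components
    blended in equal proportions (pure blends, 50:50 binaries, ..., centroid)."""
    points = []
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            p = np.zeros(n)
            p[list(idx)] = 1.0 / k
            points.append(p)
    return np.array(points)

def special_cubic_design(X):
    """Model matrix for the three-component special cubic mixture model:
    b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3 + b123*x1*x2*x3."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3, x1 * x2 * x3])
```

For three components the design has exactly seven runs, matching the seven batches in the study, and it saturates the special cubic model: the 7x7 model matrix has full rank, so the seven coefficients are uniquely determined by the seven blends.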