926 results for MODELING APPROACH
Abstract:
An implementation of a computational tool that generates summaries from source texts by means of a connectionist approach (artificial neural networks) is presented. Among the contributions this work intends to bring to natural language processing research, the use of a more biologically plausible connectionist architecture and training procedure for automatic summarization is emphasized. This choice relies on the expectation that it may increase computational efficiency when compared to the so-called biologically implausible algorithms.
Abstract:
The conditions for maximizing the enzymatic activity of lipase entrapped in a sol-gel matrix were determined for different vegetable oils using an experimental design. The effects of pH, temperature, and biocatalyst loading on lipase activity were assessed using a central composite experimental design, leading to a set of 13 assays, together with response surface analysis. For canola oil and entrapped lipase, statistical analyses showed significant effects for pH and temperature, as well as for the pH-temperature and temperature-biocatalyst loading interactions. For olive oil and entrapped lipase, pH was the only statistically significant variable. This study demonstrated that response surface analysis is an appropriate methodology for maximizing the percentage of hydrolysis as a function of pH, temperature, and lipase loading.
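For reference, a central composite design of this kind is normally used to fit a second-order polynomial response surface; a generic sketch of the model (standard RSM background, not the paper's fitted equation) is

    y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon

where y is the response (here, the percentage of hydrolysis), the x_i are the coded factors (pH, temperature, biocatalyst loading), and the coefficients are fitted by least squares; the stationary point of the fitted surface gives the candidate optimum.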
Abstract:
BACKGROUND: The combined effects of vanillin and syringaldehyde on xylitol production by Candida guilliermondii were studied using response surface methodology (RSM). A 2² full-factorial central composite design was employed for the experimental design and analysis of the results. RESULTS: Maximum xylitol productivity (Q_p = 0.74 g L⁻¹ h⁻¹) and yield (Y_P/S = 0.81 g g⁻¹) can be attained by adding only vanillin, at 2.0 g L⁻¹, to the fermentation medium. These predictions closely matched the experimental results (0.69 ± 0.04 g L⁻¹ h⁻¹ and 0.77 ± 0.01 g g⁻¹), indicating good agreement with the predicted values. C. guilliermondii was able to convert vanillin completely after 24 h of fermentation, with a 94% yield of vanillyl alcohol. CONCLUSIONS: The bioconversion of xylose into xylitol by C. guilliermondii is strongly dependent on the combination of aldehydes and phenolics in the fermentation medium. Vanillin is a phenolic compound able to improve xylitol production by the yeast. The conversion of vanillin to vanillyl alcohol reveals the potential of this yeast for medium detoxification.
Abstract:
Two screenings of commercial lipases were performed to find a lipase with superior performance for the integrated production of biodiesel and monoglycerides. The first screening was carried out under alcoholysis conditions, using ethanol as the acyl acceptor to convert triglycerides into their corresponding ethyl esters (biodiesel). The second screening was performed under glycerolysis conditions to yield monoglycerides (MG). All lipases were covalently immobilized on a silica-PVA composite. The assays were performed using babassu oil and alcohols (ethanol or glycerol) in solvent-free systems. For both substrates, the lipase from Burkholderia cepacia (lipase PS) was found to be the most suitable enzyme for attaining satisfactory yields. To further improve the process, Response Surface Methodology (RSM) was used to determine the optimal operating conditions for each biotransformation. For biodiesel production, the highest transesterification yield (>98%) was achieved within 48 h of reaction at 39 °C using an oil-to-ethanol molar ratio of 1:7. For MG production, the optimal conditions corresponded to an oil-to-glycerol molar ratio of 1:15 at 55 °C, yielding 25 wt.% MG in 6 h of reaction. These results show the potential of the B. cepacia lipase to catalyze both reactions and the feasibility of an integrated approach to biodiesel and MG production.
Abstract:
Motivation: Understanding the patterns of association between polymorphisms at different loci in a population (linkage disequilibrium, LD) is of fundamental importance in various genetic studies. Many coefficients have been proposed for measuring the degree of LD, but they provide only a static view of the current LD structure. Generative models (GMs) have been proposed to go beyond these measures, giving not only a description of the actual LD structure but also a tool to help understand the process that generated it. GMs based on coalescent theory have been the most appealing because they link LD to evolutionary factors. Nevertheless, inference and parameter estimation for such models remain computationally challenging. Results: We present a more practical method to build GMs that describe LD. The method is based on learning weighted Bayesian network structures from haplotype data, extracting equivalence structure classes, and using them to model LD. Results obtained on public data from the HapMap database show that the method is a promising tool for modeling LD. The associations represented by the learned models are correlated with the traditional LD measure D′. The method was able to represent LD blocks found by standard tools. The granularity of the association blocks and the readability of the models can be controlled in the method. The results suggest that the causality information gained by our method can be useful for assessing the conservation of genetic markers and for guiding the selection of a subset of representative markers.
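For context, the traditional LD measure D′ cited above is Lewontin's normalized coefficient, defined from the haplotype frequency p_AB and the allele frequencies (standard background, not the paper's notation):

    D = p_{AB} - p_A p_B, \qquad D' = \frac{D}{D_{\max}}, \qquad
    D_{\max} = \begin{cases} \min(p_A p_b,\; p_a p_B), & D > 0 \\ \min(p_A p_B,\; p_a p_b), & D < 0 \end{cases}

so that |D′| = 1 indicates complete LD and values near 0 indicate near-independence of the two loci.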
Abstract:
Power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a real-time solution. Both DS problems are computationally complex, and for large-scale networks the usual problem formulation has thousands of constraint equations. Node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the solution simpler. In addition, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with NDE (MEAN) is the proposed approach for solving DS problems on large-scale networks. Simulation results show that MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, MEAN exhibits sublinear running time as a function of system size. Tests with networks ranging from 632 to 5166 switches indicate that MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively little running time.
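As a rough illustration of why NDE removes constraint equations: each feeder is stored as a depth-first list of (node, depth) pairs, so every individual the EA manipulates is a spanning forest by construction, and radiality never has to be written as an explicit constraint. A minimal sketch (illustrative names; not the MEAN implementation, which also includes specialized NDE operators):

```python
# Minimal sketch of a node-depth encoding (NDE) for one radial feeder:
# the tree is stored as a list of (node, depth) pairs from a DFS traversal.
def node_depth_encode(adj, root):
    """DFS a tree given as an adjacency dict; return [(node, depth), ...]."""
    encoding, stack, seen = [], [(root, 0)], set()
    while stack:
        node, depth = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        encoding.append((node, depth))
        for nbr in adj[node]:
            if nbr not in seen:
                stack.append((nbr, depth + 1))
    return encoding

# Example: a small feeder rooted at substation bus 0.
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(node_depth_encode(adj, 0))  # e.g. [(0, 0), (2, 1), (1, 1), (3, 2)]
```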
Abstract:
We assess the performance of three unconditionally stable finite-difference time-domain (FDTD) methods for the modeling of doubly dispersive metamaterials: 1) locally one-dimensional (LOD) FDTD; 2) LOD-FDTD with Strang splitting; and 3) alternating-direction implicit (ADI) FDTD. We use both double-negative media and zero-index media as benchmarks.
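For orientation, the schemes above remove the Courant stability limit of the conventional explicit FDTD update, which in one dimension looks as follows (textbook Yee scheme in vacuum, shown only as the baseline these methods improve upon):

```python
import numpy as np

# 1D vacuum Yee scheme: explicit leapfrog update of Ez and Hy.
# The time step must satisfy the Courant condition dt <= dx/c, which is
# exactly the restriction that LOD- and ADI-FDTD remove.
c0 = 299792458.0
nx, nt = 200, 500
dx = 1e-3
dt = 0.99 * dx / c0          # Courant-limited time step

ez = np.zeros(nx)
hy = np.zeros(nx - 1)
for n in range(nt):
    hy += dt / (4e-7 * np.pi * dx) * (ez[1:] - ez[:-1])     # mu0 = 4e-7*pi
    ez[1:-1] += dt / (8.854e-12 * dx) * (hy[1:] - hy[:-1])  # eps0 = 8.854e-12
    ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)          # soft Gaussian source
```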
Abstract:
This paper presents a new approach, the predictor-corrector modified barrier approach (PCMBA), to minimize active losses in power system planning studies. In the PCMBA, the inequality constraints are transformed into equalities by introducing positive auxiliary variables, which are perturbed by the barrier parameter and treated by the modified barrier method. The first-order necessary conditions of the Lagrangian function are solved by a predictor-corrector Newton's method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, allowing the limits of the inequality constraints to be reached. The feasibility of the proposed approach is demonstrated using various IEEE test systems and a realistic 2256-bus power system corresponding to the Brazilian South-Southeastern interconnected system. The results show that using the predictor-corrector method together with the pure modified barrier approach accelerates convergence in terms of both the number of iterations and the computational time.
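As background on the transformation described above (a generic sketch with standard notation, not the paper's exact formulation): an inequality g_i(x) ≤ 0 becomes an equality g_i(x) + s_i = 0 with slack s_i > 0, and the slacks enter a Polyak-type modified barrier Lagrangian such as

    L_\mu(x, s, \lambda) = f(x) - \mu \sum_i \ln\!\left(1 + \frac{s_i}{\mu}\right) + \lambda^{\top}\big(g(x) + s\big),

whose first-order conditions are then solved by the predictor-corrector Newton method. Unlike the classical barrier term -\mu \sum_i \ln s_i, the modified term remains finite at s_i = 0, which is what allows iterates to reach the limits of the inequality constraints.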
Abstract:
This paper proposes an optimal-sensitivity approach applied to the tertiary loop of automatic generation control. The approach is based on the non-linear perturbation theorem. From an optimal operating point obtained by an optimal power flow, a new optimal operating point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operating point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVRs) of the generators are determined by the optimal sensitivity technique, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, named the power sensitivity mode. Test results are presented to show the good performance of this approach.
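The non-iterative update described above can be read as an implicit-function (first-order sensitivity) argument; in generic notation (a sketch, not the paper's exact equations): if F(z, p) = 0 collects the first-order optimality conditions at the optimum z*(p), a small load perturbation Δp gives

    \frac{\partial F}{\partial z}\,\Delta z = -\frac{\partial F}{\partial p}\,\Delta p
    \quad\Rightarrow\quad
    z^*(p + \Delta p) \approx z^* + \Delta z,

i.e., one linear solve with the already-factorized KKT matrix replaces a full iterative re-optimization.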
Abstract:
The crossflow filtration process differs from conventional filtration in that the circulation flow is tangential to the filtration surface. The conventional mathematical models used to represent the process have limitations regarding the identification and generalization of system behaviour. In this paper, a system based on artificial neural networks is developed to overcome the problems usually found in conventional mathematical models. More specifically, the developed system uses an artificial neural network that simulates the behaviour of the crossflow filtration process in a robust way. Imprecision and uncertainty associated with measurements made on the system are automatically incorporated into the neural approach. Simulation results are presented to demonstrate the validity of the proposed approach.
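As a minimal sketch of the kind of neural surrogate described above (synthetic data in place of filtration measurements; the network size, inputs, and target are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

# One-hidden-layer MLP regression, standing in for the filtration surrogate.
# Inputs would be operating conditions and the output the permeate flux.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))                  # e.g. scaled pressure, velocity
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:] + 0.05 * rng.normal(size=(200, 1))

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    pred = h @ W2 + b2                            # linear output
    err = pred - y
    # Backpropagation of the mean-squared error.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
print("final MSE:", float((err ** 2).mean()))
```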
Abstract:
The objective of this work is to present the finite element modeling of laminate composite plates with embedded piezoelectric patches or layers that are connected to active-passive resonant shunt circuits, composed of a resistance, an inductance, and a voltage source. Applications to passive vibration control and active control authority enhancement are also presented and discussed. The finite element model is based on an equivalent single-layer theory combined with a third-order shear deformation theory. A stress-voltage electromechanical model is considered for the piezoelectric materials, fully coupled to the electrical circuits. To this end, the electrical circuit equations are also included in the variational formulation; hence, conservation of charge and full electromechanical coupling are guaranteed. The formulation results in a coupled finite element model with mechanical (displacement) and electrical (charges at the electrodes) degrees of freedom. For a Graphite-Epoxy (carbon-fibre reinforced) laminate composite plate, a parametric analysis is performed to evaluate the optimal locations along the plate plane (xy) and thickness (z) that maximize the effective modal electromechanical coupling coefficient. The passive vibration control performance is then evaluated for a network of optimally located shunted piezoelectric patches embedded in the plate, through the design of the resistance and inductance values of each circuit, to reduce the vibration amplitude of the first four vibration modes; a vibration amplitude reduction of at least 10 dB was observed for all four modes. Finally, the control authority enhancement due to the resonant shunt circuit, when the piezoelectric patches are used as actuators, is analyzed. It is shown that the control authority can indeed be improved near a selected resonance, even with multiple pairs of piezoelectric patches and active-passive circuits acting simultaneously.
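In a charge-formulated model of this kind, the coupled equations of motion typically take a form such as (generic sketch with standard notation, not the paper's exact matrices)

    M\ddot{u} + K_{uu} u + K_{uq} q = F, \qquad
    L_c \ddot{q} + R_c \dot{q} + K_{qq} q + K_{qu} u = V_s,

where u are the mechanical displacements, q the charges at the electrodes, L_c and R_c the circuit inductances and resistances, V_s the voltage sources, and the off-diagonal blocks K_{uq} = K_{qu}^{\top} carry the electromechanical coupling that the modal coupling coefficient measures.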
Abstract:
The purpose of this paper is to propose a multiobjective optimization approach for solving the manufacturing cell formation problem, explicitly considering the performance of the resulting manufacturing system. Cells are formed so as to simultaneously minimize three conflicting objectives: the level of work-in-process, the intercell moves, and the total machinery investment. A genetic algorithm searches the design space in order to approximate the Pareto optimal set. The objective values of each candidate solution in a population are assigned by running a discrete-event simulation, in which the model is automatically generated according to the number of machines and their distribution among cells implied by that solution. The potential of this approach is evaluated through its application to an illustrative example and to a case from the relevant literature. The results are analyzed and reviewed, and it is concluded that the approach is capable of generating a set of alternative manufacturing cell configurations optimizing multiple performance measures, greatly improving the decision-making process involved in planning and designing cellular systems.
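The selection pressure in such a multiobjective GA rests on Pareto dominance; a minimal sketch of the test (all three objectives minimized; data and names are illustrative):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population if not any(dominates(q, p) for q in population)]

# Objective vectors: (work-in-process, intercell moves, machinery investment)
cells = [(3.0, 12, 5), (2.5, 15, 5), (3.0, 12, 4), (4.0, 10, 6)]
print(pareto_front(cells))  # (3.0, 12, 5) is dominated by (3.0, 12, 4)
```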
Abstract:
Confined flows in tubes with permeable surfaces are associated with tangential filtration processes (microfiltration or ultrafiltration). The complexity of the phenomena does not allow for the development of exact analytical solutions; however, approximate solutions are of great interest for calculating the transmembrane outflow and estimating the concentration polarization phenomenon. In the present work, the generalized integral transform technique (GITT) was employed to solve the steady laminar flow of a Newtonian, incompressible fluid in permeable tubes. The mathematical formulation employed the parabolic differential equation of chemical species conservation (the convective-diffusive equation). The velocity profiles for the entrance-region flow, which appear in the convective terms of the equation, were taken from solutions available in the literature. The velocity at the permeable wall was considered uniform, with the concentration at the tube wall regarded as varying with axial position. A computational methodology using global error control was applied to determine the wall concentration and the concentration boundary layer thickness. The results obtained for the local transmembrane flux and the concentration boundary layer thickness were compared with others in the literature.
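The parabolic species-conservation equation referred to above, for axisymmetric flow in a tube with axial diffusion neglected, is commonly written as (standard form, generic notation)

    u(r,z)\,\frac{\partial C}{\partial z} + v(r,z)\,\frac{\partial C}{\partial r}
    = \frac{D}{r}\,\frac{\partial}{\partial r}\!\left(r\,\frac{\partial C}{\partial r}\right),

with a uniform suction velocity v_w at the permeable wall r = R and a wall concentration C_w(z) varying with axial position, as described above.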
Abstract:
An accurate estimate of machining time is very important for predicting delivery time and manufacturing costs, and for helping production process planning. Most commercial CAM software systems estimate the machining time in milling operations simply by dividing the entire tool path length by the programmed feed rate. This estimate differs drastically from the real process time because the feed rate is not always constant, owing to machine and computer numerical control (CNC) limitations. This study presents a practical mechanistic method for estimating milling time when machining free-form geometries. The method considers a variable called machine response time (MRT), which characterizes a CNC machine's real capacity to move at high feed rates in free-form geometries. MRT is a global performance feature that can be obtained for any type of CNC machine configuration by carrying out a simple test. To validate the methodology, a workpiece was used to generate NC programs for five different types of CNC machines, and a practical industrial case study was also carried out. The results indicated that MRT, and consequently the real machining time, depends on the CNC machine's capabilities; furthermore, the greater the MRT, the larger the difference between predicted and real milling times. The proposed method achieved an error range from 0.3% to 12% of the real machining time, whereas the CAM estimates had errors ranging from 211% to 1244%. The MRT-based process is also suggested as an instrument to help in machine tool benchmarking.
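The baseline CAM estimate criticized above is simply tool path length divided by programmed feed rate; the sketch below contrasts it with a purely hypothetical MRT-style correction (the paper's calibrated per-machine model is not reproduced here):

```python
def cam_time_estimate(path_length_mm, feed_rate_mm_min):
    """Naive CAM estimate: total tool path length / programmed feed rate."""
    return path_length_mm / feed_rate_mm_min  # minutes

# Hypothetical MRT-style correction: each of the n_segments short moves in a
# free-form path costs extra response time. Illustrative only; the paper's
# actual model is calibrated per machine and is not reproduced here.
def mrt_time_estimate(path_length_mm, feed_rate_mm_min, n_segments, mrt_min):
    return path_length_mm / feed_rate_mm_min + n_segments * mrt_min

print(cam_time_estimate(12000, 2000))               # 6.0 min, naive
print(mrt_time_estimate(12000, 2000, 3000, 0.004))  # 18.0 min with response time
```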
Abstract:
The selection criteria for Euler-Bernoulli or Timoshenko beam theories are generally given by some deterministic rule involving beam dimensions. The Euler-Bernoulli theory is used to model the behavior of flexure-dominated (or "long") beams, while the Timoshenko theory applies to shear-dominated (or "short") beams. In the mid-length range, both theories should be equivalent, and some agreement between them would be expected. Indeed, it is shown in the paper that, for some mid-length beams, the deterministic displacement responses of the two theories agree very well. However, the article points out that the behavior of the two beam models is radically different in terms of uncertainty propagation. In the paper, some beam parameters are modeled as parameterized stochastic processes, and the two formulations are implemented and solved via a Monte Carlo-Galerkin scheme. It is shown that, for an uncertain elasticity modulus, the propagation of uncertainty to the displacement response is much larger for Timoshenko beams than for Euler-Bernoulli beams. On the other hand, the propagation of uncertainty for a random beam height is much larger for Euler-Bernoulli beam displacements. Hence, any reliability or risk analysis becomes completely dependent on the beam theory employed. The authors believe this is not widely acknowledged by the structural safety or stochastic mechanics communities.
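As a toy illustration of the uncertainty-propagation contrast (a closed-form cantilever with an end load, not the paper's Monte Carlo-Galerkin scheme): the Euler-Bernoulli tip deflection PL³/3EI scales as h⁻³ through the moment of inertia, while the Timoshenko shear correction PL/κGA scales only as h⁻¹, so a random height h propagates more strongly through the Euler-Bernoulli model:

```python
import numpy as np

# Toy Monte Carlo: tip deflection of a short cantilever with end load P under
# Euler-Bernoulli (EB) vs Timoshenko theory, with a random beam height h.
# A closed-form illustration only; the paper models parameters as stochastic
# processes along the beam and solves via a Monte Carlo-Galerkin scheme.
rng = np.random.default_rng(1)
n = 100_000
P, L, b, E, kappa, nu = 1e3, 0.3, 0.05, 210e9, 5.0 / 6.0, 0.3
G = E / (2 * (1 + nu))
h = rng.normal(0.10, 0.005, n)                 # uncertain height, 5% c.o.v.

I = b * h ** 3 / 12                            # bending stiffness scales as h**3
A = b * h                                      # shear stiffness scales as h
delta_eb = P * L ** 3 / (3 * E * I)            # bending only: sensitive to h**-3
delta_t = delta_eb + P * L / (kappa * G * A)   # adds a milder h**-1 shear term

for name, d in [("Euler-Bernoulli", delta_eb), ("Timoshenko", delta_t)]:
    print(name, "c.o.v. of tip deflection:", round(d.std() / d.mean(), 4))
```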