969 results for Computational method
Abstract:
The adsorption kinetics of phosphate onto Nb₂O₅·nH₂O were investigated at initial phosphate concentrations of 10 and 50 mg L⁻¹. The kinetic process was well described by a pseudo-second-order rate model. The adsorption thermodynamics were studied at 298, 308, 318, 328 and 338 K. The positive values of both ΔH and ΔS suggest an endothermic reaction and an increase in randomness at the solid-liquid interface during adsorption. The ΔG values obtained were negative, indicating a spontaneous adsorption process. The Langmuir model described the data better than the Freundlich isotherm model. The peak appearing at 1050 cm⁻¹ in the IR spectra after adsorption was attributed to the bending vibration of adsorbed phosphate. Effective desorption could be achieved using water at pH 12. © 2010 Elsevier B.V. All rights reserved.
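The pseudo-second-order model mentioned above is usually fitted through its linearised form, t/qₜ = 1/(k₂qe²) + t/qe, where the slope gives qe and the intercept gives k₂. A minimal sketch of such a fit, using purely illustrative data (not values from the paper):

```python
def fit_pseudo_second_order(t, q):
    """Fit t/q = 1/(k2*qe**2) + t/qe by least squares; return (qe, k2)."""
    y = [ti / qi for ti, qi in zip(t, q)]
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    slope = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) \
          / sum((ti - mt) ** 2 for ti in t)
    intercept = my - slope * mt
    qe = 1.0 / slope                 # equilibrium capacity, mg g^-1
    k2 = slope ** 2 / intercept      # rate constant, g mg^-1 min^-1
    return qe, k2

# Illustrative data generated from qe = 20 mg g^-1, k2 = 0.01 g mg^-1 min^-1,
# using q(t) = k2*qe^2*t / (1 + k2*qe*t)
t = [5, 10, 20, 40, 60, 90, 120]
q = [4 * ti / (1 + 0.2 * ti) for ti in t]
qe, k2 = fit_pseudo_second_order(t, q)
```

Since the synthetic data follow the model exactly, the fit recovers qe = 20 and k₂ = 0.01 to machine precision; real kinetic data would show scatter around the regression line.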
Abstract:
Titanium and its alloys have been used in dentistry due to their excellent corrosion resistance and biocompatibility. It has been shown that even pure titanium and its alloys spontaneously form a bone-like apatite layer on their surfaces within a living body. The purpose of this work was to evaluate the growth of calcium phosphates on the surface of the experimental alloy Ti-7.5Mo. We produced ingots from pure titanium and molybdenum using an arc-melting furnace. We then submitted these ingots to heat treatment at 1100 °C for one hour, cooled the samples in water, and cold-worked the cooled material by swaging and machining. We measured the mean roughness (Ra) with a roughness meter (1.3 and 2.6 μm) and cut discs (13 mm in diameter and 4 mm in thickness) from each sample group. The samples were treated by biomimetic methods for 7 or 14 days to form an apatite coating on the surface. We then characterized the surfaces with an optical profilometer, a scanning electron microscope and contact angle measurements. The results of this study indicate that apatite can form on the surface of a Ti-7.5Mo alloy, and that a more complete apatite layer formed on the Ra = 2.6 μm material. This increased apatite formation resulted in a lower contact angle. © 2010 Elsevier B.V. All rights reserved.
Abstract:
A type of Nb₂O₅·3H₂O was synthesized and its phosphate removal potential was investigated in this study. The kinetics, adsorption isotherm, pH effect, thermodynamics and desorption were examined in batch experiments. The kinetic process was well described by a pseudo-second-order rate model. Phosphate adsorption tended to increase with decreasing pH. The adsorption data fitted the Langmuir model well, from which the maximum P adsorption capacity was estimated to be 18.36 mg P g⁻¹. The peak appearing at 1050 cm⁻¹ in the IR spectra after adsorption was attributed to the bending vibration of adsorbed phosphate. The positive values of both ΔH° and ΔS° suggest an endothermic reaction and an increase in randomness at the solid-liquid interface during adsorption. The ΔG° values obtained were negative, indicating a spontaneous adsorption process. A phosphate desorbability of approximately 68% was observed with water at pH 12, which indicates relatively strong bonding between the adsorbed phosphate and the sorptive sites on the surface of the adsorbent. The immobilization of phosphate probably occurs through ion exchange and physicochemical attraction. Due to its high adsorption capacity, this type of hydrous niobium oxide has potential for application in controlling phosphorus pollution.
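The Langmuir capacity quoted above is typically obtained from the linearised isotherm Ce/qe = Ce/qmax + 1/(qmax·b). A hedged sketch of that fit; the equilibrium data below are invented for illustration (only the qmax value is taken from the abstract):

```python
def fit_langmuir(ce, qe):
    """Fit Ce/qe = Ce/qmax + 1/(qmax*b); return (qmax, b)."""
    y = [c / q for c, q in zip(ce, qe)]
    n = len(ce)
    mc, my = sum(ce) / n, sum(y) / n
    slope = sum((c - mc) * (yi - my) for c, yi in zip(ce, y)) \
          / sum((c - mc) ** 2 for c in ce)
    intercept = my - slope * mc
    qmax = 1.0 / slope          # maximum capacity, mg g^-1
    b = slope / intercept       # Langmuir constant, L mg^-1
    return qmax, b

# Illustrative equilibrium points generated from qmax = 18.36 mg g^-1
# and an assumed b = 0.5 L mg^-1 (b is not reported in the abstract)
ce = [1.0, 2.0, 5.0, 10.0, 20.0]
qe = [18.36 * 0.5 * c / (1 + 0.5 * c) for c in ce]
qmax, b = fit_langmuir(ce, qe)
```

On exact model data the regression returns the generating parameters; with experimental scatter, the Langmuir and Freundlich fits would be compared through their correlation coefficients, as in the study.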
Abstract:
Optical monitoring systems are necessary to manufacture multilayer thin-film optical filters with tight tolerances on the spectrum specification. Furthermore, for better accuracy in measuring film thickness, direct monitoring is a must. Direct monitoring means acquiring spectrum data, in real time, from the optical component undergoing the film deposition itself. The high-vacuum evaporation chamber is the most common equipment for depositing films on the surfaces of optical components. Inside the evaporator, at the top of the chamber, there is a metallic support with several holes in which the optical components are mounted. This support rotates to promote film homogenization. To measure the spectrum of the film being deposited, a light beam must be passed through a glass witness undergoing the deposition process and a sample of the beam collected with a spectrometer. As both the light source and the light collector are stationary, a synchronization system is required to identify the moment at which the optical component passes through the light beam.
Abstract:
Motivation: Understanding the patterns of association between polymorphisms at different loci in a population (linkage disequilibrium, LD) is of fundamental importance in various genetic studies. Many coefficients have been proposed for measuring the degree of LD, but they provide only a static view of the current LD structure. Generative models (GMs) have been proposed to go beyond these measures, giving not only a description of the actual LD structure but also a tool for understanding the process that generated it. GMs based on coalescent theory have been the most appealing because they link LD to evolutionary factors. Nevertheless, inference and parameter estimation for such models remain computationally challenging. Results: We present a more practical method to build GMs that describe LD. The method is based on learning weighted Bayesian network structures from haplotype data, extracting equivalence structure classes and using them to model LD. The results obtained on public data from the HapMap database show that the method is a promising tool for modeling LD. The associations represented by the learned models are correlated with the traditional LD measure D′. The method was able to represent LD blocks found by standard tools. The granularity of the association blocks and the readability of the models can be controlled in the method. The results suggest that the causality information gained by our method can be useful for assessing the conservation of genetic markers and for guiding the selection of a subset of representative markers.
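The D′ coefficient the learned models are compared against can be computed directly from haplotype and allele frequencies at two biallelic loci. A minimal sketch of the textbook definition (not code from the paper); the frequencies in the usage lines are invented:

```python
def d_prime(p_ab, p_a, p_b):
    """Lewontin's D' for two biallelic loci.

    p_ab: frequency of the A-B haplotype; p_a, p_b: marginal allele
    frequencies of A and B. Returns D normalised by its extreme value."""
    d = p_ab - p_a * p_b                 # raw disequilibrium coefficient
    if d == 0:
        return 0.0
    if d > 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max

# Complete LD: every copy of allele B sits on an A haplotype
complete = d_prime(0.5, 0.6, 0.5)
# Partial LD with the same allele frequencies
partial = d_prime(0.4, 0.6, 0.5)
```

With the first set of frequencies D′ reaches 1 (one of the four haplotypes is absent); the second gives an intermediate value, which is the regime where block-detection tools disagree most.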
Abstract:
This research presents a method for frequency estimation in power systems using an adaptive filter based on the least mean squares (LMS) algorithm. To analyze a power system, three-phase voltages were converted into a complex signal by applying the αβ-transform, and the result was used in an adaptive filtering algorithm. Although the use of the complex LMS algorithm is described in the literature, this paper deals with some practical aspects of its implementation. To reduce computing time, a coefficient generator was implemented. For validation, a power system was simulated using the ATP software. Many different situations were simulated to analyze the performance of the proposed methodology. The results were compared against a commercial relay, showing the advantages of the new method. © 2009 Elsevier Ltd. All rights reserved.
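The combination of the αβ-transform with a complex LMS filter can be illustrated with a one-tap predictor: for a balanced system the transform yields a rotating phasor v[n], the weight of the predictor v[n+1] ≈ w·v[n] converges to e^(jω₀T), and the frequency follows from its phase. This is a hedged sketch under those assumptions; the signal, step size and sampling rate are invented, and the paper's coefficient generator and relay comparison are not reproduced:

```python
import cmath
import math

def estimate_frequency(va, vb, vc, fs, mu=0.1):
    """Estimate the fundamental frequency (Hz) of balanced three-phase samples.

    The alpha-beta (Clarke) transform builds a complex phasor; a one-tap
    complex LMS predictor v[n+1] ~ w*v[n] then converges to w = exp(j*w0*T).
    """
    v = []
    for a, b, c in zip(va, vb, vc):
        alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
        beta = (2.0 / 3.0) * (math.sqrt(3) / 2.0) * (b - c)
        v.append(complex(alpha, beta))
    w = complex(1.0, 0.0)
    for n in range(len(v) - 1):
        e = v[n + 1] - w * v[n]          # prediction error
        w += mu * e * v[n].conjugate()   # complex LMS update
    return cmath.phase(w) * fs / (2.0 * math.pi)

# Balanced 60 Hz system sampled at 1920 Hz (32 samples per cycle)
fs, f0, n_samp = 1920.0, 60.0, 240
theta = [2.0 * math.pi * f0 * n / fs for n in range(n_samp)]
va = [math.cos(t) for t in theta]
vb = [math.cos(t - 2.0 * math.pi / 3.0) for t in theta]
vc = [math.cos(t + 2.0 * math.pi / 3.0) for t in theta]
f_hat = estimate_frequency(va, vb, vc, fs)
```

For this noiseless unit-amplitude phasor the weight error shrinks by a factor (1 − μ) per sample, so a few cycles suffice for convergence; noisy or unbalanced signals would need a smaller μ and the practical refinements the paper discusses.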
Abstract:
This paper presents a novel graphical approach to adjust and evaluate frequency-based relays employed in anti-islanding protection schemes of distributed synchronous generators, in order to meet anti-islanding and abnormal frequency variation requirements simultaneously. The proposed method defines a region in the power-mismatch space inside which the relay non-detection zone should be located if the above requirements are to be met. This region is called the power imbalance application region. Results show that the method can help protection engineers adjust frequency-based relays to improve anti-islanding capability and minimize false operation, while keeping the utility's abnormal frequency variation requirements satisfied. Moreover, the proposed method can be employed to coordinate different types of frequency-based relays, aiming to improve the overall performance of the distributed generator frequency protection scheme. © 2011 Elsevier B.V. All rights reserved.
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires significant knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, the proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor-controlled series capacitors placed in the New England/New York benchmark test system, aiming at improving the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
Abstract:
The taxonomy of the N₂-fixing bacteria belonging to the genus Bradyrhizobium is still poorly refined, mainly due to conflicting results obtained from the analysis of phenotypic and genotypic properties. This paper presents the application of a method aimed at identifying possible new clusters within a Brazilian collection of 119 Bradyrhizobium strains showing phenotypic characteristics of B. japonicum and B. elkanii. Stability was studied as a function of the number of restriction enzymes used in the RFLP-PCR analysis of three ribosomal regions, with three restriction enzymes per region. The method proposed here uses clustering algorithms with distances calculated by average-linkage clustering. The stability analysis is carried out by introducing perturbations via sub-sampling techniques. The method proved effective in grouping the species B. japonicum and B. elkanii. Furthermore, two new clusters were clearly defined, indicating possible new species, as well as sub-clusters within each detected cluster. © 2008 Elsevier B.V. All rights reserved.
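Average-linkage agglomerative clustering, the distance rule mentioned above, repeatedly merges the two clusters with the smallest mean pairwise distance. A minimal sketch; the toy distance matrix is invented, not strain data:

```python
def average_linkage(dist, n_clusters):
    """Agglomerative clustering with average linkage.

    dist: symmetric distance matrix (list of lists); merges the closest
    pair of clusters (by mean pairwise distance) until n_clusters remain."""
    clusters = [[i] for i in range(len(dist))]

    def link(c1, c2):
        return sum(dist[i][j] for i in c1 for j in c2) / (len(c1) * len(c2))

    while len(clusters) > n_clusters:
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: link(clusters[ab[0]], clusters[ab[1]]))
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

# Toy data: items 0,1 are close, 2,3 are close, and the groups are far apart
d = [[0, 1, 10, 10],
     [1, 0, 10, 10],
     [10, 10, 0, 1],
     [10, 10, 1, 0]]
groups = average_linkage(d, 2)
```

Stability in the spirit of the paper can then be probed by re-running the clustering on random sub-samples of the items and checking how often the same groups recur.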
Abstract:
This paper presents a new approach, the predictor-corrector modified barrier approach (PCMBA), to minimize active losses in power system planning studies. In the PCMBA, the inequality constraints are transformed into equalities by introducing positive auxiliary variables, which are perturbed by the barrier parameter and treated by the modified barrier method. The first-order necessary conditions of the Lagrangian function are solved by the predictor-corrector Newton's method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, reaching the limits of the inequality constraints. The feasibility of the proposed approach is demonstrated on various IEEE test systems and on a realistic 2256-bus power system corresponding to the Brazilian South-Southeastern interconnected system. The results show that combining the predictor-corrector method with the pure modified barrier approach accelerates convergence in terms of both the number of iterations and the computational time. © 2008 Elsevier B.V. All rights reserved.
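The barrier idea underlying the PCMBA can be illustrated on a scalar toy problem with a classical logarithmic barrier (a simplification: this is not the modified barrier function nor the predictor-corrector steps of the paper). Minimize (x − 3)² subject to x ≤ 1 by solving a sequence of unconstrained Newton subproblems while shrinking the barrier parameter μ:

```python
def log_barrier_minimize(mu0=1.0, shrink=0.2, tol=1e-8):
    """Minimise f(x) = (x - 3)**2 subject to x <= 1 with a log barrier.

    Each outer iteration solves min (x-3)**2 - mu*log(1 - x) by damped
    Newton steps, then shrinks mu; the iterates approach x* = 1 from the
    strictly feasible side."""
    x, mu = 0.0, mu0
    while mu > tol:
        for _ in range(100):
            g = 2.0 * (x - 3.0) + mu / (1.0 - x)      # gradient of barrier fn
            if abs(g) < 1e-12:
                break
            h = 2.0 + mu / (1.0 - x) ** 2             # Hessian (positive)
            step = g / h
            while x - step >= 1.0:                    # damp to stay feasible
                step *= 0.5
            x -= step
        mu *= shrink
    return x

x_opt = log_barrier_minimize()
```

The constrained optimum is x* = 1; the barrier iterates converge to it from inside the feasible set. The modified barrier method of the paper relaxes this scheme so the boundary itself can be reached, and the predictor-corrector step accelerates the Newton solves.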
Abstract:
The main purpose of this paper is to present the architecture of an automated system that allows monitoring and tracking, in real time (online), the possible occurrence of faults and electromagnetic transients observed in primary power distribution networks. By interconnecting this automated system with the utility operation center, it will be possible to provide an efficient tool to assist the operation center in decision-making. In short, the aim is to have all the tools necessary to identify, almost instantaneously, the occurrence of faults and transient disturbances in the primary power distribution system, as well as to determine their respective origins and probable locations. The compiled results from the application of this automated system show that the developed techniques provide accurate results, identifying and locating several occurrences of faults observed in the distribution system.
Abstract:
The nature of the molecular structure of plastics makes the properties of such materials markedly temperature dependent. In addition, the continuous increase in the use of polymeric materials in many specific applications has demanded knowledge of their physical properties, both during their processing as raw material and over the working temperature range of the final polymer product. Thermal conductivity, thermal diffusivity and specific heat, namely the thermal properties, are the three most important physical properties of a material needed for heat transfer calculations. Recently, among several different methods for determining thermal diffusivity and thermal conductivity, transient techniques have become the preferred way of measuring the thermal properties of materials. In this work, a very simple and low-cost variation of the well-known Ångström method is employed in the experimental determination of the thermal diffusivity of selected polymers. Cylindrical samples, 3 cm in diameter and 7 cm high, were prepared by cutting from long cylindrical commercial bars. The reproducibility is very good, and the results were checked against results obtained by the hot-wire and laser-flash techniques and, when possible, compared with data found in the literature. Thermal conductivity may then be derived from the thermal diffusivity given the bulk density and the specific heat, the latter easily obtained by differential scanning calorimetry. © 2009 Elsevier Ltd. All rights reserved.
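In the classical Ångström method, a temperature oscillation of angular frequency ω travelling along the sample decays in amplitude and lags in phase between two probes a distance Δx apart; combining the amplitude ratio A₁/A₂ with the phase lag Δφ gives α = ωΔx² / (2 ln(A₁/A₂) Δφ). A minimal sketch of that formula with invented probe readings (not measurements from the paper):

```python
import math

def angstrom_diffusivity(period_s, dx_m, amp_ratio, phase_lag_rad):
    """Thermal diffusivity (m^2/s) from the Angstrom two-probe formula:
    alpha = omega * dx**2 / (2 * ln(A1/A2) * dphi)."""
    omega = 2.0 * math.pi / period_s
    return omega * dx_m ** 2 / (2.0 * math.log(amp_ratio) * phase_lag_rad)

# Invented readings: 600 s heating period, probes 2 cm apart. For an ideal
# lossless (semi-infinite) sample the log amplitude ratio equals the phase
# lag, both being k*dx with k = sqrt(omega / (2*alpha)).
alpha = angstrom_diffusivity(600.0, 0.02, math.exp(4.576), 4.576)
```

The chosen numbers correspond to a diffusivity of about 1 × 10⁻⁷ m²/s, a typical order of magnitude for polymers; lateral heat losses in a real rod make ln(A₁/A₂) and Δφ differ, which is why the combined product form of the formula is preferred.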
Abstract:
This paper deals with the traditional permutation flow shop scheduling problem with the objective of minimizing mean flowtime, thereby reducing in-process inventory. A new heuristic method is proposed for solving the scheduling problem. The proposed heuristic is compared with the best one reported in the literature. Experimental results show that the new heuristic provides better solutions in terms of both solution quality and computational effort.
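In a permutation flow shop the objective above is evaluated with the standard completion-time recursion C(j, m) = max(C(j−1, m), C(j, m−1)) + p(j, m), and the mean flowtime is the average completion time on the last machine (with all jobs released at time zero). A small sketch with invented processing times; the paper's heuristic itself is not reproduced:

```python
def mean_flowtime(sequence, p):
    """Mean flowtime of a permutation flow shop schedule.

    sequence: job order; p[j][m]: processing time of job j on machine m.
    All jobs are assumed available at time zero."""
    machines = len(p[0])
    prev = [0.0] * machines          # completion times of the previous job
    total = 0.0
    for j in sequence:
        cur = [0.0] * machines
        for m in range(machines):
            ready = cur[m - 1] if m > 0 else 0.0
            cur[m] = max(prev[m], ready) + p[j][m]
        prev = cur
        total += cur[-1]             # flowtime = completion on last machine
    return total / len(sequence)

# Two jobs on two machines: sequencing the job with the shorter first
# operation first yields the lower mean flowtime here
p = [[1, 2], [2, 1]]
best = mean_flowtime([0, 1], p)      # 3.5
worst = mean_flowtime([1, 0], p)     # 4.0
```

Heuristics for this problem explore the n! permutations by constructive insertion or local search, using exactly this evaluation as the scoring function.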
Abstract:
This article presents an extensive investigation carried out in two technology-based companies of the São Carlos technological pole in Brazil. Based on this multiple case study and a literature review, a method applying agile project management (APM) principles, hereafter called IVPM2, was developed. After the method was implemented, a qualitative evaluation was carried out through document analysis and a questionnaire. This article shows that the application of this method at the companies under investigation demonstrated the benefits of combining simple, iterative, visual and agile techniques for planning and controlling innovative product projects with traditional project management best practices, such as standardization.
Abstract:
In this paper, the Galerkin method and the Askey-Wiener scheme are used to obtain approximate solutions to the stochastic displacement response of Kirchhoff plates with uncertain parameters. Theoretical and numerical results are presented. The Lax-Milgram lemma is used to express the conditions for existence and uniqueness of the solution. Uncertainties in plate and foundation stiffness are modeled respecting these conditions, hence using Legendre polynomials indexed in uniform random variables. The space of approximate solutions is built using results on density between the space of continuous functions and Sobolev spaces. Approximate Galerkin solutions are compared with results of Monte Carlo simulation, in terms of first- and second-order moments and histograms of the displacement response. Numerical results for two example problems show very fast convergence to the exact solution, with excellent accuracy. The Askey-Wiener Galerkin scheme developed herein is able to reproduce the histogram of the displacement response, and is shown to be a theoretically sound and efficient method for the solution of stochastic problems in engineering. © 2009 Elsevier Ltd. All rights reserved.
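The flavour of comparing a spectral solution with Monte Carlo can be shown on a scalar caricature: a spring of uncertain stiffness k(ξ) = k₀(1 + εξ), with ξ uniform on [−1, 1], loaded by a force F, so the displacement is u = F/k(ξ). This one-dimensional sketch uses Gauss-Legendre quadrature in place of the Askey-Wiener Galerkin projection, and all numbers are invented:

```python
import math
import random

# 5-point Gauss-Legendre nodes and weights on [-1, 1]
GL_NODES = [-0.9061798459386640, -0.5384693101056831, 0.0,
            0.5384693101056831, 0.9061798459386640]
GL_WEIGHTS = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
              0.4786286704993665, 0.2369268850561891]

F, K0, EPS = 1.0, 100.0, 0.2

def u(xi):
    """Displacement of a spring with uncertain stiffness k = K0*(1 + EPS*xi)."""
    return F / (K0 * (1.0 + EPS * xi))

# First two moments by quadrature (xi ~ Uniform(-1, 1) has density 1/2)
mean_q = sum(w * u(x) for x, w in zip(GL_NODES, GL_WEIGHTS)) / 2.0
m2_q = sum(w * u(x) ** 2 for x, w in zip(GL_NODES, GL_WEIGHTS)) / 2.0

# Crude Monte Carlo estimate of the mean, for comparison
random.seed(0)
samples = [u(random.uniform(-1.0, 1.0)) for _ in range(100_000)]
mean_mc = sum(samples) / len(samples)

# Closed form for the mean: F/(2*K0*EPS) * ln((1+EPS)/(1-EPS))
mean_exact = F / (2.0 * K0 * EPS) * math.log((1.0 + EPS) / (1.0 - EPS))
```

The five-point quadrature reproduces the exact mean to near machine precision, while 10⁵ Monte Carlo samples agree only to a few significant digits; this precision-per-evaluation gap is the motivation for spectral stochastic methods such as the one in the paper.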