242 results for MONTE CARLO METHOD


Relevance: 20.00%

Publisher:

Abstract:

The power transformer is a piece of electrical equipment that needs continuous monitoring and fast protection since it is very expensive and an essential element for a power system to perform effectively. The most common protection technique used is the percentage differential logic, which provides discrimination between an internal fault and different operating conditions. Unfortunately, there are some operating conditions of power transformers that can affect the protection behavior and the power system stability. This paper proposes the development of a new algorithm to improve the differential protection performance by using fuzzy logic and Clarke's transform. An electrical power system was modeled using Alternative Transients Program (ATP) software to obtain the operational conditions and fault situations needed to test the algorithm developed. The results were compared to a commercial relay for validation, showing the advantages of the new method.
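
The percentage differential principle mentioned above can be sketched in a few lines; the slope and pickup thresholds below are illustrative values, not settings from the paper:

```python
def differential_trip(i_primary, i_secondary, slope=0.3, pickup=0.2):
    """Percentage differential logic: trip when the operating (differential)
    current exceeds a fixed pickup plus a percentage of the restraint current.
    Slope and pickup values here are illustrative, not the paper's settings."""
    i_diff = abs(i_primary - i_secondary)             # operating current
    i_rest = (abs(i_primary) + abs(i_secondary)) / 2  # restraint current
    return i_diff > pickup + slope * i_rest

# Balanced through-current (normal load or external fault): no trip
print(differential_trip(1.0, 1.0))   # False
# Internal fault: secondary-side current collapses: trip
print(differential_trip(1.0, 0.1))   # True
```

The slope term is what gives the characteristic its tolerance to measurement and tap-changer errors that grow with load current.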


Motivation: Understanding the patterns of association between polymorphisms at different loci in a population (linkage disequilibrium, LD) is of fundamental importance in various genetic studies. Many coefficients have been proposed for measuring the degree of LD, but they provide only a static view of the current LD structure. Generative models (GMs) were proposed to go beyond these measures, giving not only a description of the actual LD structure but also a tool to help understand the process that generated such structure. GMs based on coalescent theory have been the most appealing because they link LD to evolutionary factors. Nevertheless, the inference and parameter estimation of such models are still computationally challenging. Results: We present a more practical method to build GMs that describe LD. The method is based on learning weighted Bayesian network structures from haplotype data, extracting equivalence structure classes and using them to model LD. The results obtained on public data from the HapMap database showed that the method is a promising tool for modeling LD. The associations represented by the learned models are correlated with the traditional LD measure D'. The method was able to represent LD blocks found by standard tools. The granularity of the association blocks and the readability of the models can be controlled in the method. The results suggest that the causality information gained by our method can be useful for assessing the conservation of genetic markers and for guiding the selection of subsets of representative markers.
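
As a complement, the traditional LD measure D' that the learned models are compared against can be computed directly from haplotype and allele frequencies; this is the standard normalization, not code from the paper:

```python
def d_prime(p_ab, p_a, p_b):
    """Normalized linkage disequilibrium D' between two biallelic loci.
    p_ab is the AB haplotype frequency; p_a and p_b are allele frequencies.
    D = p_ab - p_a*p_b is scaled by its maximum attainable magnitude."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max > 0 else 0.0

# Complete LD: allele A always occurs with allele B
print(round(d_prime(0.3, 0.3, 0.3), 6))   # 1.0
# Linkage equilibrium: haplotype frequency equals the product of allele frequencies
print(round(d_prime(0.06, 0.2, 0.3), 6))  # ~0.0
```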


This research presents a method for frequency estimation in power systems using an adaptive filter based on the Least Mean Square (LMS) algorithm. In order to analyze a power system, three-phase voltages were converted into a complex signal by applying the alpha-beta transform, and the results were used in an adaptive filtering algorithm. Although the use of the complex LMS algorithm is described in the literature, this paper deals with some practical aspects of its implementation. In order to reduce computing time, a coefficient generator was implemented. For the algorithm validation, a computational simulation of a power system was carried out using the ATP software. Many different situations were simulated for the performance analysis of the proposed methodology. The results were compared to a commercial relay for validation, showing the advantages of the new method. (C) 2009 Elsevier Ltd. All rights reserved.
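
The chain described above (alpha-beta transform to a complex signal, then complex LMS) can be sketched as follows; the one-tap predictor, sampling rate and step size are illustrative assumptions, and the paper's coefficient generator is not reproduced:

```python
import cmath
import math

A = cmath.exp(2j * math.pi / 3)  # 120-degree rotation operator

def clarke(va, vb, vc):
    """Alpha-beta (Clarke) transform: balanced three-phase voltages become
    a single complex signal rotating at the system frequency."""
    return (2.0 / 3.0) * (va + A * vb + A * A * vc)

def estimate_frequency(samples, dt, mu=0.1):
    """One-tap complex LMS predictor. At convergence the weight approximates
    exp(2j*pi*f*dt), so the frequency is read from its angle. A generic
    sketch of the complex LMS idea, not the paper's implementation."""
    w = 1.0 + 0.0j
    for x_prev, x_cur in zip(samples, samples[1:]):
        e = x_cur - w * x_prev             # prediction error
        w += mu * e * x_prev.conjugate()   # complex LMS weight update
    return cmath.phase(w) / (2 * math.pi * dt)

# Synthetic balanced 60 Hz system sampled at 1920 Hz
dt = 1.0 / 1920.0
sig = [clarke(math.cos(th),
              math.cos(th - 2 * math.pi / 3),
              math.cos(th + 2 * math.pi / 3))
       for th in (2 * math.pi * 60 * n * dt for n in range(400))]
print(round(estimate_frequency(sig, dt), 3))  # ~60.0
```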


This paper presents a novel graphical approach to adjust and evaluate frequency-based relays employed in anti-islanding protection schemes of distributed synchronous generators, in order to meet the anti-islanding and abnormal frequency variation requirements simultaneously. The proposed method defines a region in the power mismatch space inside which the relay non-detection zone must lie if the above-mentioned requirements are to be met. This region is called the power imbalance application region. Results show that this method can help protection engineers adjust frequency-based relays to improve anti-islanding capability and minimize false operation occurrences, while keeping the utility's abnormal frequency variation requirements satisfied. Moreover, the proposed method can be employed to coordinate different types of frequency-based relays, aiming at improving the overall performance of the distributed generator frequency protection scheme. (C) 2011 Elsevier B.V. All rights reserved.
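
A minimal sketch of the frequency-drift reasoning behind such non-detection-zone mappings, using the classical swing-equation approximation; the inertia constant, generator rating and relay threshold below are illustrative, not values from the paper:

```python
def detection_time(delta_p_mw, h_s, s_mva, f0=60.0, f_trip=0.5):
    """Time for the islanded frequency to drift past a relay threshold,
    using the classical swing-equation approximation
    df/dt = (delta_p / S) * f0 / (2 * H). All parameters illustrative."""
    rate = abs(delta_p_mw) / s_mva * f0 / (2.0 * h_s)  # Hz per second
    return float("inf") if rate == 0 else f_trip / rate

# Large active-power mismatch: relay trips well within a 2 s requirement
print(round(detection_time(2.0, 1.5, 30.0), 3))   # 0.375
# Small mismatch: trip takes longer than 2 s, i.e. inside the non-detection zone
print(round(detection_time(0.3, 1.5, 30.0), 3))   # 2.5
```

Sweeping the mismatch over a grid of active/reactive imbalances and marking which points trip in time is, in spirit, how a non-detection zone is mapped in the power mismatch space.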


The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, the proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor-controlled series capacitors placed in the New England/New York benchmark test system, aiming at improving the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.


A phase-only encryption/decryption scheme with readout based on the zeroth-order phase-contrast technique (ZOPCT), without the use of a phase-changing plate on the Fourier plane of a 4f optical correlator, is proposed. The encryption of a gray-level image is achieved by multiplying the phase distribution obtained directly from the gray-level image by a random phase distribution. The robustness of the encoding is assured by the nonlinearity intrinsic to the proposed phase-contrast method and by the random phase distribution used in the encryption process. The experimental system was implemented with liquid-crystal spatial light modulators to generate phase-encrypted masks and a decrypting key. The advantage of this method is the simple scheme for recovering the gray-level information from the decrypted phase-only mask by applying the ZOPCT. An analysis of this decryption method against brute-force attacks was also performed. (C) 2009 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3223629]
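
Multiplying phase-only masks amounts to adding phase angles, so the encryption/decryption arithmetic can be sketched numerically; this illustrates only that step, not the optical ZOPCT readout, and the gray-to-phase mapping is an assumption:

```python
import math
import random

def gray_to_phase(g):
    """Map an 8-bit gray level to a phase value (illustrative [0, pi] mapping)."""
    return g / 255.0 * math.pi

def encrypt(phases, key):
    """Phase-only encryption: add a random key phase per pixel, modulo 2*pi."""
    return [(p + k) % (2 * math.pi) for p, k in zip(phases, key)]

def decrypt(cipher, key):
    """Subtracting the same key phases recovers the original phase mask."""
    return [(c - k) % (2 * math.pi) for c, k in zip(cipher, key)]

rng = random.Random(1)
image = [rng.randrange(256) for _ in range(16)]       # toy gray-level "image"
phases = [gray_to_phase(g) for g in image]
key = [rng.uniform(0, 2 * math.pi) for _ in phases]   # random phase key
recovered = decrypt(encrypt(phases, key), key)
print(max(abs(a - b) for a, b in zip(phases, recovered)) < 1e-9)  # True
```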


The taxonomy of the N₂-fixing bacteria belonging to the genus Bradyrhizobium is still poorly refined, mainly due to conflicting results obtained from the analysis of phenotypic and genotypic properties. This paper presents an application of a method aimed at identifying possible new clusters within a Brazilian collection of 119 Bradyrhizobium strains showing phenotypic characteristics of B. japonicum and B. elkanii. The stability was studied as a function of the number of restriction enzymes used in the RFLP-PCR analysis of three ribosomal regions, with three restriction enzymes per region. The method proposed here uses clustering algorithms with distances calculated by average-linkage clustering. The stability analysis is performed by introducing perturbations using sub-sampling techniques. The method showed efficacy in grouping the species B. japonicum and B. elkanii. Furthermore, two new clusters were clearly defined, indicating possible new species, as well as sub-clusters within each detected cluster. (C) 2008 Elsevier B.V. All rights reserved.
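
The average-linkage step can be sketched with a small agglomerative routine; in a stability analysis this would be re-run on sub-samples of the strains and the resulting partitions compared (toy distance matrix, not the RFLP-PCR data):

```python
import itertools

def average_linkage(dist, k):
    """Agglomerative clustering with average linkage over a full distance
    matrix (list of lists); merging stops when k clusters remain."""
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > k:
        best = None
        for a, b in itertools.combinations(range(len(clusters)), 2):
            d = sum(dist[i][j] for i in clusters[a] for j in clusters[b])
            d /= len(clusters[a]) * len(clusters[b])   # average linkage
            if best is None or d < best[0]:
                best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters[b])
        del clusters[b]
    return clusters

# Two well-separated groups: within-group distance 1, between-group distance 10
dist = [[0 if i == j else (1 if (i < 3) == (j < 3) else 10) for j in range(6)]
        for i in range(6)]
print(sorted(sorted(c) for c in average_linkage(dist, 2)))  # [[0, 1, 2], [3, 4, 5]]
```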


Oropharyngeal dysphagia is characterized by any alteration in swallowing dynamics which may lead to malnutrition and aspiration pneumonia. Early diagnosis is crucial for the prognosis of patients with dysphagia, and the best method for swallowing dynamics assessment is swallowing videofluoroscopy, an exam performed with X-rays. Because it exposes patients to radiation, videofluoroscopy should not be performed frequently nor should it be prolonged. This study presents a non-invasive method for the pre-diagnosis of dysphagia based on the analysis of the swallowing acoustics, where the discrete wavelet transform plays an important role to increase sensitivity and specificity in the identification of dysphagic patients. (C) 2008 Elsevier Inc. All rights reserved.
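
The wavelet decomposition at the core of such an approach can be illustrated with one level of the Haar DWT; the wavelet choice and the use of subband content as a feature are generic assumptions, not the paper's exact design:

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform. The detail band
    isolates high-frequency content of the kind used as a discriminating
    feature in acoustic swallowing analysis (generic sketch)."""
    r2 = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / r2
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / r2
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

approx, detail = haar_dwt([1.0, 1.0, 2.0, 2.0, 5.0, 1.0])
# Only the last sample pair carries high-frequency (detail) energy
print([round(x, 6) for x in detail])  # [0.0, 0.0, 2.828427]
```

The transform is orthogonal, so signal energy is exactly preserved across the approximation and detail bands, which makes subband energies well-defined features.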


The airflow velocities and pressures are calculated from a three-dimensional model of the human larynx by using the finite element method. The laryngeal airflow is assumed to be incompressible, isothermal, steady, and created by fixed pressure drops. The influence of different laryngeal profiles (convergent, parallel, and divergent), glottal area, and dimensions of false vocal folds in the airflow are investigated. The results indicate that vertical and horizontal phase differences in the laryngeal tissue movements are influenced by the nonlinear pressure distribution across the glottal channel, and the glottal entrance shape influences the air pressure distribution inside the glottis. Additionally, the false vocal folds increase the glottal duct pressure drop by creating a new constricted channel in the larynx, and alter the airflow vortexes formed after the true vocal folds. (C) 2007 Elsevier Ltd. All rights reserved.
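
A crude quasi-one-dimensional Bernoulli estimate can illustrate the nonlinear pressure distribution along a convergent-divergent glottal duct; the paper itself solves the three-dimensional flow with finite elements, so this is only a qualitative sketch with illustrative areas and pressures:

```python
def glottal_pressures(p_sub, areas, rho=1.14):
    """Quasi-1-D Bernoulli estimate of pressure along the glottal duct:
    the flow is fixed by the subglottal pressure drop across the narrowest
    section, then p(x) = p_sub - 0.5*rho*(Q/A(x))**2. Illustrative only."""
    a_min = min(areas)
    v_min = (2.0 * p_sub / rho) ** 0.5   # velocity at the constriction
    q = v_min * a_min                    # volumetric flow rate
    return [p_sub - 0.5 * rho * (q / a) ** 2 for a in areas]

# Convergent-divergent glottal profile (areas in m^2); pressures in Pa
p = glottal_pressures(800.0, [1e-4, 5e-5, 2e-5, 5e-5, 1e-4])
print(p)  # pressure falls nonlinearly, reaching ~0 Pa at the narrowest section
```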


The nature of the molecular structure of plastics makes the properties of such materials markedly temperature dependent. In addition, the continuous increase in the utilization of polymeric materials in many specific applications has demanded knowledge of their physical properties, both during their processing as raw material and over the working temperature range of the final polymer product. The thermal properties, namely thermal conductivity, thermal diffusivity and specific heat, are the three most important physical properties of a material needed for heat transfer calculations. Recently, among several different methods for the determination of thermal diffusivity and thermal conductivity, transient techniques have become the preferred way of measuring the thermal properties of materials. In this work, a very simple and low-cost variation of the well-known Ångström method is employed in the experimental determination of the thermal diffusivity of some selected polymers. Cylindrical samples, 3 cm in diameter and 7 cm high, were prepared by cutting from long cylindrical commercial bars. The reproducibility is very good, and the results obtained were checked against results obtained by the hot-wire and laser-flash techniques and, when possible, compared with data found in the literature. Thermal conductivity may then be derived from the thermal diffusivity with knowledge of the bulk density and the specific heat, the latter easily obtained by differential scanning calorimetry. (C) 2009 Elsevier Ltd. All rights reserved.
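
The Ångström method recovers diffusivity from the damping and phase lag of a periodic temperature wave measured at two points; a sketch with a self-consistent synthetic check (all values illustrative):

```python
import math

def angstrom_diffusivity(spacing, period, amp_ratio, phase_lag):
    """Thermal diffusivity from the Angstrom method:
    alpha = omega * L**2 / (2 * ln(A1/A2) * delta_phi),
    where L is the sensor spacing, A1/A2 the amplitude ratio and delta_phi
    the phase lag (rad) of the temperature wave between the two sensors."""
    omega = 2.0 * math.pi / period
    return omega * spacing ** 2 / (2.0 * math.log(amp_ratio) * phase_lag)

# Self-consistent check: for a 1-D damped thermal wave, both ln(A1/A2) and
# delta_phi equal k*L with k = sqrt(omega / (2*alpha)); values illustrative
alpha_true = 1.0e-7          # m^2/s, typical polymer
period, spacing = 600.0, 0.01
k = math.sqrt((2.0 * math.pi / period) / (2.0 * alpha_true))
est = angstrom_diffusivity(spacing, period, math.exp(k * spacing), k * spacing)
print(est)  # recovers ~1e-7
```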


This paper describes the manufacture of tubular ceramic membranes and the study of their performance in the demulsification of soybean oil/water emulsions. The membranes were made by the isostatic pressing method and characterized micro- and macrostructurally by SEM, mercury intrusion porosimetry and determination of apparent density and porosity. The microfiltration tests were performed on an experimental workbench, and fluid dynamic parameters such as transmembrane flux and pressure were used to evaluate the process with respect to the oil phase concentration in the permeate (analysed by TOC measurements). The results showed that the membrane with an average pore diameter of 1.36 μm achieved a higher transmembrane flux than the membrane with an average pore diameter of 0.8 μm. The volume of open pores (responsible for permeation) was predominant in the total porosity, which was higher than 50% for all tested membranes. Concerning demulsification, the monolayer membranes were effective, as the rejection coefficient was higher than 99%.
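
The two figures of merit used above, transmembrane flux and rejection coefficient, reduce to one-line formulas; the concentrations, volume and area below are illustrative, not data from the paper:

```python
def rejection_coefficient(c_feed, c_permeate):
    """Observed rejection R = 1 - C_permeate/C_feed (e.g. oil content by TOC)."""
    return 1.0 - c_permeate / c_feed

def transmembrane_flux(volume_l, area_m2, time_h):
    """Permeate flux J = V / (A * t), in L/(m^2 h)."""
    return volume_l / (area_m2 * time_h)

# A permeate at 0.8 mg/L from a 100 mg/L feed clears the 99% rejection mark
print(rejection_coefficient(100.0, 0.8) > 0.99)  # True
print(transmembrane_flux(2.0, 0.05, 1.0))        # 40.0 L/(m^2 h)
```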


This paper describes the manufacture of tubular UF and MF porous and supported ceramic membranes for the demulsification of oil/water emulsions. For this purpose, rigorous control was exercised over the pore size distribution. Suspensions of 30 vol.% solids (zirconia or alumina powder and sucrose) and 70 vol.% liquids (isopropyl alcohol and PVB) were prepared in a jar mill, varying the milling time of the sucrose particles according to the expected pore size. The membranes were prepared by the isostatic pressing method and structurally characterized by SEM, mercury intrusion porosimetry and immersion weight measurements. The morphological characterization identified the formation of porous zirconia and alumina membranes and supported membranes. The mercury intrusion porosimetry results showed an average pore size of 1.8 μm for the microfiltration porous membranes and, for the ultrafiltration supported membranes, pores with an average size of 0.01-0.03 μm in the top layer and 1.8 μm in the support. By means of the manufacturing method applied, it was possible to produce ultra- and microfiltration membranes with high potential for application in the separation of oil/water emulsions. (C) 2011 Elsevier Ltd and Techna Group S.r.l. All rights reserved.


The machining of hardened steels has always been a great challenge in metal cutting, particularly for drilling operations. Generally, drilling is the machining process that is most difficult to cool due to the tool's geometry. The aim of this work is to determine the heat flux and the coefficient of convection in drilling using the inverse heat conduction method. Temperature was assessed during the drilling of hardened AISI H13 steel using the embedded thermocouple technique. Dry machining and two cooling/lubrication systems were used, and thermocouples were fixed at distances very close to the hole's wall. Tests were replicated for each condition and were carried out with new and worn drills. An analytical heat conduction model was used to calculate the temperature at the tool-workpiece interface and to determine the heat flux and the coefficient of convection. In all tests with new and worn drills, the lowest temperatures and the greatest decrease in heat flux were observed with the flooded system, followed by the MQL (minimum quantity lubrication) system, taking the dry condition as reference. The decrease in temperature was directly proportional to the amount of lubricant applied and was significant for the MQL system when compared to dry cutting. (C) 2011 Elsevier Ltd. All rights reserved.
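
The inverse heat conduction idea can be sketched in one dimension: with a linear conduction model, the surface heat flux follows from measured temperatures by least squares. The semi-infinite-solid model and all material values below are illustrative, not the paper's drilling geometry:

```python
import math

def unit_flux_response(x, t, k, alpha):
    """Temperature at depth x and time t of a semi-infinite solid under a
    constant unit surface heat flux (classical conduction solution)."""
    s = math.sqrt(alpha * t)
    return (2.0 * s / math.sqrt(math.pi) * math.exp(-x * x / (4.0 * alpha * t))
            - x * math.erfc(x / (2.0 * s))) / k

def estimate_flux(temps, times, x, k, alpha):
    """Linear least-squares inverse estimate of the surface heat flux from
    temperatures measured at depth x (one-dimensional sketch only)."""
    basis = [unit_flux_response(x, t, k, alpha) for t in times]
    return sum(T * b for T, b in zip(temps, basis)) / sum(b * b for b in basis)

# Synthetic check: thermocouple 1 mm deep in steel (k=40 W/mK, alpha=1e-5 m^2/s)
times = [0.5 * i for i in range(1, 11)]
q_true = 5.0e5  # W/m^2
temps = [q_true * unit_flux_response(0.001, t, 40.0, 1e-5) for t in times]
print(estimate_flux(temps, times, 0.001, 40.0, 1e-5))  # recovers ~5e5
```

With noisy measurements the same projection acts as a regularizing average over the time window, which is the practical appeal of linear inverse formulations.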


This paper presents an accurate and efficient solution for the random transverse and angular displacement fields of uncertain Timoshenko beams. Approximate numerical solutions are obtained using the Galerkin method and chaos polynomials. The Chaos-Galerkin scheme is constructed by respecting the theoretical conditions for existence and uniqueness of the solution. Numerical results show fast convergence to the exact solution, with excellent accuracy. The developed Chaos-Galerkin scheme accurately approximates the complete cumulative distribution function of the displacement responses, making it a theoretically sound and efficient method for the solution of stochastic problems in engineering. (C) 2011 Elsevier Ltd. All rights reserved.
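
The polynomial-chaos machinery can be illustrated with a scalar response whose Hermite-chaos coefficients are known in closed form; this shows how statistics follow from the expansion coefficients, not the paper's beam problem:

```python
import math

def lognormal_pce_coeffs(delta, order):
    """Hermite (probabilists') polynomial-chaos coefficients of the random
    response Y = exp(delta * xi), xi ~ N(0, 1), known in closed form:
    c_k = exp(delta**2 / 2) * delta**k / k!  (illustrative response)."""
    c0 = math.exp(delta ** 2 / 2.0)
    return [c0 * delta ** k / math.factorial(k) for k in range(order + 1)]

def pce_moments(coeffs):
    """Moments implied by a Hermite-chaos expansion:
    mean = c_0, variance = sum_{k>=1} k! * c_k**2 (He_k orthogonality)."""
    var = sum(math.factorial(k) * c ** 2 for k, c in enumerate(coeffs) if k > 0)
    return coeffs[0], var

mean, var = pce_moments(lognormal_pce_coeffs(0.1, 6))
exact_var = math.exp(0.1 ** 2) * (math.exp(0.1 ** 2) - 1.0)  # lognormal variance
print(abs(var - exact_var) / exact_var < 1e-9)  # True: fast convergence in order
```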


The selection criteria for Euler-Bernoulli or Timoshenko beam theories are generally given by means of some deterministic rule involving beam dimensions. The Euler-Bernoulli beam theory is used to model the behavior of flexure-dominated (or "long") beams. The Timoshenko theory applies to shear-dominated (or "short") beams. In the mid-length range, both theories should be equivalent, and some agreement between them would be expected. Indeed, it is shown in the paper that, for some mid-length beams, the deterministic displacement responses of the two theories agree very well. However, the article points out that the behavior of the two beam models is radically different in terms of uncertainty propagation. In the paper, some beam parameters are modeled as parameterized stochastic processes. The two formulations are implemented and solved via a Monte Carlo-Galerkin scheme. It is shown that, for an uncertain elasticity modulus, propagation of uncertainty to the displacement response is much larger for Timoshenko beams than for Euler-Bernoulli beams. On the other hand, propagation of uncertainty for random beam height is much larger for Euler-Bernoulli beam displacements. Hence, any reliability or risk analysis becomes completely dependent on the beam theory employed. The authors believe this is not widely acknowledged by the structural safety or stochastic mechanics communities. (C) 2010 Elsevier Ltd. All rights reserved.
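
The height-versus-modulus sensitivity contrast can be checked with a quick Monte Carlo on the simplest Euler-Bernoulli case, a cantilever tip deflection (load and section values illustrative, not the paper's beams): since the deflection scales as 1/E but as 1/h^3, equal input CoVs propagate very differently.

```python
import math
import random
import statistics

def tip_deflection(p, length, e_mod, width, height):
    """Euler-Bernoulli cantilever tip deflection: delta = P*L^3/(3*E*I),
    with I = b*h^3/12."""
    inertia = width * height ** 3 / 12.0
    return p * length ** 3 / (3.0 * e_mod * inertia)

def deflection_cov(n, cov_e=0.0, cov_h=0.0, seed=42):
    """Monte Carlo coefficient of variation of the deflection when E and/or
    the height h carry lognormal uncertainty with the given CoVs."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        e = 200e9 * math.exp(cov_e * rng.gauss(0.0, 1.0))
        h = 0.10 * math.exp(cov_h * rng.gauss(0.0, 1.0))
        samples.append(tip_deflection(1e3, 2.0, e, 0.05, h))
    return statistics.stdev(samples) / statistics.fmean(samples)

# The same 5% input uncertainty propagates ~3x more strongly through h than E
print(round(deflection_cov(20000, cov_e=0.05), 3))  # ~0.05
print(round(deflection_cov(20000, cov_h=0.05), 3))  # ~0.15
```

This is the Euler-Bernoulli half of the comparison only; reproducing the Timoshenko contrast would additionally require the shear-deformation term.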