228 results for distribution (probability theory)
Abstract:
Power distribution automation and control are important tools in today's restructured electricity markets. Unfortunately, owing to their stochastic nature, faults in distribution systems are hardly avoidable. This paper proposes a novel fault diagnosis scheme for power distribution systems, composed of three processes: fault detection and classification, fault location, and fault section determination. The fault detection and classification technique is wavelet based. The fault-location technique is impedance based and uses local voltage and current fundamental phasors. The fault section determination method is based on an artificial neural network and uses the local current and voltage signals to estimate the faulted section. The proposed hybrid scheme was validated through Alternative Transients Program/Electromagnetic Transients Program (ATP/EMTP) simulations and was implemented as embedded software. It is currently used as a fault diagnosis tool by a power distribution company in southern Brazil.
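The abstract does not detail the wavelet-based detection step. As a minimal illustration of the underlying idea, a first-level Haar detail transform makes the high-frequency content of a fault transient stand out against the fundamental-frequency load current; the sampling rate, threshold, and synthetic waveform below are assumptions, not the paper's scheme:

```python
import numpy as np

def haar_detail(signal):
    """First-level Haar wavelet detail coefficients of a 1-D signal."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - len(x) % 2                  # truncate to an even length
    pairs = x[:n].reshape(-1, 2)
    return (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)

def detect_fault(current, threshold):
    """Return the index of the first sample pair whose detail coefficient
    exceeds the threshold, or None if the waveform looks healthy."""
    d = np.abs(haar_detail(current))
    idx = np.argmax(d > threshold)           # first True, or 0 if none
    return int(idx) if d[idx] > threshold else None

# A clean 60 Hz current suddenly gains a high-frequency fault transient.
t = np.arange(0, 0.1, 1 / 3840)              # 64 samples per cycle
i_load = np.sin(2 * np.pi * 60 * t)
i_load[192:] += 0.8 * np.sin(2 * np.pi * 960 * t[192:])

print(detect_fault(i_load, 0.1))             # → 96 (pair covering sample 192)
```

For the pure 60 Hz sine, consecutive-sample differences stay below the threshold; the 960 Hz transient pushes them well above it, so the detector fires at the first pair containing the disturbance.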
Abstract:
This study presents further improvements to the fault-location problem for power distribution systems. The improvements concern accounting for the capacitive effect in impedance-based fault-location methods by adopting an exact line-segment model for the distribution line. The proposed developments consist of a new formulation of the fault-location problem and a new algorithm that considers the line shunt admittance matrix. The equations are developed for any fault type and result in a single equation for all ground fault types and another for line-to-line faults. Results obtained with the proposed improvements are presented. In addition, to benchmark their performance and demonstrate how the line shunt admittance affects state-of-the-art impedance-based fault-location methodologies for distribution systems, results obtained with two existing methods are also presented. The comparison shows that, in overhead distribution systems with laterals and intermediate loads, the line shunt admittance can significantly affect the response of state-of-the-art methodologies, and that the proposed developments achieve substantial gains by accounting for this effect.
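The paper's exact formulation is not reproduced in the abstract, but the baseline it improves on can be sketched. The classical simple-reactance method estimates fault distance from the imaginary part of the apparent impedance seen locally, which is precisely where neglecting shunt admittance (and load current) introduces error; the line data and phasors below are made up for illustration:

```python
import numpy as np

def simple_reactance_distance(v_phasor, i_phasor, x_per_km):
    """Classical simple-reactance estimate of fault distance (km) from local
    fundamental voltage and current phasors.  It neglects line shunt
    admittance and any reactive component of the fault path."""
    z_apparent = v_phasor / i_phasor
    return z_apparent.imag / x_per_km

# Synthetic single-phase example: fault 12 km down a line with
# z = 0.30 + 0.55j ohm/km through a purely resistive 5-ohm fault path,
# with all fault current flowing through the measuring point.
z_line = 0.30 + 0.55j
d_true, r_fault = 12.0, 5.0
i_relay = 100.0 * np.exp(1j * np.deg2rad(-30.0))   # measured current phasor
v_relay = i_relay * (d_true * z_line + r_fault)    # measured voltage phasor

print(simple_reactance_distance(v_relay, i_relay, z_line.imag))  # → 12.0
```

The estimate is exact in this idealized case; on real feeders, laterals, intermediate loads, and the shunt admittance that the paper models all bias it, which motivates the improvements described above.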
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same feature set throughout training, AME tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without degrading the performance of the generated models. Preliminary experiments showed gains in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several lines of research are proposed as future work.
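The insert-or-remove-one-feature idea can be sketched generically as a greedy subset search. The scoring function, penalty, and synthetic data below are illustrative stand-ins, not MaxEnt's actual gain computation:

```python
import numpy as np

def adaptive_feature_search(score, n_features, n_iter=50, seed=0):
    """Greedy subset search: at each iteration, try to insert or remove a
    single feature and keep the move only if the score improves.  A toy
    stand-in for the single-feature-per-iteration idea; the paper's AME
    algorithm itself is not reproduced here."""
    rng = np.random.default_rng(seed)
    active = set()
    best = score(active)
    for _ in range(n_iter):
        f = int(rng.integers(n_features))
        trial = active ^ {f}        # toggle: insert if absent, remove if present
        s = score(trial)
        if s > best:
            active, best = trial, s
    return sorted(active), best

# Toy score: how well the chosen columns of X explain y by least squares,
# minus a small per-feature penalty so the search converges to a small set.
data_rng = np.random.default_rng(1)
X = data_rng.normal(size=(200, 6))
y = 2.0 * X[:, 1] - 3.0 * X[:, 4]            # only features 1 and 4 matter

def score(subset):
    if not subset:
        return -np.var(y)
    cols = X[:, sorted(subset)]
    resid = y - cols @ np.linalg.lstsq(cols, y, rcond=None)[0]
    return -np.mean(resid**2) - 0.01 * len(subset)

print(adaptive_feature_search(score, 6))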
Abstract:
This paper presents a new hybrid method for assessing the risk of interruptions to sensitive processes due to faults in electric power distribution systems. The method determines indices related to long-duration interruptions and short-duration voltage variations (SDVV), such as voltage sags and swells, for each customer supplied by the distribution network. The frequency of such occurrences and their impact on customer processes are determined for each bus and classified according to magnitude and duration. The method is based on information about network configuration, system parameters and protective devices. It randomly generates a number of fault scenarios in order to map risk areas for long-duration interruptions and for voltage sags and swells, including the frequency of events by magnitude and duration. Based on sensitivity curves, the method determines frequency indices for disruptions in customer processes, representing equipment malfunction and possible process interruptions due to voltage sags and swells. This approach allows the annual costs associated with each of the evaluated power quality indices to be assessed.
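The random-fault-scenario step can be sketched as a small Monte Carlo experiment: draw random fault parameters, compute the sag each customer would see with a toy voltage-divider model, and count the events that violate a tolerance (sensitivity) curve. The network model, distributions, and curve below are illustrative assumptions, not the paper's:

```python
import numpy as np

def simulate_sag_exposure(n_faults=10_000, seed=0):
    """Monte Carlo sketch of sag risk assessment: sample random fault-path
    impedances and clearing times, compute retained voltage with a simple
    voltage-divider model, and count events violating a CBEMA/ITIC-style
    sensitivity curve.  All numbers are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    z_src = 0.1                                     # source impedance (pu)
    z_fault = rng.uniform(0.0, 1.0, n_faults)       # fault-path impedance (pu)
    duration = rng.lognormal(-2.0, 0.5, n_faults)   # clearing time (s)
    sag = z_fault / (z_src + z_fault)               # retained voltage (pu)
    # Toy sensitivity curve: the process trips if V < 0.7 pu for > 0.05 s.
    trips = (sag < 0.7) & (duration > 0.05)
    return sag, duration, trips.mean()

sag, dur, trip_rate = simulate_sag_exposure()
print(trip_rate)                                    # per-fault trip probability
```

Multiplying the resulting trip frequency by a per-trip cost gives the kind of annual power-quality cost index the abstract describes.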
Abstract:
This paper presents a computational implementation of an evolutionary algorithm (EA) to tackle the reconfiguration of radial distribution systems. The developed module considers power quality indices such as long-duration interruptions and customer process disruptions due to voltage sags, using the Monte Carlo simulation method. Power quality costs are modeled in the mathematical problem formulation and added to the cost of network losses. The proposed EA encoding uses a decimal representation. The EA operators considered for the reconfiguration algorithm, namely selection, recombination and mutation, are analyzed. Several selection procedures are examined: tournament, elitism and a mixed technique combining both. The recombination operator was developed around a chromosome structure that maps the network branches and system radiality, together with a structure that takes into account the network topology and the feasibility of network operation when exchanging genetic material. The initial population topologies are generated at random, with radial configurations produced by the Prim and Kruskal algorithms, which rapidly build minimum spanning trees. (C) 2009 Elsevier B.V. All rights reserved.
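The random radial initial population can be sketched with a Kruskal-style pass over randomly ordered branches, using union-find to reject any branch that would close a loop. The 5-bus network below is a made-up example:

```python
import random

def random_radial_config(n_buses, branches, seed=None):
    """Draw a random radial (spanning-tree) configuration: shuffle the
    candidate branches, then accept each one only if it does not close a
    loop (Kruskal-style pass with union-find).  A sketch of the random
    initial-population step, not the paper's exact encoding."""
    rng = random.Random(seed)
    edges = list(branches)
    rng.shuffle(edges)
    parent = list(range(n_buses))

    def find(a):                       # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                   # closing this branch keeps radiality
            parent[ru] = rv
            tree.append((u, v))
    return tree

# 5-bus meshed network: any radial configuration closes exactly 4 branches.
branches = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
print(len(random_radial_config(5, branches, seed=42)))   # → 4
```

Different seeds yield different spanning trees, which is exactly what a diverse initial population needs; every individual is feasible (radial) by construction.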
Abstract:
Concrete offshore platforms are subjected to several loading combinations and thus require as general an analysis as possible. They can be designed using the concepts adopted for shell elements, but their resistance to shear forces must be verified at particular cross-sections. This work on the design of shell elements uses the three-layer shell theory. The elements are subjected to combined membrane and plate loading, totaling eight internal force components: three membrane forces, three moments (two out-of-plane bending moments and one in-plane, or torsion, moment) and two shear forces. The adopted design method, which applies the iterative process proposed by Lourenco & Figueiras (1993) to the equilibrium equations developed by Gupta (1986), is compared with results for experimentally tested shell elements found in the literature, using the program DIANA.
Abstract:
On February 6, 1994, a large debris flow developed because of intense rains in an 800-m-high mountain range called Serra do Cubatao, the local name for the Serra do Mar, located along the coast of the state of Sao Paulo, Brazil. It affected the Presidente Bernardes Refinery, owned by Petrobras, in Cubatao. The damages amounted to about US $40 million because of muck cleaning, repairs, and a 3-week interruption of operations. This prompted Petrobras to commission studies, carried out by the authors, to develop protection works, which were built at a cost of approximately US $12 million. The paper describes the studies conducted on debris flow mechanics. A new criterion for defining the rainfall intensities that trigger debris flows is presented, as well as a correlation of the slipped area with soil porosity and rainfall intensity. Also presented are (a) an actual grain size distribution of a deposited material, determined in the laboratory and by a large-scale field test, and (b) the size distribution of large boulders along the river bed. Based on theory, empirical experience and back-analysis of the events, the main parameters, such as the front velocity, the peak discharge and the volume of transported sediments, were determined on a rational basis for the design of the protection works. Finally, the paper describes the set of protection works built, emphasizing their concept and function; they also included some low-cost innovative works.
Abstract:
As many countries move toward water sector reforms, practical questions have emerged about how water management institutions can better effect the allocation, regulation, and enforcement of water rights. The problem of water being unavailable to tail-enders on irrigation systems in developing countries, due to unlicensed upstream diversions, is well documented. The reliability of access, or equivalently the uncertainty associated with water availability at the diversion point, becomes a parameter likely to influence users' applications for water licenses, as well as their willingness to pay for licensed use. The ability of a water agency to reduce this uncertainty through effective water rights enforcement depends on the agency's fiscal ability to monitor and enforce licensed use. This paper explores the interplay between the users and the agency, considering the hydraulic structure, or sequence of water use, and the parameters that define the users' and the agency's economics. The potential for free-rider behavior by the users, as well as their proposals for licensed use, are derived conditional on this setting. The analyses are developed in the framework of the theory of "Law and Economics," with user interactions modeled as a game-theoretic enterprise. The state of Ceara, Brazil, is used loosely as an example setting, with parameter values for the experiments indexed to be approximately those relevant for current decisions. The potential for using these ideas in participatory decision making is discussed. This paper is an initial attempt to develop a conceptual framework for analyzing such situations, with a focus on water rights enforcement in reservoir-canal systems.
Abstract:
This paper describes the development of an optimization model for the management and operation of a large-scale, multireservoir water supply distribution system with preemptive priorities. The model considers multiple objectives and hedging rules. During periods of drought, when the water supply is insufficient to meet the planned demand, appropriate rationing factors are applied to reduce supply. The water distribution system is formulated as a network and solved by the GAMS modeling system for mathematical programming and optimization. A user-friendly interface was developed to facilitate data manipulation and to generate graphs and tables for decision makers. The optimization model and its interface form a decision support system (DSS) that can be used to configure a water distribution system for capacity expansion and reliability studies. Several examples demonstrate the utility and versatility of the DSS under different supply and demand scenarios, including applications to one of the largest water supply systems in the world, the Sao Paulo Metropolitan Area Water Supply Distribution System in Brazil.
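The interaction of preemptive priorities and hedging can be sketched with a toy allocation rule: under drought, every demand is first rationed by a hedging factor, then water is granted in priority order. The rationing factor, priority scheme, and numbers below are illustrative assumptions, not the paper's GAMS model:

```python
def allocate_with_hedging(available, demands, priorities, hedge=0.8):
    """Toy priority allocation with a hedging rule: if supply cannot cover
    total demand, ration every demand by `hedge`, then grant water in
    priority order (1 = highest).  Illustrative sketch only."""
    targets = list(demands)
    if available < sum(demands):                   # drought: apply rationing
        targets = [hedge * d for d in demands]
    alloc = [0.0] * len(demands)
    order = sorted(range(len(demands)), key=lambda i: priorities[i])
    remaining = available
    for i in order:
        alloc[i] = min(targets[i], remaining)      # preemptive: seniors first
        remaining -= alloc[i]
    return alloc

# Two high-priority cities and an irrigation district share 100 units
# during a drought in which total demand is 150 units.
print(allocate_with_hedging(100.0, [60.0, 50.0, 40.0], [1, 1, 2]))
```

The high-priority users receive their hedged targets in full while the junior user absorbs the remaining shortfall, which is the qualitative behavior a preemptive-priority model with hedging produces.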
Abstract:
In this paper a bond graph methodology is used to model incompressible fluid flows with viscous and thermal effects. The distinctive characteristic of these flows is the role of pressure, which does not behave as a state variable but as a function that must act so that the resulting velocity field has zero divergence. Velocity and entropy per unit volume are used as independent variables for a single-phase, single-component flow. Time-dependent nodal values and interpolation functions are introduced to represent the flow field, from which nodal vectors of velocity and entropy are defined as state variables. The system of momentum and continuity equations coincides with the one obtained by applying the Galerkin method to the weak formulation of the problem in finite elements. The integral incompressibility constraint is derived from the integral conservation of mechanical energy. The weak formulation of the thermal energy equation is modeled with true bond graph elements in terms of nodal vectors of temperature and entropy rates, resulting in a Petrov-Galerkin method. The resulting bond graph shows the coupling between the mechanical and thermal energy domains through the viscous dissipation term. All kinds of boundary conditions are handled consistently and can be represented as generalized effort or flow sources. A causality assignment procedure is derived for the resulting graph that satisfies the second law of thermodynamics. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
Void fraction sensors are important instruments not only for monitoring two-phase flow, but also for furnishing a key parameter for obtaining flow pattern maps and two-phase flow heat transfer coefficients. This work presents experimental results obtained with two axially spaced multiple-electrode impedance sensors tested for void fraction measurement in upward air-water two-phase flow in a vertical tube. An electronic circuit was developed for signal generation and post-processing of each sensor signal. By phase-shifting the supply signals to the electrodes, it was possible to establish a rotating electric field sweeping across the test section. The fundamental motivation for a multiple-electrode configuration is to reduce the signal's sensitivity to non-uniform void fraction distributions over the cross-section. Static calibration curves were obtained for both sensors, and dynamic signal analyses were carried out for bubbly, slug, and churn-turbulent flows. Flow parameters such as Taylor bubble velocity and length were obtained using cross-correlation techniques. As an application of the tested sensors, vertical flow patterns could be identified by applying the probability density function technique to void fractions ranging from 0% to nearly 70%.
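The cross-correlation step works because the downstream sensor sees the same void fluctuation slightly later than the upstream one: the lag that maximizes their cross-correlation is the transit time, and velocity follows from the known sensor spacing. A minimal sketch with a synthetic trace (spacing, sampling rate, and delay are assumed values):

```python
import numpy as np

def transit_velocity(sig_up, sig_down, spacing_m, fs_hz):
    """Estimate the translation velocity of flow structures from two
    axially spaced void-fraction signals: the cross-correlation peak
    gives the transit time (in samples) between the sensors."""
    a = sig_up - np.mean(sig_up)
    b = sig_down - np.mean(sig_down)
    xcorr = np.correlate(b, a, mode="full")
    lag = int(np.argmax(xcorr)) - (len(a) - 1)   # samples downstream trails
    return spacing_m * fs_hz / lag

# Synthetic slug-flow trace: the downstream sensor sees the same void
# fluctuation 25 samples later (fs = 1 kHz, sensors 50 mm apart).
rng = np.random.default_rng(0)
base = rng.normal(size=2000)
up, down = base[25:], base[:-25]                 # down lags up by 25 samples

print(transit_velocity(up, down, 0.05, 1000.0))  # → 2.0 (m/s)
```

The same peak-lag estimate applied to the leading and trailing edges of a single structure yields the Taylor bubble length mentioned above.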
Abstract:
This paper presents concentration inequalities and laws of large numbers under weak assumptions of irrelevance that are expressed using lower and upper expectations. The results build upon De Cooman and Miranda's recent inequalities and laws of large numbers. The proofs indicate connections between the theory of martingales and the concepts of epistemic and regular irrelevance. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
The ability to control both the minimum size of holes and the minimum size of structural members is an essential requirement in the topology optimization design process for manufacturing. This paper addresses both requirements by means of a unified approach involving mesh-independent projection techniques. An inverse projection is developed to control the minimum hole size, while a standard direct projection scheme is used to control the minimum length of structural members. In addition, a heuristic scheme that combines both contrasting requirements simultaneously is discussed. Two topology optimization implementations are contributed: one in which the projection (either inverse or direct) is used at each iteration, and another that follows a two-phase scheme. In the first phase, compliance minimization is carried out without any projection until convergence. In the second phase, the chosen projection scheme is applied iteratively until a solution satisfying either the minimum member size or the minimum hole size is obtained. Examples demonstrate the various features of the projection-based techniques presented.
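Projection schemes of this family typically combine a mesh-independent density filter with a smoothed Heaviside function. The 1-D sketch below illustrates that mechanism only; the threshold interpretation given in the comments is a common reading of direct versus inverse projection, not this paper's specific operators:

```python
import numpy as np

def density_filter(rho, radius):
    """Linear (cone-weight) density filter on a 1-D design field; a
    minimal stand-in for the mesh-independent filtering step."""
    n = len(rho)
    out = np.empty(n)
    for i in range(n):
        j = np.arange(max(0, i - radius), min(n, i + radius + 1))
        w = radius + 1 - np.abs(j - i)           # cone weights, peak at i
        out[i] = np.sum(w * rho[j]) / np.sum(w)
    return out

def heaviside_projection(rho_f, beta=8.0, eta=0.5):
    """Smoothed Heaviside projection of the filtered field.  Lower
    thresholds (eta < 0.5) dilate material, enforcing a minimum member
    size; higher thresholds erode it, which is one way to realize a
    minimum hole size.  Illustrative only."""
    num = np.tanh(beta * eta) + np.tanh(beta * (rho_f - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

rho = np.zeros(21)
rho[8:13] = 1.0                                  # a 5-element member
member = heaviside_projection(density_filter(rho, 3), beta=16, eta=0.3)
print(member.round(3))                           # crisp 0/1 field after projection
```

Filtering smears the member over the filter radius; the projection snaps the smeared field back to a crisp 0/1 layout whose feature size is tied to the radius, which is the length-scale control the abstract refers to.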
Abstract:
This work deals with the problem of minimizing the waste of space that occurs in the rotational placement of a set of irregular two-dimensional items inside a two-dimensional container. The problem is approached with a heuristic based on Simulated Annealing (SA) with an adaptive neighborhood. The objective function is evaluated in a constructive approach, with the items placed sequentially. The placement is governed by three types of parameters: the sequence of placement, the rotation angle and the translation. The rotation and translation applied to each polygon are cyclic continuous parameters, while the sequence of placement defines a combinatorial problem; it is therefore necessary to control both cyclic continuous and discrete parameters. Approaches described in the literature deal with only one type of parameter (sequence of placement or translation). In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate.
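The adaptive-neighborhood idea can be sketched in a generic, textbook form: perturb a cyclic continuous parameter, accept moves with the Metropolis criterion, and grow or shrink the step size so the acceptance rate stays useful. The cost function and all tuning constants below are illustrative, not the placement heuristic's actual move set:

```python
import math
import random

def adaptive_sa(cost, x0, t0=1.0, cooling=0.95, n_outer=60, n_inner=20, seed=0):
    """Simulated annealing over one cyclic continuous parameter with an
    adaptive neighborhood: the step size is enlarged when too many moves
    are accepted and shrunk when too few are.  Generic sketch only."""
    rng = random.Random(seed)
    x, temp, step = x0, t0, 1.0
    best_x, best_c = x0, cost(x0)
    for _ in range(n_outer):
        accepted = 0
        for _ in range(n_inner):
            cand = (x + rng.uniform(-step, step)) % (2 * math.pi)  # cyclic move
            dc = cost(cand) - cost(x)
            if dc < 0 or rng.random() < math.exp(-dc / temp):      # Metropolis
                x = cand
                accepted += 1
                if cost(x) < best_c:
                    best_x, best_c = x, cost(x)
        rate = accepted / n_inner
        if rate > 0.6:                  # too easy: widen the neighborhood
            step *= 1.5
        elif rate < 0.3:                # too hard: refine locally
            step *= 0.7
        temp *= cooling
    return best_x, best_c

# Minimize a 2*pi-periodic cost whose minimum sits at angle pi.
angle, value = adaptive_sa(lambda a: 1.0 - math.cos(a - math.pi), 0.0)
print(round(value, 4))
```

Tying the step size to the acceptance rate is one simple way to realize the abstract's goal of increasing the number of accepted solutions; the paper additionally adapts a per-parameter probability distribution and handles the discrete placement sequence, which this sketch omits.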
Abstract:
We examine the representation of judgements of stochastic independence in probabilistic logics. We focus on a relational logic where (i) judgements of stochastic independence are encoded by directed acyclic graphs, and (ii) probabilistic assessments are flexible in the sense that they are not required to specify a single probability measure. We discuss issues of knowledge representation and inference that arise from our particular combination of graphs, stochastic independence, logical formulas and probabilistic assessments. (C) 2007 Elsevier B.V. All rights reserved.