18 results for distribution network SCADA measurement point

at Universitat de Girona, Spain


Relevance:

100.00%

Publisher:

Abstract:

Electric power quality comprises the quality of supply and the quality of customer service. Quality of supply, in turn, is considered to consist of two parts: the waveform and the continuity. This thesis addresses continuity of supply through fault location. This problem is relatively well solved in transmission systems, where the homogeneous characteristics of the line, measurements at both terminals and the availability of various devices allow the fault site to be located with relatively high precision. In distribution systems, however, fault location is a complex and still unsolved problem. The complexity is mainly due to the presence of non-homogeneous conductors, intermediate loads, lateral branches, and imbalances in the system and the load. Moreover, these systems usually have measurements only at the substation, and a simplified model of the circuit. The main location efforts have been oriented towards methods that use the fundamental of the voltage and current at the substation to estimate the reactance up to the fault. Since obtaining the reactance makes it possible to quantify the distance to the fault site through the use of the model, these are referred to as Model-Based Methods (MBM). However, some of their disadvantages are associated with the need for a good system model and with the possibility of locating several sites where the fault may have occurred, that is, multiple estimation of the fault site can arise. As a contribution, this thesis presents an analysis and comparative test of several of the most frequently cited MBM.
Additionally, the solution is complemented with methods that use other types of information, such as that obtained from historical fault databases containing voltage and current records measured at the substation (not limited to the fundamental). As tools for extracting information from these records, two classification techniques (LAMDA and SVM) are used and tested. These relate the features obtained from the signal to the zone under fault and are referred to in this document as Knowledge-Based Classification Methods (MCBC, from the Spanish acronym). The information used by the MCBC is obtained from the voltage and current records measured at the distribution substation before, during and after the fault. The records are processed to obtain the following descriptors: a) the magnitude of the voltage variation (dV), b) the variation of the current magnitude (dI), c) the variation of the power (dS), d) the fault reactance (Xf), e) the frequency of the transient (f), and f) the maximum eigenvalue of the current correlation matrix (Sv), each of which has been selected because it facilitates fault location. From these descriptors, different training and validation sets for the MCBC are proposed, and by means of a methodology that shows the possibility of finding relationships between these sets and the zones in which the fault occurs, those with the best behaviour are selected. The application results show that combining the MCBC with the MBM can reduce the problem of multiple estimation of the fault site. The MCBC determines the fault zone, while the MBM finds the distance from the measuring point to the fault; their integration in a hybrid scheme takes the best characteristics of each method. In this document, what is referred to as hybrid is the complementary combination of the MBM and the MCBC.
Finally, to verify the contributions of this thesis, a hybrid integration scheme for fault location is proposed and tested on two different distribution systems. Both the methods that use the system parameters and are based on impedance estimation (MBM), and those that use the descriptors as information and are based on classification techniques (MCBC), prove valid for solving the fault location problem. Both proposed methodologies have advantages and disadvantages, but according to the method-integration theory presented, a high degree of complementarity is achieved, allowing the formulation of hybrids that improve the results, reducing or avoiding the problem of multiple estimation of the fault.
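The reactance-based MBM idea described above can be sketched in a few lines: the apparent impedance seen from the substation is V/I, and its imaginary part divided by the line's per-kilometre reactance gives a distance estimate. The phasor values and the 0.35 ohm/km reactance below are illustrative assumptions, not data from the thesis.

```python
import cmath

def fault_distance_km(v_phasor, i_phasor, x_per_km):
    """Reactance method: the imaginary part of the apparent impedance
    V/I seen at the measuring point, divided by the line's
    per-kilometre reactance, estimates the distance to the fault.
    Every point of the feeder with this reactance is a candidate,
    which is exactly the multiple-estimation problem noted above."""
    z_apparent = v_phasor / i_phasor
    return z_apparent.imag / x_per_km

# Toy fault-time phasors (assumed values, for illustration only).
v = cmath.rect(4800.0, 0.0)    # V, sagged phase voltage
i = cmath.rect(1200.0, -0.7)   # A, lagging fault current
print(round(fault_distance_km(v, i, 0.35), 2))
```

The same apparent impedance is also the source of the Xf descriptor used by the MCBC.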

Relevance:

100.00%

Publisher:

Abstract:

Monitoring a distribution network implies working with a huge amount of data coming from the different elements that interact in the network. This paper presents a visualization tool that simplifies the task of searching the database for useful information applicable to fault management or preventive maintenance of the network

Relevance:

100.00%

Publisher:

Abstract:

The thesis aims to understand processes of entrepreneurship that try to create businesses or products with a high degree of complexity. This complexity comes from the fact that these products or initiatives can only be viable with the concurrence of a large number of heterogeneous actors (public, private, from different regions, etc.) which interact in a relational context. A case with these characteristics is the Camí dels Bons Homes. The thesis analyzes the evolution of the relational network from the point of view of its structure and the content of its ties. The results show and explain the observed changes in the network structure and in the content of the ties. This analysis of the content of ties contributes a new systematization and operationalization of ties' content. Moreover, the analysis takes into account negative ties, an issue less discussed in the literature.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a new charging scheme for cost distribution along a point-to-multipoint connection when destination nodes are responsible for the cost. The scheme focuses on QoS considerations, and a complete range of choices is presented, going from a scheme that is safe for the network operator to one that is fair to the customer; the in-between cases are also covered. Specific and general problems, such as the incidence of users disconnecting dynamically, are also discussed. The aim of the scheme is to encourage users to disperse the resource demand instead of maintaining a large number of direct connections to the data source, which would consume more bandwidth at the source than necessary; dispersing demand thus benefits the overall performance of the network. Its implementation must balance the need to offer a competitive service against the risk that the network operator does not recover the cost of that service. Throughout the paper, multicast charging is discussed without reference to any specific category of service. The proposed scheme is also evaluated against the criteria set proposed in the European ATM charging project CANCAN
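The "fair to the customer" end of the range described above can be illustrated with a toy cost-sharing sketch: each link's cost is split equally among the receivers downstream of it, so links shared near the source cost each receiver only a fraction. The tree, costs and function names below are hypothetical, not the paper's scheme.

```python
def uses_link(parent, node, link):
    """True if the path from node up to the source traverses link."""
    while node in parent:
        if (parent[node], node) == link:
            return True
        node = parent[node]
    return False

def fair_shares(parent, link_cost, receivers):
    """Split every link's cost equally among the receivers downstream
    of it (the fair end of the range; charging each receiver its full
    unicast path cost would be the operator-safe end)."""
    share = {r: 0.0 for r in receivers}
    for link, cost in link_cost.items():
        downstream = [r for r in receivers if uses_link(parent, r, link)]
        for r in downstream:
            share[r] += cost / len(downstream)
    return share

# Toy tree: source S feeds node A; A feeds receivers R1 and R2,
# so the S-A link cost of 10 is halved between them.
parent = {"A": "S", "R1": "A", "R2": "A"}
cost = {("S", "A"): 10.0, ("A", "R1"): 2.0, ("A", "R2"): 2.0}
print(fair_shares(parent, cost, ["R1", "R2"]))
```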

Relevance:

30.00%

Publisher:

Abstract:

Our purpose in this article is to define a network structure based on two egos, instead of the egocentered (one-ego) or the complete (n-ego) network. We describe the characteristics and properties of this kind of network, which we call a “nosduocentered network”, comparing it with complete and egocentered networks. The key point of this kind of network is that relations exist between the two main egos and all alters, but relations among the alters themselves are not observed. We then use new social network measures adapted to the nosduocentered network, some of which are based on measures for complete networks such as degree, betweenness or closeness centrality and density, while others are tailor-made for nosduocentered networks. We specify three regression models to predict the research performance of PhD students from these social network measures for different networks, such as advice, collaboration, emotional support and trust. Data used are from Slovenian PhD students and their s
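The observability constraint described above changes even the simplest measures. A minimal sketch of a density adapted to the nosduocentered case: the denominator counts only observable dyads (each ego with each alter, plus the ego-ego dyad), not the complete-network n(n-1)/2. This is an illustrative adaptation, not necessarily the article's exact formula.

```python
def nosduo_density(ego1_ties, ego2_ties, alters, ego_ego_tie):
    """Density over the observable dyads of a nosduocentered network:
    2*|alters| ego-alter dyads plus the single ego-ego dyad.
    Alter-alter dyads are unobserved, so they are excluded."""
    observable = 2 * len(alters) + 1
    present = len(ego1_ties) + len(ego2_ties) + (1 if ego_ego_tie else 0)
    return present / observable

# Two egos, three alters; each ego names two alters, egos are tied.
print(round(nosduo_density({"a", "b"}, {"b", "c"}, {"a", "b", "c"}, True), 3))  # → 0.714
```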

Relevance:

30.00%

Publisher:

Abstract:

The Dirichlet family owes its privileged status within simplex distributions to its ease of interpretation and good mathematical properties. In particular, we recall fundamental properties for the analysis of compositional data, such as closure under amalgamation and subcomposition. From a probabilistic point of view, it is characterised (uniquely) by a variety of independence relationships, which makes it indisputably the reference model for expressing the non-trivial idea of substantial independence for compositions. Indeed, its well-known inadequacy as a general model for compositional data stems from this independence structure together with the poverty of its parametrisation. In this paper a new class of distributions (called the Flexible Dirichlet), capable of handling various dependence structures and containing the Dirichlet as a special case, is presented. The new model exhibits a considerably richer parametrisation which, for example, allows the means and (part of) the variance-covariance matrix to be modelled separately. Moreover, the model preserves some good mathematical properties of the Dirichlet, i.e. closure under amalgamation and subcomposition, with new parameters simply related to the parent composition parameters. Furthermore, the joint and conditional distributions of subcompositions and relative totals can be expressed as simple mixtures of two Flexible Dirichlet distributions. The basis generating the Flexible Dirichlet, while keeping compositional invariance, shows a dependence structure which allows various forms of partitional dependence to be contemplated by the model (e.g. non-neutrality, subcompositional dependence and subcompositional non-invariance), independence cases being identified by suitable parameter configurations. In particular, within this model substantial independence among subsets of components of the composition naturally occurs when the subsets have a Dirichlet distribution
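The mixture idea mentioned above can be made concrete with a sampling sketch: draw a component index with given weights, then sample a Dirichlet whose parameter vector is shifted in that coordinate. The parametrisation below (alpha, tau, p) is a simplified assumption in the spirit of the Flexible Dirichlet; the exact parametrisation is in the paper.

```python
import random

def dirichlet(alpha, rng):
    """Standard Dirichlet draw via normalised Gamma variates."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def flexible_dirichlet(alpha, tau, p, rng):
    """Mixture representation: pick component i with probability p[i],
    then sample Dirichlet(alpha + tau * e_i), where e_i is the i-th
    unit vector. A sketch, not the paper's exact definition."""
    i = rng.choices(range(len(p)), weights=p)[0]
    shifted = [a + (tau if j == i else 0.0) for j, a in enumerate(alpha)]
    return dirichlet(shifted, rng)

rng = random.Random(7)
x = flexible_dirichlet([1.0, 2.0, 3.0], 4.0, [0.2, 0.5, 0.3], rng)
print(round(sum(x), 6))  # a composition: components sum to 1
```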

Relevance:

30.00%

Publisher:

Abstract:

Our goal in this paper is to assess the reliability and validity of egocentered network data using multilevel analysis (Muthen, 1989; Hox, 1993) under the multitrait-multimethod approach. The confirmatory factor analysis model for multitrait-multimethod data (Werts & Linn, 1970; Andrews, 1984) is used for our analyses. In this study we reanalyse part of the data of another study (Kogovšek et al., 2002) carried out on a representative sample of the inhabitants of Ljubljana. The traits used in our article are the name interpreters. We consider egocentered network data to be hierarchical; therefore a multilevel analysis is required. We use Muthen's partial maximum likelihood approach, called the pseudobalanced solution (Muthen, 1989, 1990, 1994), which produces estimates close to maximum likelihood for large ego sample sizes (Hox & Mass, 2001). Several analyses are carried out in order to compare this multilevel analysis to classic methods of analysis such as those in Kogovšek et al. (2002), who analysed the data only at the group (ego) level, considering averages over all alters within each ego. We show that some of the results obtained by classic methods are biased and that multilevel analysis provides more detailed information that greatly enriches the interpretation of the reliability and validity of hierarchical data. Within- and between-ego reliabilities and validities and other related quality measures are defined, computed and interpreted
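The within/between distinction that motivates the multilevel approach can be illustrated with a crude variance decomposition for alters nested in egos; multilevel SEM refines this idea with a full measurement model. The data below are made up.

```python
def variance_components(groups):
    """Split total variance of values (alters) nested in groups (egos)
    into a between-group and a within-group part. Averaging alters
    within each ego, as the classic analysis does, keeps only the
    between part and discards the within-ego information."""
    all_vals = [v for g in groups for v in g]
    n = len(all_vals)
    grand = sum(all_vals) / n
    within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g) / n
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups) / n
    return between, within

# Two egos, three alter reports each (hypothetical scores).
b, w = variance_components([[1, 2, 3], [7, 8, 9]])
print(round(b, 3), round(w, 3))  # → 9.0 0.667
```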

Relevance:

30.00%

Publisher:

Abstract:

We present Very Long Baseline Interferometry (VLBI) observations of the high mass X-ray binary LS I +61˚303, carried out with the European VLBI Network (EVN). Over the 11-hour observing run, performed ~10 days after a radio outburst, the radio source showed a constant flux density, which allowed sensitive imaging of the emission distribution. The structure in the map shows a clear extension to the southeast. Comparing our data with previous VLBI observations, we interpret the extension as a collimated radio jet as found in several other X-ray binaries. Assuming that the structure is the result of an expansion that started at the onset of the outburst, we derive an apparent expansion velocity of 0.003 c, which, in the context of Doppler boosting, corresponds to an intrinsic velocity of at least 0.4 c for an ejection close to the line of sight. From the apparent velocity in all available epochs we are able to establish variations in the ejection angle which imply a precessing accretion disk. Finally we point out that LS I +61˚303, like SS 433 and Cygnus X-1, shows evidence for an emission region almost orthogonal to the relativistic jet
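The deprojection behind the 0.003 c versus 0.4 c statement uses the standard apparent-velocity relation beta_app = beta*sin(theta)/(1 - beta*cos(theta)), solved for the intrinsic beta. The 0.25-degree viewing angle below is an illustrative assumption showing that a tiny apparent speed at small theta already implies a mildly relativistic jet.

```python
import math

def intrinsic_beta(beta_apparent, theta_rad):
    """Intrinsic jet speed (in units of c) from the apparent sky-plane
    speed of an approaching component at angle theta to the line of
    sight, using beta_app = beta*sin(theta)/(1 - beta*cos(theta))."""
    s, c = math.sin(theta_rad), math.cos(theta_rad)
    return beta_apparent / (s + beta_apparent * c)

# Apparent expansion of 0.003 c, assumed viewing angle of 0.25 deg.
print(round(intrinsic_beta(0.003, math.radians(0.25)), 3))  # → 0.407
```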

Relevance:

30.00%

Publisher:

Abstract:

IP-based networks still do not offer the degree of reliability required by new multimedia services, and achieving such reliability will be crucial to the success or failure of the new Internet generation. Most existing schemes for QoS routing do not take into consideration parameters concerning the quality of protection, such as packet loss or restoration time. In this paper, we define a new paradigm for developing protection strategies for building reliable MPLS networks, based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure. Having mathematically formulated these components, we point out the most relevant ones. Experimental results demonstrate the benefits of the NPD when used to enhance some current QoS routing algorithms in order to offer a certain degree of protection
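A route-selection use of the two components above can be sketched as a weighted combination of FSD and FID; the linear form and equal weights below are assumptions for illustration, not the paper's formulation.

```python
def network_protection_degree(fsd, fid, w_prob=0.5, w_impact=0.5):
    """Illustrative NPD score combining the a priori failure
    probability (FSD) and the a posteriori failure impact (FID);
    lower means better protected. Weights are assumed, not from
    the paper."""
    return w_prob * fsd + w_impact * fid

# Compare two candidate routes: the second is preferable here
# because its lower impact outweighs its higher failure probability.
a = network_protection_degree(0.2, 0.6)
b = network_protection_degree(0.4, 0.3)
print(round(a, 2), round(b, 2))  # → 0.4 0.35
```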

Relevance:

30.00%

Publisher:

Abstract:

The authors focus on one of the methods for connection acceptance control (CAC) in an ATM network: the convolution approach. With the aim of reducing the cost in terms of calculation and storage requirements, they propose the use of the multinomial distribution function. This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements. This in turn makes possible a simple deconvolution process. Moreover, under certain conditions additional improvements may be achieved
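The shortcut described above can be sketched for the homogeneous case, where the multinomial reduces to a binomial: the probability that k of n identical on/off sources are active follows directly from a closed formula, replacing an n-fold convolution. The source count and activity probability below are illustrative.

```python
from math import comb

def bandwidth_pmf(n_sources, p_active):
    """Probability that exactly k of n identical on/off sources are
    active at once: the binomial special case of the multinomial
    approach, computed directly instead of by convolution."""
    return [comb(n_sources, k) * p_active**k * (1 - p_active)**(n_sources - k)
            for k in range(n_sources + 1)]

pmf = bandwidth_pmf(10, 0.3)
# CAC-style question: probability that more than 6 sources
# transmit simultaneously (i.e. the link would be oversubscribed).
print(round(sum(pmf[7:]), 4))  # → 0.0106
```

Removing a departing source ("deconvolution") is equally direct: recompute with n_sources reduced by one.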

Relevance:

30.00%

Publisher:

Abstract:

Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e. the number of labels that can be used) as a means of simplifying management of the underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label spaces in Multiprotocol Label Switching (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of its origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. Due to this, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS - the Merging Problem - cannot be solved optimally with a polynomial algorithm (it is NP-complete), since it involves a hard decision problem. However, in this letter the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By overriding this tree-shape consideration, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-solvable instead of NP-complete. In addition, simulation experiments confirm that without the tree-branch selection problem, more labels can be saved
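The saving that label merging buys can be illustrated with a small sketch: once LSPs heading to the same egress may share one label on a common link, the label count on a link is the number of distinct egresses routed over it, not the number of individual LSPs. This is an illustration of the merging idea, not the Full Label Merging algorithm itself.

```python
from collections import defaultdict

def labels_per_link(paths):
    """Count labels needed on each link with MP2P merging: all LSPs
    to the same egress share one label, so a link needs one label per
    distinct egress it carries traffic towards."""
    egresses = defaultdict(set)
    for path in paths:  # path: list of nodes, last node is the egress
        for a, b in zip(path, path[1:]):
            egresses[(a, b)].add(path[-1])
    return {link: len(e) for link, e in egresses.items()}

# Three LSPs, two egresses: link (B, C) carries two LSPs but both
# head to egress D, so it needs only one label after merging.
paths = [["A", "B", "C", "D"], ["E", "B", "C", "D"], ["A", "B", "F"]]
print(labels_per_link(paths))
```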

Relevance:

30.00%

Publisher:

Abstract:

This paper focuses on the problem of locating single-phase faults in mixed distribution electric systems, with overhead lines and underground cables, using voltage and current measurements at the sending end and the sequence model of the network. Since calculating the series impedance of underground cables is not as simple as in the case of overhead lines, the paper proposes a methodology to estimate the zero-sequence impedance of underground cables starting from previous single-phase faults that occurred in the system, in which an electric arc appeared at the fault location. For this reason, the signal is first pretreated to eliminate its voltage peaks, so that the analysis can be carried out on a signal as close to a sine wave as possible
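The estimation idea above can be sketched as averaging the apparent zero-sequence impedance V0/I0 over a set of pretreated historical fault records. The phasor values below are made up, and the paper's estimator may weight or filter records differently.

```python
def estimate_z0(records):
    """Average the apparent zero-sequence impedance V0/I0 over past
    single-phase fault records (zero-sequence phasors, taken after
    the pretreatment that removes arc-related voltage peaks)."""
    zs = [v0 / i0 for v0, i0 in records]
    return sum(zs) / len(zs)

# Toy records: (V0, I0) phasors from two historical faults.
z0 = estimate_z0([(100 + 300j, 10 + 2j), (95 + 310j, 9 + 2j)])
print(round(z0.real, 2), round(z0.imag, 2))  # → 16.37 28.76
```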

Relevance:

30.00%

Publisher:

Abstract:

The work presented in this paper belongs to the power quality knowledge area and deals with voltage sags in power transmission and distribution systems. Propagating throughout the power network, voltage sags can cause plenty of problems for domestic and industrial loads, and these can be financially very costly. To impose penalties on the responsible party and to improve monitoring and mitigation strategies, sags must be located in the power network. With this worthwhile objective, this paper proposes a new method for associating a sag waveform with its origin in transmission and distribution networks. It does so by developing hybrid methods that employ multiway principal component analysis (MPCA) as a dimension reduction tool. MPCA re-expresses sag waveforms in a new subspace using just a few scores. We train some well-known classifiers with these scores and exploit them for the classification of future sags. The capabilities of the proposed method for dimension reduction and classification are examined using real data gathered from three substations in Catalonia, Spain. The classification rates obtained confirm the effectiveness of the developed hybrid methods as new tools for sag classification
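The step that makes MPCA applicable to sag records is unfolding: each record is a sags x time x channels array, flattened to one row per sag so that a whole waveform becomes a single observation to project onto a few scores. The sketch below shows only this unfolding (MPCA additionally centres, scales and decomposes the unfolded matrix); the toy dimensions are assumptions.

```python
def unfold(tensor):
    """Unfold a 3-way array (sags x time samples x channels) into a
    2-D matrix with one row per sag, concatenating every channel's
    samples: the standard first step of multiway PCA."""
    return [[v for sample in sag for v in sample] for sag in tensor]

# 2 sags, 3 time samples, 2 measurement channels each.
sags = [[[1, 2], [3, 4], [5, 6]],
        [[7, 8], [9, 10], [11, 12]]]
print(unfold(sags))  # → [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12]]
```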

Relevance:

30.00%

Publisher:

Abstract:

Compositional data, also called multiplicative ipsative data, are common in survey research instruments in areas such as time use, budget expenditure and social networks. Compositional data are usually expressed as proportions of a total, whose sum can only be 1. Owing to their constrained nature, statistical analysis in general, and estimation of measurement quality with a confirmatory factor analysis model for multitrait-multimethod (MTMM) designs in particular, are challenging tasks. Compositional data are highly non-normal, as they range within the 0-1 interval. One component can only increase if some other(s) decrease, which results in spurious negative correlations among components that cannot be accounted for by the MTMM model parameters. In this article we show how researchers can use the correlated uniqueness model for MTMM designs in order to evaluate the measurement quality of compositional indicators. We suggest using the additive log-ratio transformation of the data, discuss several approaches to dealing with zero components, and explain how the interpretation of MTMM designs differs from the application to standard unconstrained data. We illustrate the method on data of social network composition expressed in percentages of partner, family, friends and other members, from which we conclude that the face-to-face collection mode is generally superior to the telephone mode, although primacy effects are higher in the face-to-face mode. Compositions of strong ties (such as partner) are measured with higher quality than those of weaker ties (such as other network members)
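The additive log-ratio transformation suggested above maps a D-part composition to unconstrained (D-1)-dimensional space, where standard models apply. A minimal sketch, with a hypothetical network composition as input; as the abstract notes, zero components must be treated before taking logs.

```python
import math

def alr(composition):
    """Additive log-ratio transform: log of each component relative
    to the last one. The result is unconstrained, removing the
    sum-to-1 constraint that induces spurious negative correlations."""
    ref = composition[-1]
    return [math.log(x / ref) for x in composition[:-1]]

# Hypothetical network composition: partner, family, friends, other.
print([round(v, 3) for v in alr([0.40, 0.30, 0.20, 0.10])])  # → [1.386, 1.099, 0.693]
```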

Relevance:

30.00%

Publisher:

Abstract:

Continental aquatic systems represent one of the most threatened ecosystems worldwide, as a consequence of the intensive use humans make of them. The Guadiana basin is not free from these anthropic pressures. Large hydraulic infrastructures and agricultural runoff are just two examples of the serious problems the basin suffers. These problems are especially evident in the upper part of the basin, where water scarcity only aggravates them. All this has generated the urgent need to assess the conservation status of these continental aquatic ecosystems, in order to determine the extent and magnitude of the disturbances affecting them and thus propose management measures aimed at restoring their ecological integrity. The main objective of this work is to determine the distribution patterns of the algal communities (with special mention of the diatom group) and their causes in the Guadiana basin and associated basins, in order to establish and propose tools for assessing the conservation status of the water bodies of these basins.