37 results for distribution networks
in Aston University Research Archive
Abstract:
We propose a simple model that captures the salient properties of distribution networks, and study the possible occurrence of blackouts, i.e., sudden failures of large portions of such networks. The model is defined on a random graph of finite connectivity. The nodes of the graph represent hubs of the network, while the edges represent the links of the distribution network. Both the nodes and the edges carry dynamical two-state variables representing the functioning or dysfunctional state of the element in question. We describe a dynamical process in which the breakdown of a link or node is triggered when the level of maintenance it receives falls below a given threshold. Since maintenance levels themselves depend on the functioning of the network, this form of dynamics can lead to catastrophic breakdown once maintenance levels locally fall below a critical threshold due to fluctuations. We formulate conditions under which such systems can be analyzed in terms of thermodynamic equilibrium techniques, and under these conditions derive a phase diagram characterizing the collective behavior of the system, given its model parameters. The phase diagram is confirmed qualitatively and quantitatively by simulations on explicit realizations of the graph, validating our approach. © 2007 The American Physical Society.
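The abstract does not spell out the precise update rule; the sketch below is a minimal illustration of a cascade of this type, assuming a synchronous update in which a node fails once the number of functioning neighbours maintaining it drops below a threshold (for simplicity only the nodes, not the edges, carry state here). The graph generator, threshold and initial failure rate are assumptions, not the paper's dynamics.

```python
import random

# Minimal illustration (not the paper's exact dynamics): nodes on a random
# graph of mean degree c fail when the "maintenance" they receive, proxied
# here by the number of functioning neighbours, drops below a threshold.
def random_graph(n, c, rng):
    # Erdos-Renyi graph with mean degree c, as an adjacency list.
    adj = [[] for _ in range(n)]
    p = c / (n - 1)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def simulate(n=1000, c=4.0, theta=2, f_dead=0.05, steps=50, seed=0):
    rng = random.Random(seed)
    adj = random_graph(n, c, rng)
    # Two-state variables: True = functioning. Seed a small initial failure.
    state = [rng.random() > f_dead for _ in range(n)]
    for _ in range(steps):
        support = [sum(state[j] for j in adj[i]) for i in range(n)]
        new_state = [state[i] and support[i] >= theta for i in range(n)]
        if new_state == state:          # fixed point reached
            break
        state = new_state
    return sum(state) / n               # surviving fraction

print(simulate())  # small theta: resilient; large theta: blackout cascade
```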
Abstract:
This thesis presents an analysis of the stability of complex distribution networks. We present a stability analysis against cascading failures. We propose a spin (binary) model, based on concepts of statistical mechanics. We test macroscopic properties of distribution networks with respect to various topological structures and distributions of microparameters. The equilibrium properties of the systems are obtained in a statistical mechanics framework by application of the replica method. We demonstrate the validity of our approach by comparing it with Monte Carlo simulations. We analyse the network properties in terms of phase diagrams and find both qualitative and quantitative dependence of the network properties on the network structure and macroparameters. The structure of the phase diagrams points to the existence of phase transitions and the presence of stable and metastable states in the system. We also present an analysis of robustness against overloading in distribution networks. We propose a model that describes a distribution process in a network. The model incorporates the currents between any connected hubs in the network, local constraints in the form of Kirchhoff's law and a global optimisation criterion. The flow of currents in the system is driven by the consumption. We study two principal types of model: with infinite and with finite link capacity. The key properties are the distributions of currents in the system. We again use a statistical mechanics framework to describe the currents in terms of macroscopic parameters. In order to obtain observable properties we apply the replica method. We are able to assess the criticality of the level of demand with respect to the available resources and the architecture of the network. Furthermore, the parts of the system where critical currents may emerge can be identified. This, in turn, provides us with a characteristic description of the spread of overloading in the system.
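The flow model in the second part resembles a resistor-network problem: with quadratic edge costs, minimising a global dissipation-like criterion subject to Kirchhoff's current law at each hub reduces to solving a graph Laplacian system (Thomson's principle). The toy topology, unit conductances and consumption pattern below are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Illustrative reduction (not the thesis model itself): with quadratic edge
# costs, minimising total dissipation subject to Kirchhoff's current law
# yields node potentials v solving L v = b.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy network of 4 hubs
n = 4
L = np.zeros((n, n))
for i, j in edges:                  # unit conductance on every link
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

b = np.array([1.0, -0.3, -0.3, -0.4])     # injection at hub 0, consumption elsewhere
v = np.linalg.lstsq(L, b, rcond=None)[0]  # Laplacian is singular; lstsq picks a solution

for i, j in edges:
    print(f"current on link {i}-{j}: {v[i] - v[j]:+.3f}")
```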
Abstract:
The total thermoplastics pipe market in western Europe is estimated at 900,000 metric tonnes for 1977 and is projected to grow to some 1.3 million tonnes of predominantly PVC and polyolefins pipe by 1985. By that time, polyethylene for gas distribution pipe and fittings will represent some 30% of the total polyethylene pipe market. The performance characteristics of a high density polyethylene are significantly influenced by both molecular weight and type of comonomer, the major influences being on the long-term hoop stress resistance and the environmental stress cracking resistance. Minor amounts of hexene-1 are more effective than comonomers lower in the homologous series, although there is some sacrifice of density-related properties. A synergistic improvement is obtained by combining molecular weight increase with copolymerisation. The long-term design strength of polyethylene copolymers can be determined from hoop stress measurements at elevated temperatures and, by means of a separation factor of approximately 22, extrapolated to room-temperature performance in a water environment. A polyethylene of black composition has a sufficiently improved performance over yellow pigmented pipe to cast doubt on the validity of internationally specifying yellow-coded pipe for gas distribution service. The chemical environment (condensate formation) that can exist in natural gas distribution networks has a deleterious effect on pipe performance, the reduction amounting to at least two decades in log time. Desorption of such condensate is very slow, and the influence of the more aggressive aromatic components is to lead to premature stress cracking. For natural gas distribution purposes, the design stress rating should be 39 kg/cm² for polyethylenes in the molecular weight range of 150,000-200,000 and 55 kg/cm² for higher molecular weight materials.
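These design stress ratings relate to allowable operating pressure through the standard thin-wall hoop stress (Barlow) relation; the pipe dimensions in the worked example below are hypothetical, chosen only to illustrate the arithmetic.

```latex
% Thin-wall hoop stress relation and its inversion for allowable pressure:
\sigma = \frac{P\,(D - t)}{2t}
\qquad\Longrightarrow\qquad
P = \frac{2t\,\sigma}{D - t}
% e.g. with hypothetical D = 125 mm, t = 11.4 mm and sigma = 39 kg/cm^2:
P = \frac{2 \times 11.4 \times 39}{125 - 11.4} \approx 7.8~\mathrm{kg/cm^2}
```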
Abstract:
This paper discusses the potential of reconfiguring distribution networks into islanded Microgrids to reduce network infrastructure reinforcement requirements and incorporate various dispersed energy resources. The major challenges are how to break down the network properly and the resultant changes to the protection and automation systems. A reconfiguration method based on the allocation of distributed generation resources, using a heuristic algorithm, is proposed to fulfil this purpose. Cost and reliability data are required for the next-stage task of realising a case study of a particular network.
Abstract:
This paper reports the potential benefits of dynamic thermal rating prediction for primary transformers within Project FALCON (Flexible Approaches to Low Carbon Optimised Networks), managed by Western Power Distribution (WPD). Details of the thermal modelling, parameter optimisation and results validation are presented, together with the asset and environmental data (measured and day/week-ahead forecast) used to determine dynamic ampacity. A detailed analysis of the ratings and benefits, and of the confidence with which dynamic ratings can be predicted, is presented. Investigating the effect of a sustained ONAN rating compared to a dynamic rating shows that there is scope to increase sustained ratings under ONAN operating conditions by up to 10% between December and March with a high degree of confidence. However, under high ambient temperature conditions the dynamic rating may also be reduced in the summer months.
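The paper's thermal model is not reproduced here; the sketch below shows the general shape of such a calculation using an IEC 60076-7-style exponential top-oil response, with every parameter value an assumption for illustration rather than a Project FALCON figure.

```python
import math

# Illustrative IEC 60076-7-style transformer thermal response (parameters
# are assumptions, not Project FALCON values). K is the per-unit load.
def top_oil_rise(K, d_theta_or=45.0, R=8.0, x=0.8):
    # Ultimate top-oil temperature rise over ambient at load K.
    return d_theta_or * ((1 + R * K**2) / (1 + R)) ** x

def hot_spot(K, t_hours, theta_amb, rise_init=20.0, tau=3.0, H_gr=25.0, y=1.6):
    # Exponential approach of the top-oil rise to its ultimate value,
    # plus a load-dependent hot-spot gradient term.
    d_oil_u = top_oil_rise(K)
    d_oil = rise_init + (d_oil_u - rise_init) * (1 - math.exp(-t_hours / tau))
    return theta_amb + d_oil + H_gr * K**y

def dynamic_ampacity(theta_amb, limit=98.0):
    # Dynamic ampacity: largest K keeping the hot spot under a design limit.
    K = 0.0
    while hot_spot(K + 0.01, t_hours=24.0, theta_amb=theta_amb) < limit:
        K += 0.01
    return K

print(f"winter (5 C):  K = {dynamic_ampacity(5.0):.2f}")   # more headroom
print(f"summer (30 C): K = {dynamic_ampacity(30.0):.2f}")  # reduced rating
```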
Abstract:
The realisation of an eventual low-voltage (LV) Smart Grid with a complete communication infrastructure is a gradual process. During this evolution the protection scheme of distribution networks should be continuously adapted and optimised to fit the protection and cost requirements of the time. This paper aims to review practices and research around the design of an effective, adaptive and economical distribution network protection scheme. The background of this topic is introduced and potential problems are defined from conventional protection theories and new Smart Grid technologies. Challenges are identified, with possible solutions defined as a pathway to ultimately flexible and reliable LV protection systems.
Abstract:
Conventional DEA models assume deterministic, precise and non-negative data for input and output observations. However, real applications may be characterized by observations that are given in the form of intervals and include negative numbers. For instance, the consumption of electricity in decentralized energy resources may be either negative or positive, depending on the heat consumption. Likewise, the heat losses in distribution networks may lie within a certain range, depending, for example, on external temperature and real-time offtake. Complementing earlier work that addressed the two problems of interval data and negative data separately, we propose a comprehensive evaluation process for measuring the relative efficiencies of a set of DMUs in DEA. In our general formulation, the intervals may contain upper or lower bounds with different signs. The proposed method determines upper and lower bounds for the technical efficiency through the limits of the intervals after decomposition. Based on the interval scores, DMUs are then classified into three classes, namely the strictly efficient, the weakly efficient and the inefficient. An intuitive ranking approach is presented for the respective classes. The approach is demonstrated through an application to the evaluation of bank branches. © 2013.
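The paper's interval and negative-data formulation is not reproduced here; as background, the sketch below solves the classical input-oriented CCR envelopment model, on which such extensions build, for an invented data set.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA (envelopment form) on a toy data set; the paper's
# interval/negative-data extensions build on this base model.
X = np.array([[20., 30., 40., 20.],     # inputs:  rows = inputs, cols = DMUs
              [300., 200., 100., 200.]])
Y = np.array([[100., 80., 90., 60.]])   # outputs: rows = outputs, cols = DMUs

def ccr_efficiency(o):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # minimise theta over (theta, lambda)
    A_in = np.hstack([-X[:, [o]], X])          # X @ lam <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # Y @ lam >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)     # theta, lambda >= 0 by default
    return res.x[0]

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```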
Abstract:
This paper looks at how automatic load transfer (ALT) may be used as a possible planning tool to help deliver faster connections for customers. A trial on an area of overhead-line network is presented to show how improvements in percentage feeder utilisation may be realised by changing the location of the open point. The reported network data are compared with calculated data under two different configurations over a two-week trial period. The results show that ALT open-point determination in the presence of generation differs from that for a load-only circuit, and that the open points may not be fixed in time. Improving network headroom may not be conducive to other network improvements, such as loss reduction or improved voltage profiles.
Abstract:
This paper describes the potential of pre-setting 11 kV overhead-line ratings over a time period of sufficient length to be useful for the real-time management of overhead lines. The pre-set rating is based on freely available short- and long-term weather forecasts and is used to help investigate the potential for realising dynamic rating benefits on the electricity network. A comparison of the rating benefits realisable using this forecast data has been undertaken over the period of a year.
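The paper's rating methodology is not given in the abstract; the sketch below only illustrates why weather forecasts matter, via a crude steady-state conductor heat balance. The convection model h = 4 + 4v and all parameter values are rough assumptions for illustration, not IEEE 738 correlations or the paper's figures.

```python
import math

# Crude steady-state heat-balance rating for an overhead conductor:
# I^2 R = convective loss + radiative loss - solar gain.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

def ampacity(T_amb, wind, D=0.02, T_max=75.0, eps=0.8, alpha_s=0.8,
             solar=900.0, R_ac=3e-4):
    Ts, Ta = T_max + 273.15, T_amb + 273.15
    q_r = math.pi * D * eps * SIGMA * (Ts**4 - Ta**4)    # radiative loss, W/m
    h = 4.0 + 4.0 * wind                                 # assumed h(v), W/(m^2 K)
    q_c = h * math.pi * D * (T_max - T_amb)              # convective loss, W/m
    q_s = alpha_s * D * solar                            # solar gain, W/m
    return math.sqrt(max(q_c + q_r - q_s, 0.0) / R_ac)

# A forecast of cooler, windier weather permits a higher pre-set rating:
print(f"hot, still : {ampacity(30.0, 0.5):.0f} A")
print(f"cool, windy: {ampacity(10.0, 5.0):.0f} A")
```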
Abstract:
Over the last decade, water utility companies have aimed to make water distribution networks more intelligent by incorporating IoT technologies, in order to improve their quality of service, reduce water waste, minimize maintenance costs, etc. Current state-of-the-art solutions use expensive, power-hungry deployments to monitor and transmit water network states periodically in order to detect anomalous behaviors such as water leakage and bursts. However, more than 97% of water network assets are away from mains power and are often in geographically remote, underpopulated areas, facts that make current approaches unsuitable for next-generation, more dynamic, adaptive water networks. Battery-driven wireless sensor/actuator based solutions are theoretically the perfect choice to support next-generation water distribution. In this paper, we present an end-to-end water leak localization system, which exploits edge processing and enables the use of battery-driven sensor nodes. Our system combines a lightweight edge anomaly detection algorithm based on compression rates with an efficient localization algorithm based on graph theory. The edge anomaly detection and localization elements of the system produce a timely and accurate localization result and reduce communication by 99% compared to traditional periodic communication. We evaluated our schemes by deploying non-intrusive sensors measuring vibrational data on a real-world water test rig on which controlled leakage and burst scenarios were implemented.
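The paper's exact algorithm is not reproduced here; the sketch below illustrates the general idea of compression-rate anomaly detection on sensor windows: regular background vibration compresses well, while anomalous bursts shift the compression rate. The window size, threshold and synthetic signal are assumptions.

```python
import zlib
import random

# Illustrative compression-rate anomaly detector: compare each window's
# compression rate against a baseline; a shift suggests anomalous vibration.
def compression_rate(samples):
    raw = bytes(max(0, min(255, s)) for s in samples)
    return len(zlib.compress(raw)) / len(raw)

def detect(windows, baseline_rate, tol=0.15):
    # Flag windows whose compression rate deviates from the baseline.
    return [abs(compression_rate(w) - baseline_rate) > tol for w in windows]

rng = random.Random(1)
quiet = [[128 + rng.randint(-2, 2) for _ in range(256)] for _ in range(5)]
burst = [[128 + rng.randint(-60, 60) for _ in range(256)] for _ in range(2)]

baseline = compression_rate(quiet[0])
print(detect(quiet[1:] + burst, baseline))  # expect [False]*4 + [True]*2
```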
Abstract:
The contributions of this research are split into three distinct but related areas. The focus of the work is on improving the efficiency of video content distribution in networks that are liable to packet loss, such as the Internet. Initially, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the requirement for TCP to deal with any losses is greatly reduced. For real-time applications, delay must be kept to a minimum and retransmissions are not desirable; a balance must therefore be struck between additional bandwidth and delays due to retransmissions. This is followed by the proposal of a hybrid transport, specifically for H.264 encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. The delay associated with the proposal is measured, and the potential for use as an alternative to the conventional methods of transporting video by either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported using TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated, whilst buffer behaviour is monitored to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the metric introduced and show that the objective and subjective scores are closely correlated.
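As a rough illustration of the continuity idea (not the thesis's exact pause-intensity definition), the sketch below replays a packet-arrival trace against a constant playout rate, records stalls, and combines stall duration and frequency into a single score; the trace and the combination rule are assumptions.

```python
# Rough illustration of a continuity/pause metric (not the thesis's exact
# definition): replay arrivals against a constant playout rate and record
# buffer-underrun stalls.
def playback_pauses(arrival_times, playout_interval=0.04, startup=5):
    clock, pauses = 0.0, []
    for i, t in enumerate(arrival_times):
        if i < startup:
            clock = max(clock, t)           # initial buffering
            continue
        due = clock + playout_interval      # when this frame should play
        if t > due + 1e-9:                  # underrun (tolerance for float error)
            pauses.append(t - due)
            clock = t
        else:
            clock = due
    return pauses

def pause_intensity(pauses, duration):
    # One plausible score: fraction of time paused, weighted by pause count.
    if not pauses:
        return 0.0
    return (sum(pauses) / duration) * len(pauses)

trace = [0.04 * i for i in range(100)]
for i in range(50, 100):                    # a 1-second delivery gap at frame 50
    trace[i] += 1.0
p = playback_pauses(trace)
print(p, pause_intensity(p, trace[-1]))
```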
Abstract:
In this letter, we propose an analytical approach to model uplink intercell interference (ICI) in hexagonal-grid-based orthogonal frequency division multiple access (OFDMA) cellular networks. The key idea is that the uplink ICI from individual cells is approximated with a lognormal distribution whose statistical parameters are determined analytically. Accordingly, the aggregated uplink ICI is approximated with another lognormal distribution, and its statistical parameters can be determined from those of the individual cells using the Fenton-Wilkinson method. Analytic expressions of uplink ICI are derived for two traditional frequency reuse schemes, namely integer frequency reuse schemes with factor 1 (IFR-1) and factor 3 (IFR-3). Uplink fractional power control and lognormal shadowing are modeled. System performance in terms of signal to interference plus noise ratio (SINR) and spectrum efficiency is also derived. The proposed model has been validated by simulations. © 2013 IEEE.
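For reference, the Fenton-Wilkinson step matches the first two moments of a sum of independent lognormals $X_i \sim \mathrm{Lognormal}(\mu_i, \sigma_i^2)$ to a single lognormal $Z \approx \sum_i X_i$:

```latex
% First two moments of the sum of independent lognormals:
m_1 = \sum_i e^{\mu_i + \sigma_i^2/2},
\qquad
m_2 = m_1^2 + \sum_i \left(e^{\sigma_i^2} - 1\right) e^{2\mu_i + \sigma_i^2},
% matched to a single lognormal Z with parameters:
\sigma_Z^2 = \ln\!\left(\frac{m_2}{m_1^2}\right),
\qquad
\mu_Z = \ln m_1 - \frac{\sigma_Z^2}{2}.
```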
Abstract:
A framework that aims to best utilize mobile network resources for video applications is presented in this paper. The main contribution of the proposed work is a QoE-driven optimization method that can maintain a desired trade-off between fairness and efficiency in allocating resources, in terms of data rates, to video streaming users in LTE networks. This method is concerned with controlling the user satisfaction level from the point of view of service continuity, and applies appropriate QoE metrics (pause intensity and its variations) to determine the scheduling strategies, in combination with the mechanisms used for adaptive video streaming such as 3GP/MPEG-DASH. The superiority of the proposed algorithms is demonstrated, showing how the resources of a mobile network can be optimally utilized by using quantifiable QoE measurements. This approach can also find the best match between demand and supply in the process of network resource distribution.
Abstract:
Minimization of a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics.
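For reference, with Gaussian kernels the Mixture Density Network models the conditional density as a mixture whose parameters are outputs of the network, trained by minimising the negative log-likelihood:

```latex
p(\mathbf{t} \mid \mathbf{x})
= \sum_{k=1}^{K} \alpha_k(\mathbf{x})\,
  \mathcal{N}\!\left(\mathbf{t} \mid \boldsymbol{\mu}_k(\mathbf{x}),
  \sigma_k^2(\mathbf{x})\right),
\qquad
\sum_{k=1}^{K} \alpha_k(\mathbf{x}) = 1,
% with training error (negative log-likelihood over the data set):
E = -\sum_{n} \ln \sum_{k=1}^{K} \alpha_k(\mathbf{x}^n)\,
    \mathcal{N}\!\left(\mathbf{t}^n \mid \boldsymbol{\mu}_k(\mathbf{x}^n),
    \sigma_k^2(\mathbf{x}^n)\right).
```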
Abstract:
Mixture Density Networks are a principled method to model conditional probability density functions which are non-Gaussian. This is achieved by modelling the conditional distribution for each pattern with a Gaussian mixture model whose parameters are generated by a neural network. This thesis presents a novel method to introduce regularisation in this context for the special case where the means and variances of the spherical Gaussian kernels in the mixtures are fixed to predetermined values. Guidelines for how these parameters can be initialised are given, and it is shown how to apply the evidence framework to Mixture Density Networks to achieve regularisation. This also provides an objective stopping criterion that can replace the 'early stopping' methods that have previously been used. If the neural network used is an RBF network with fixed centres, this opens up new opportunities for improved initialisation of the network weights, which are exploited to start training relatively close to the optimum. The new method is demonstrated on two data sets. The first is a simple synthetic data set, while the second is a real-life data set, namely satellite scatterometer data used to infer the wind speed and wind direction near the ocean surface. For both data sets the regularisation method performs well in comparison with earlier published results. Ideas on how the constraint on the kernels may be relaxed to allow fully adaptable kernels are presented.
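With the kernel means and variances fixed, only the mixing coefficients depend on the network weights, and the evidence-framework regularisation amounts to adding a weight penalty to the mixture negative log-likelihood, with the coefficient re-estimated from the evidence rather than set by early stopping. Schematically (a sketch of the general form, not the thesis's notation):

```latex
% Regularised error: mixture NLL with fixed kernels phi_k plus weight penalty,
% with lambda re-estimated via the evidence framework.
S(\mathbf{w}) = -\sum_{n} \ln \sum_{k=1}^{K}
  \alpha_k(\mathbf{x}^n; \mathbf{w})\,
  \phi_k(\mathbf{t}^n)
  \;+\; \frac{\lambda}{2}\,\mathbf{w}^{\top}\mathbf{w}.
```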