176 results for Distribution network reconfiguration problem
Abstract:
With the rapid increase in electrical energy demand, power generation in the form of distributed generation is becoming more important. However, connecting distributed generators (DGs) to a distribution network or a microgrid can create several protection issues. Protecting these networks with devices based only on current is challenging because both the fault current level and the fault current direction change. Isolating a faulted segment from such networks is difficult when converter-interfaced DGs are connected, because these DGs limit their output currents during a fault. Furthermore, if the DG sources are intermittent, current-sensing protective relays are difficult to set, since the fault current varies with time depending on the availability of the DG sources. System restoration after a fault is another challenging protection issue in a distribution network or microgrid with converter-interfaced DGs. Usually, all DGs are disconnected immediately after a fault in the network; the safety of personnel and equipment, reclosing with DGs connected, and arc extinction are the main reasons for these disconnections. In this thesis, an inverse time admittance (ITA) relay is proposed to protect a distribution network or microgrid with several converter-interfaced DG connections. The ITA relay can detect faults and isolate a faulted segment from the network, allowing the unfaulted segments to continue operating in either grid-connected or islanded mode. The relay does not base its tripping decision on the fault current alone; it also uses the voltage at the relay location. The ITA relay can therefore be used effectively in a DG-connected network where the fault current level is low or changes with time. Different case studies are considered to evaluate the performance of ITA relays in comparison with some existing protection schemes.
The relay performance is evaluated in different types of distribution networks: a radial feeder, the IEEE 34 node test feeder and a mesh network. The results are validated through PSCAD simulations and MATLAB calculations. Several experimental tests are carried out on a laboratory test feeder, with the ITA relay implemented in LabVIEW, to validate the numerical results. Furthermore, a novel control strategy based on fold-back current control is proposed for converter-interfaced DGs to overcome the problems associated with system restoration. The control strategy enables self-extinction of the arc if the fault is a temporary arc fault, and it enables automatic system restoration if the DG capacity is sufficient to supply the load. Coordination with reclosers without disconnecting the DGs from the network is also discussed. This increases network reliability by reducing customer outages.
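The inverse-time admittance idea can be illustrated with a generic characteristic: the trip time falls as the measured admittance rises further above a pickup setting, so a close-in fault trips faster than a remote or high-impedance one. The characteristic shape and the constants `k`, `alpha` and `y_set` below are illustrative assumptions for the sketch, not the relay settings developed in the thesis.

```python
def ita_trip_time(y_measured, y_set, k=0.2, alpha=1.0):
    """Generic inverse-time characteristic on the measured admittance.

    y_measured: admittance magnitude seen by the relay (a fault raises it).
    y_set: pickup admittance setting for the protected segment.
    Returns a trip time in seconds, or None if the relay does not pick up.
    The constants k and alpha are illustrative, not the thesis's settings.
    """
    ratio = y_measured / y_set
    if ratio <= 1.0:
        return None            # admittance below pickup: no trip
    return k / (ratio ** alpha - 1.0)

# A closer (higher-admittance) fault trips faster than a remote one.
near = ita_trip_time(10.0, 1.0)   # severe, close-in fault
far = ita_trip_time(2.0, 1.0)     # remote / high-impedance fault
```

Because the decision variable is admittance rather than current alone, the same characteristic keeps working when converter-interfaced DGs cap the fault current.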
Abstract:
An iterative strategy is proposed for finding the optimal rating and location of fixed and switched capacitors in distribution networks. The tap of the substation Load Tap Changer is also set during this procedure. A Modified Discrete Particle Swarm Optimization is employed in the proposed strategy. The objective function comprises the cost of distribution line losses and the capacitor investment cost. The line loss is calculated by approximating the load duration curve with multiple discrete load levels. The constraints are the bus voltages and feeder currents, which must be kept within their standard ranges. To validate the proposed method, two case studies are tested. The first is the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System, on the secondary side of a 33/11 kV distribution substation. The second is a 33 kV distribution network based on a modification of the 18-bus IEEE distribution system. The results are compared with prior publications to illustrate the accuracy of the proposed strategy.
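The shape of the objective — an energy-loss cost computed from a load duration curve discretized into levels, plus the capacitor investment cost — can be sketched as follows. The load levels, durations, prices, base loss and the quadratic loss scaling are all illustrative assumptions, not values from the paper.

```python
# Discretized load duration curve: (load level in per-unit, hours per year).
# Levels and durations are illustrative; they sum to 8760 hours.
LDC = [(1.00, 500), (0.75, 2500), (0.50, 3500), (0.25, 2260)]

ENERGY_PRICE = 0.06          # $/kWh, assumed
BASE_LOSS_KW = 120.0         # feeder loss at peak load, assumed

def annual_loss_cost(ldc, base_loss_kw, price, loss_reduction=0.0):
    """Annual energy-loss cost from an LDC discretized into levels.

    Line losses scale roughly with the square of loading; loss_reduction
    models the fractional loss cut achieved by compensation (illustrative).
    """
    cost = 0.0
    for level, hours in ldc:
        loss_kw = base_loss_kw * level ** 2 * (1.0 - loss_reduction)
        cost += loss_kw * hours * price
    return cost

def objective(loss_reduction, capacitor_cost):
    # Total cost = loss cost + investment cost, mirroring the paper's
    # two-term objective; the optimizer searches over capacitor choices.
    return annual_loss_cost(LDC, BASE_LOSS_KW, ENERGY_PRICE, loss_reduction) + capacitor_cost
```

A discrete PSO would then search over capacitor ratings and locations, each candidate scored by this kind of objective subject to the voltage and current constraints.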
Abstract:
Energy policy is driving renewable energy deployment, with most developed countries having some form of renewable energy portfolio standard and an emissions reduction target. To deliver on these ambitious targets, the renewable energy technologies that are commercially available, such as wind and solar, are being deployed, but these inherently suffer from intermittency of supply. To overcome this, storage options will need to be introduced into the distribution network, with benefits for both demand management and power quality. Determining how storage can be utilised most effectively within the distribution network will allow an even greater proportion of our energy demand to be met from renewable resources, helping to meet the aspirational targets that have been set. The distribution network will become a network of smart grids, but for it to work efficiently and effectively, the power quality issues surrounding intermittency must be overcome, with storage a major part of the solution.
Abstract:
This paper presents an efficient algorithm for optimizing the operation of battery storage in a low voltage distribution network with a high penetration of PV generation. A predictive control solution is presented that uses wavelet neural networks to predict the load and PV generation at hourly intervals for twelve hours into the future. The load and generation forecasts, together with the previous twelve hours of load and generation history, are used to assemble a daily load profile. A diurnal charging profile can be compactly represented by a vector of Fourier coefficients, allowing a direct search optimization algorithm to be applied. The optimal profile is updated hourly, allowing the state-of-charge profile to respond to changing load forecasts.
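The profile-encoding step can be sketched: a 24-hour battery power profile is reconstructed from a short vector of Fourier coefficients and tuned by a simple direct (compass) search. The hourly load and PV shapes and the toy objective (flattening the net load) below are illustrative assumptions, not the paper's wavelet forecasts or cost model.

```python
import math

HOURS = 24
# Illustrative hourly load and PV profiles (kW); not data from the paper.
LOAD = [3 + 2 * math.sin(math.pi * (h - 14) / 12) ** 2 for h in range(HOURS)]
PV = [max(0.0, 4 * math.sin(math.pi * (h - 6) / 12)) for h in range(HOURS)]

def profile_from_fourier(coeffs):
    """Battery power profile (kW, positive = charging) from a coefficient
    vector [a0, a1, b1, a2, b2, ...]: a truncated Fourier series over 24 h."""
    a0, rest = coeffs[0], coeffs[1:]
    out = []
    for h in range(HOURS):
        p = a0
        for k in range(len(rest) // 2):
            w = 2 * math.pi * (k + 1) * h / HOURS
            p += rest[2 * k] * math.cos(w) + rest[2 * k + 1] * math.sin(w)
        out.append(p)
    return out

def cost(coeffs):
    """Toy objective: variance of the net feeder load (flatter is better)."""
    batt = profile_from_fourier(coeffs)
    net = [LOAD[h] - PV[h] + batt[h] for h in range(HOURS)]
    mean = sum(net) / HOURS
    return sum((x - mean) ** 2 for x in net)

def direct_search(x, step=1.0, shrink=0.5, iters=60):
    """Simple compass/direct search: probe +/-step on each coefficient,
    keep improvements, shrink the step when a full sweep fails."""
    best = cost(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            step *= shrink
    return x, best

coeffs0 = [0.0] * 5                  # a0, a1, b1, a2, b2
coeffs, final = direct_search(coeffs0)
```

The compact coefficient vector is what makes a derivative-free direct search practical: the optimizer works in 5 dimensions instead of 24.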
Abstract:
In the Bayesian framework, a standard approach to model criticism is to compare some function of the observed data to a reference predictive distribution. The result of the comparison can be summarized in the form of a p-value, and it is well known that computing some kinds of Bayesian predictive p-values can be challenging. The use of regression adjustment approximate Bayesian computation (ABC) methods is explored for this task. Two problems are considered. The first is the calibration of posterior predictive p-values so that they are uniformly distributed under some reference distribution for the data. Computation is difficult because the calibration process requires repeated approximation of the posterior for different data sets drawn under the reference distribution. The second problem is the approximation of distributions of prior predictive p-values for the purpose of choosing weakly informative priors when the model checking statistic is expensive to compute. Here the computation is difficult because of the need to repeatedly sample from a prior predictive distribution for different values of a prior hyperparameter. In both problems we argue that high accuracy in the computations is not required, which makes fast approximations such as regression adjustment ABC very useful. We illustrate our methods with several examples.
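Regression adjustment ABC can be sketched on a toy Gaussian-mean problem: a rejection step keeps the prior draws whose simulated summaries land nearest the observed summary, then a linear regression of parameter on summary shifts each accepted draw to where the fit predicts it at the observed summary. All the numbers below (prior, sample size, acceptance count) are illustrative, not from the paper.

```python
import random
random.seed(1)

N_OBS, TRUE_THETA = 20, 2.0
y_obs = [random.gauss(TRUE_THETA, 1.0) for _ in range(N_OBS)]
s_obs = sum(y_obs) / N_OBS           # observed summary statistic (mean)

def simulate_summary(theta):
    """Summary statistic of a synthetic data set drawn given theta."""
    return sum(random.gauss(theta, 1.0) for _ in range(N_OBS)) / N_OBS

# 1. Rejection step: keep the draws whose summaries land nearest s_obs.
draws = []
for _ in range(5000):
    t = random.gauss(0.0, 5.0)       # draw theta from a vague prior
    draws.append((t, simulate_summary(t)))
draws.sort(key=lambda p: abs(p[1] - s_obs))
accepted = draws[:500]

# 2. Regression adjustment: fit theta ~ s on the accepted draws, then
#    shift each accepted theta along the fitted line to s = s_obs.
ts = [t for t, _ in accepted]
ss = [s for _, s in accepted]
mt, ms = sum(ts) / len(ts), sum(ss) / len(ss)
beta = (sum((s - ms) * (t - mt) for t, s in accepted)
        / sum((s - ms) ** 2 for s in ss))
adjusted = [t - beta * (s - s_obs) for t, s in accepted]
post_mean = sum(adjusted) / len(adjusted)
```

The adjustment lets a fairly loose tolerance (here, the 500 nearest of 5000 draws) still produce a usable posterior approximation, which is exactly why it is attractive when the procedure must be repeated for many data sets or hyperparameter values.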
Abstract:
In November 2010, tension between Internet infrastructure companies boiled over in a dispute between content distribution network (CDN) Level 3 and Internet service provider (ISP) Comcast. Level 3, a distribution partner of Netflix, accused Comcast of violating the principles of net neutrality when the ISP increased distribution fees for carrying high bandwidth services. Comcast justified its actions by stating that the price increase was standard practice and argued Level 3 was trying to avoid paying its fair share. The dispute exemplifies the growing concern over the rising costs of streaming media services. The companies facing these inflated infrastructure costs are CDNs (Level 3, Equinix, Limelight, Akamai, and Voxel), companies that host streaming media content on server farms and distribute the content to a variety of carriers, and ISPs (Comcast, Time Warner, Cox, and AT&T), the cable and phone companies that provide “last mile” service to paying customers. Both CDNs and ISPs are lobbying government regulators to keep their costs at a minimum. The outcome of these disputes will influence the cost, quality, and legal status of streaming media.
Abstract:
Cool roof coatings have a beneficial impact in reducing the heat load of a range of building types, resulting in reduced cooling energy loads. This study seeks to understand the extent to which cool roof coatings could be used as a residential demand side management (DSM) strategy for retrofitting existing housing in a constrained network area in tropical Australia, where peak electrical demand is heavily influenced by residential cooling loads. In particular, this study seeks to determine whether simulation software used for building regulation purposes can provide networks with the ‘impact certainty’ required by their DSM principles. The building simulation method is supported by a field experiment. Both numerical and experimental data confirm reductions in total consumption (kWh) and energy demand (kW). The nature of the regulated simulation software, combined with the diversity of residential buildings and their patterns of occupancy, however, means that simulated results cannot be extrapolated to quantify benefits to a broader distribution network. The study suggests that building data gained from regulatory simulations could be a useful guide to the potential impacts of widespread application of cool roof coatings in this region. The practical realization of these positive impacts, however, would require changes to the current business model for the evaluation of DSM strategies. The study provides seven key recommendations that encourage distribution networks to think beyond their infrastructure boundaries, recognising that the broader energy system also includes buildings, appliances and people.
Abstract:
This paper presents a reliability-based reconfiguration methodology for power distribution systems. Probabilistic reliability models of the system components are considered, and the Monte Carlo method is used to evaluate the reliability of the distribution system. The reconfiguration aims to maximize the reliability of the power supplied to the customers. A binary particle swarm optimization (BPSO) algorithm is used to determine the optimal configuration of the sectionalizing and tie switches in the system. The proposed methodology is applied to a modified IEEE 13-bus distribution system.
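A binary PSO over switch states can be sketched as follows. Each particle is a bit-vector of switch positions; velocities are updated towards personal and global bests, and a sigmoid maps each velocity to the probability of the bit being 1. The reliability evaluation is replaced here by a toy fitness (agreement with a known target configuration) purely for illustration — the paper scores each configuration by Monte Carlo reliability simulation, which is too heavy to reproduce in a sketch.

```python
import math
import random
random.seed(7)

N_SWITCHES, N_PARTICLES, ITERS = 8, 12, 60
# Toy stand-in for the reliability evaluation: a known target state.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(bits):
    """Higher is better; counts switch states matching the target."""
    return sum(b == t for b, t in zip(bits, TARGET))

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

swarm = [[random.randint(0, 1) for _ in range(N_SWITCHES)] for _ in range(N_PARTICLES)]
vel = [[0.0] * N_SWITCHES for _ in range(N_PARTICLES)]
pbest = [list(p) for p in swarm]
gbest = list(max(swarm, key=fitness))

for _ in range(ITERS):
    for i, p in enumerate(swarm):
        for d in range(N_SWITCHES):
            r1, r2 = random.random(), random.random()
            # Standard BPSO velocity update with cognitive and social pulls.
            vel[i][d] += 2.0 * r1 * (pbest[i][d] - p[d]) + 2.0 * r2 * (gbest[d] - p[d])
            vel[i][d] = max(-4.0, min(4.0, vel[i][d]))   # clamp velocity
            # Sigmoid of the velocity gives the probability the bit is 1.
            p[d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
        if fitness(p) > fitness(pbest[i]):
            pbest[i] = list(p)
        if fitness(p) > fitness(gbest):
            gbest = list(p)
```

In a real reconfiguration run, candidate configurations that violate radiality or leave load unserved would also be penalized or repaired before scoring.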
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
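The role of the size-based penalty term is easy to see numerically: it shrinks with more training patterns m or a smaller weight bound A, independently of the number of weights. A small illustration (constant factors and the log A, log m terms are ignored, as in the statement above):

```python
import math

def size_penalty(A, n, m):
    """The size-based term A**3 * sqrt(log(n) / m) from the bound,
    with the log A and log m factors ignored as in the abstract."""
    return A ** 3 * math.sqrt(math.log(n) / m)

# The penalty depends on the weight bound A and sample count m,
# not on the number of weights in the network.
loose = size_penalty(A=2.0, n=100, m=1_000)      # few samples
tight = size_penalty(A=2.0, n=100, m=100_000)    # 100x more samples
small_w = size_penalty(A=1.0, n=100, m=1_000)    # smaller weights instead
```

This is why weight decay and early stopping (which keep A small) can substitute for gathering more data in the bound.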
Abstract:
Voltage drop and rise at network peak and off-peak periods, along with voltage unbalance, are the major power quality problems in low voltage distribution networks. Utilities usually address voltage drop by adjusting transformer tap changers, and address voltage unbalance by trying to distribute the loads equally across the phases. Meanwhile, ever-increasing energy demand, together with the need for cost reduction and higher reliability, is driving modern power systems towards Distributed Generation (DG) units, in the form of small rooftop photovoltaic (PV) systems, Plug-in Electric Vehicles (PEVs) or Microgrids (MGs). Rooftop PV systems, typically rated between 1 and 5 kW and installed by householders, are gaining popularity due to their financial benefits. PEVs will also soon emerge in residential distribution networks; they behave as large residential loads while being charged, and later generations are expected to support the network as small DG units that transfer the energy stored in their batteries back into the grid. Furthermore, the MG, a cluster of loads and several DG units such as diesel generators, PVs, fuel cells and batteries, has recently been introduced to distribution networks. Voltage unbalance in the network can increase due to uncertainty in the connection points of PVs and PEVs, their nominal capacities and their times of operation. It is therefore of high interest to investigate voltage unbalance in these networks as a result of MG, PV and PEV integration into low voltage networks. In addition, the network might experience non-standard voltage drop due to high penetration of PEVs charging at night, or non-standard voltage rise due to high penetration of PVs and PEVs feeding electricity back into the grid during off-peak periods.
In this thesis, a voltage unbalance sensitivity analysis and stochastic evaluation are carried out for householder-installed PVs with respect to their installation point, nominal capacity and penetration level as the uncertain parameters. A similar analysis is carried out for PEV penetration in the network in two different modes: grid-to-vehicle and vehicle-to-grid. Conventional methods for improving voltage unbalance in these networks are then discussed, followed by new and efficient methods for improving the voltage profile at network peak and off-peak periods and for reducing voltage unbalance. In addition, voltage unbalance reduction is investigated for MGs, and new improvement methods are proposed and applied to the MG test bed planned to be established at Queensland University of Technology (QUT). MATLAB and PSCAD/EMTDC simulation software are used to verify the analyses and proposals.
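The quantity at the heart of such an analysis, the voltage unbalance factor, is the standard ratio of negative- to positive-sequence voltage magnitudes obtained from the symmetrical-component transform. A minimal sketch follows; the 230 V figures and the depressed phase are illustrative, not data from the thesis.

```python
import cmath
import math

A = cmath.exp(2j * math.pi / 3)   # 120-degree rotation operator 'a'

def vuf_percent(va, vb, vc):
    """Voltage unbalance factor |V2| / |V1| * 100, from the
    symmetrical-component transform of the three phase voltages."""
    v1 = (va + A * vb + A**2 * vc) / 3    # positive-sequence component
    v2 = (va + A**2 * vb + A * vc) / 3    # negative-sequence component
    return abs(v2) / abs(v1) * 100.0

# Balanced 230 V positive-sequence set -> essentially 0 % unbalance.
bal = vuf_percent(230, 230 * A**2, 230 * A)
# One phase depressed (e.g. heavy single-phase PEV charging) -> unbalance rises.
unbal = vuf_percent(230, 215 * A**2, 230 * A)
```

Because single-phase PV inverters and PEV chargers connect to individual phases at uncertain points, it is this ratio that the stochastic evaluation tracks across penetration scenarios.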
Abstract:
In this paper a combined subtransmission and distribution reliability analysis of SEQEB’s outer suburban network is presented. The reliability analysis was carried out with a commercial software package that evaluates both energy and customer indices. Various reinforcement options were investigated to ascertain their impact on the reliability of supply seen by the customers. The customer and energy indices produced by the combined subtransmission and distribution reliability studies helped direct capital expenditure to the most effective areas of the network.
Abstract:
Key distribution is one of the most challenging security issues in wireless sensor networks, where sensor nodes are randomly scattered over a hostile territory. In such a deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide before deployment how to distribute key pairs to sensor nodes. Existing approaches assign more than one key, namely a key-chain, to each node, with key-chains drawn randomly from a key-pool. Either two neighboring nodes have a key in common in their key-chains, or there is a path, called a key-path, between the two nodes in which each pair of neighboring nodes on the path has a key in common. The problem in such a solution is to choose the key-chain size and key-pool size so that every pair of nodes can establish a session key, directly or through a path, with high probability. The length of the key-path is the key factor in the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on Combinatorial Design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools.
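One classic block design for this purpose is the symmetric design given by a projective plane of prime order q: it yields a key-pool of q² + q + 1 keys and key-chains of q + 1 keys such that any two chains share exactly one key, so every pair of nodes can establish a direct pairwise key (key-path length one). The paper considers several block design techniques; the construction below is a standard one used to illustrate the idea, not necessarily the paper's exact scheme.

```python
from itertools import combinations

def projective_plane_chains(q):
    """Key-chains from a projective plane of prime order q.

    Produces q*q + q + 1 key identifiers and the same number of chains,
    each of size q + 1, with any two chains sharing exactly one key.
    """
    # Points of the plane become key identifiers: affine points (x, y),
    # one point per slope m, and one point at infinity.
    points = [("p", x, y) for x in range(q) for y in range(q)]
    points += [("s", m) for m in range(q)] + [("inf",)]
    key_id = {pt: i for i, pt in enumerate(points)}

    chains = []
    for m in range(q):                      # lines y = m*x + c, plus slope point
        for c in range(q):
            line = [("p", x, (m * x + c) % q) for x in range(q)] + [("s", m)]
            chains.append({key_id[pt] for pt in line})
    for c in range(q):                      # vertical lines x = c
        chains.append({key_id[("p", c, y)] for y in range(q)} | {key_id[("inf",)]})
    # The line at infinity: all slope points plus the point at infinity.
    chains.append({key_id[("s", m)] for m in range(q)} | {key_id[("inf",)]})
    return chains

chains = projective_plane_chains(3)        # 13 keys, 13 chains of 4 keys each
```

The deterministic guarantee (every pair shares exactly one key) is what distinguishes design-based schemes from random key-pool drawing, at the cost of supporting at most q² + q + 1 nodes per plane.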
Abstract:
Current Bayesian network software packages provide good graphical interfaces for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard that provides an additional layer of abstraction, enabling end-users to easily perform inferences over Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on cause-and-effect relationships, making user interaction more intuitive and friendly. In addition to performing various types of inference, users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed in C++ using the Qt and SMILE libraries.
Abstract:
Background Diabetes foot complications are a leading cause of overall avoidable hospital admissions. Since 2006, the Queensland Diabetes Clinical Network has implemented programs aimed at reducing diabetes-related hospitalisation. The aim of this retrospective observational study was to determine the incidence of diabetes foot-related hospital admissions in Queensland from 2005 to 2010. Methods Data on all primary diabetes foot-related admissions in Queensland from 2005-2010 was obtained using diabetes foot-related ICD-10-AM (hospital discharge) codes. Queensland diabetes foot-related admission incidences were calculated using general population data from the Australian Bureau of Statistics. Furthermore, diabetes foot-related sub-group admissions were analysed. Chi-squared tests were used to assess changes in admissions over time. Results Overall, 24,917 diabetes foot-related admissions occurred, resulting in the use of 260,085 bed days or 1.4% of all available Queensland hospital bed days (18,352,152). The primary reasons for these admissions were foot ulcers (49.8%), cellulitis (20.7%), peripheral vascular disease (17.8%) and osteomyelitis (3.8%). The diabetes foot-related admission incidence among the general population (per 100,000) reduced by 22% (103.0 in 2005, to 80.7 in 2010, p < 0.001); bed days decreased by 18% (1,099 to 904, p < 0.001). Conclusion Diabetes foot complications appear to be the primary reason for 1.4 out of every 100 hospital beds used in Queensland. There has been a significant reduction in the incidence of diabetes foot-related admissions in Queensland between 2005 and 2010. This decrease has coincided with a corresponding decrease in amputations and the implementation of several diabetes foot clinical programs throughout Queensland.