47 results for electricity distribution networks
Abstract:
The dynamics of Boolean networks (BN) with quenched disorder and thermal noise is studied via the generating functional method. A general formulation, suitable for BN with any distribution of Boolean functions, is developed. It provides exact solutions and insight into the evolution of order parameters and properties of the stationary states, which are inaccessible via existing methodology. We identify cases where the commonly used annealed approximation is valid and others where it breaks down. Broader links between BN and general Boolean formulas are highlighted.
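A minimal sketch of the kind of system the abstract studies: a random Boolean network with quenched disorder (fixed random wiring and random truth tables) updated synchronously under thermal noise. All sizes and rates here are illustrative, and the simulation stands in for, rather than reproduces, the generating functional analysis.

```python
import random

def random_bn(n, k, rng):
    """Quenched disorder: fixed random wiring (k inputs per node) and a fixed
    random Boolean function (truth table) per node."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables, noise, rng):
    """Synchronous update; thermal noise flips each output with prob `noise`."""
    new = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]
        out = tables[i][idx]
        if rng.random() < noise:
            out ^= 1
        new.append(out)
    return new

rng = random.Random(0)
n, k = 100, 2
inputs, tables = random_bn(n, k, rng)
state = [rng.randint(0, 1) for _ in range(n)]
for _ in range(50):
    state = step(state, inputs, tables, noise=0.05, rng=rng)
activity = sum(state) / n  # fraction of nodes "on" after 50 noisy updates
```

Averaging quantities like `activity` over many quenched realisations is what the annealed approximation shortcuts; the generating functional method instead treats the fixed disorder exactly.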
Abstract:
In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analyzing network failures caused by hardware faults or overload, where the network reaction was modeled as rerouting of traffic away from failed or congested elements. Here we model another type of network reaction to congestion: a sharp reduction of the input traffic rate through congested routes, which occurs on much shorter time scales. We consider the onset of congestion in the Internet, where local mismatch between demand and capacity results in traffic losses, and show that it can be described as a phase transition characterized by strong non-Gaussian loss fluctuations at a mesoscopic time scale. The fluctuations, caused by noise in input traffic, are exacerbated by the heterogeneous nature of the network, manifested in a scale-free load distribution. They result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. © Copyright EPLA, 2012.
Abstract:
Erbium-doped fibre amplifiers (EDFAs) are a key technology for the design of all-optical communication systems and networks. The superiority of EDFAs lies in their negligible intermodulation distortion across high-speed multichannel signals, low intrinsic losses, slow gain dynamics, and gain across a wide range of optical wavelengths. Due to the long lifetime of the excited state, EDFAs cannot avoid the effect of cross-gain saturation. The time characteristics of the gain saturation and recovery effects are between a few hundred microseconds and 10 milliseconds. However, in wavelength division multiplexed (WDM) optical networks with EDFAs, the number of channels traversing an EDFA can change due to a faulty link of the network or system reconfiguration. It has been found that, due to the variation in channel number along the EDFA chain, the output powers of surviving channels can change in a very short time. Thus, the power transient is one of the problems deteriorating system performance. In this thesis, the transient phenomenon in wavelength-routed WDM optical networks with EDFA chains was investigated. The task was performed using different input signal powers for circuit-switched networks. A simulator for the EDFA gain dynamic model was developed to compute the magnitude and speed of the power transients in the non-self-saturated EDFA, both single and chained. The dynamic model of the self-saturated EDFA chain and its simulator were also developed to compute the magnitude and speed of the power transients and the optical signal-to-noise ratio (OSNR). We found that the OSNR transient magnitude and speed are a function of both the output power transient and the number of EDFAs in the chain. The OSNR value predicts the level of quality of service in the related network. It was found that the power transients for both self-saturated and non-self-saturated EDFAs are close in magnitude in the case of gain-saturated EDFA networks.
Moreover, the cross-gain saturation also degrades the performance of packet-switching networks due to varying traffic characteristics. The magnitude and speed of output power transients increase along the EDFA chain. An investigation was done on asynchronous transfer mode (ATM) or WDM Internet protocol (WDM-IP) traffic networks using different traffic patterns based on the Pareto and Poisson distributions. The simulator is used to examine the amount and speed of the power transients in Pareto- and Poisson-distributed traffic at different bit rates, with specific focus on 2.5 Gb/s. It was found from numerical and statistical analysis that the power swing increases if the time interval of the burst-ON/burst-OFF is long in the packet bursts. This is because the gain dynamics are fast during strong signal pulses or with long-duration pulses, which is due to the stimulated-emission avalanche depletion of the excited ions. Thus, an increase in output power level could lead to error bursts which affect the system performance.
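The channel-drop transient described above can be illustrated with a toy two-level rate equation for the excited-state fraction of the erbium ions: when input signal power falls (channels are dropped), stimulated de-excitation weakens and the population inversion, and hence the gain seen by surviving channels, rises. All rate constants below are hypothetical, chosen only to give a roughly 10 ms lifetime; this is not the thesis's actual EDFA model.

```python
def edfa_excited_fraction(r_pump, tau, c_sig, p_in, n2_0, t_end, dt):
    """Euler-integrate a toy two-level rate equation for the excited-state
    fraction n2:  dn2/dt = r_pump*(1 - n2) - n2/tau - c_sig*p_in*n2."""
    n2 = n2_0
    for _ in range(int(t_end / dt)):
        dn2 = r_pump * (1.0 - n2) - n2 / tau - c_sig * p_in * n2
        n2 += dt * dn2
    return n2

# Hypothetical numbers: pump rate 500/s, 10 ms upper-state lifetime, and a
# signal-induced de-excitation rate that halves when half the WDM channels drop.
tau = 10e-3
n2_full = edfa_excited_fraction(500.0, tau, 400.0, 1.0, 0.5, 10e-3, 1e-5)
n2_half = edfa_excited_fraction(500.0, tau, 400.0, 0.5, n2_full, 10e-3, 1e-5)
# n2 (and so the gain) settles higher after the drop: a power transient for
# the surviving channels, on the millisecond time scale set by tau.
```

Chaining several such stages, each amplifying the previous one's output excursion, is what makes the transient grow in magnitude and speed along the EDFA chain.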
Abstract:
This paper investigates vertical economies between generation and distribution of electric power, and horizontal economies between different types of power generation in the U.S. electric utility industry. Our quadratic cost function model includes three generation output measures (hydro, nuclear and fossil fuels), which allows us to analyze the effect that generation mix has on vertical economies. Our results provide (sample mean) estimates of vertical economies of 8.1% and horizontal economies of 5.4%. An extensive sensitivity analysis is used to show how the scope measures vary across alternative model specifications and firm types. © 2012 Blackwell Publishing Ltd and the Editorial Board of The Journal of Industrial Economics.
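The vertical-economies measure used in this literature compares the cost of joint versus separate production. A minimal sketch with a hypothetical quadratic cost function (the coefficients below are invented for illustration, not the paper's estimates):

```python
def cost(g, d):
    """Hypothetical quadratic cost function: generation output g,
    distribution output d (fixed cost + linear + quadratic + interaction)."""
    return 10 + 2 * g + 3 * d + 0.01 * g**2 + 0.02 * d**2 - 0.005 * g * d

def vertical_economies(g, d):
    """VE = [C(g,0) + C(0,d) - C(g,d)] / C(g,d); VE > 0 means joint
    production is cheaper than unbundled production."""
    return (cost(g, 0) + cost(0, d) - cost(g, d)) / cost(g, d)

ve = vertical_economies(100, 100)  # ≈ 0.079 for these illustrative numbers
```

The duplicated fixed cost and the negative g·d interaction (cost complementarity) are what drive VE above zero here; evaluating VE at different output mixes is how generation mix affects the measured vertical economies.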
Abstract:
In order to increase the capacity of the existing low voltage grid, one solution is to increase the nominal residential network voltage from 230 V to 300 V, which is easily accommodated within the voltage rating of existing infrastructure such as cabling. A power electronic AC-AC converter would then be used to step the voltage back down to 230 V at an individual property. Such equipment could also be used to provide power quality improvements on both the utility and customer side of the converter, depending on its topology. This paper provides an overview of a project which is looking at the development of such a device. The project is being carried out in collaboration with the local UK Distribution Network Operator (DNO).
Abstract:
Dynamic asset rating (DAR) is one of a number of techniques that could be used to facilitate low carbon electricity network operation. Previous work has looked at this technique from an asset perspective. This paper instead takes a network perspective, proposing a dynamic network rating (DNR) approach. The models available for use with DAR are discussed and compared using measured load and weather data from a trial network area within Milton Keynes in the central area of the U.K. This paper then uses the most appropriate model to investigate, through a network case study, the potential gains in dynamic rating compared to static rating for the different network assets: transformers, overhead lines, and cables. This will inform the network operator of the potential DNR gains on an 11-kV network with all assets present and highlight the limiting assets within each season.
Abstract:
A real-time adaptive resource allocation algorithm considering the end user's Quality of Experience (QoE) in the context of video streaming services is presented in this work. An objective no-reference quality metric, namely Pause Intensity (PI), is used to control the priority of resource allocation to users during the scheduling process. An online adjustment has been introduced to adaptively set the scheduler's parameters and maintain a desired trade-off between fairness and efficiency. The correlation between the data rates (i.e. video code rates) demanded by users and the data rates allocated by the scheduler is taken into account as well. The final allocated rates are determined based on the channel status, the distribution of PI values among users, and the scheduling policy adopted. Furthermore, since the user's capability varies as the environment conditions change, the rate adaptation mechanism for video streaming is considered and its interaction with the scheduling process under the same PI metric is studied. The feasibility of implementing this algorithm is examined and the result is compared with the most commonly used scheduling methods.
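The abstract does not reproduce the exact PI formula, so the sketch below uses a hypothetical stand-in that combines the two ingredients PI is built from, pause duration and pause frequency, and shows the scheduling idea: users with worse playback continuity get higher allocation priority.

```python
def pause_intensity(pause_durations, session_time):
    """Hypothetical stand-in for the PI metric: the fraction of session time
    spent paused, multiplied by the pause rate (pauses per unit time)."""
    total_pause = sum(pause_durations)
    return (total_pause / session_time) * (len(pause_durations) / session_time)

# Two illustrative users over a 60 s session: "a" pauses often and long,
# "b" pauses once briefly; the scheduler serves the worst-QoE user first.
users = {"a": [2.0, 1.0, 1.5], "b": [0.5]}
pi = {u: pause_intensity(p, 60.0) for u, p in users.items()}
priority = sorted(pi, key=pi.get, reverse=True)
```

In the full algorithm this PI-driven ordering is only one input: channel status and the fairness/efficiency parameter jointly determine the final allocated rates.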
Abstract:
Dynamic asset rating is one of a number of techniques that could be used to facilitate low carbon electricity network operation. This paper focusses on distribution-level transformer dynamic rating in this context. The models available for use with dynamic asset rating are discussed and compared using measured load and weather conditions from a trial network area within Milton Keynes. The paper then uses the most appropriate model to investigate, through simulation, the potential gains in dynamic rating compared to static rating under two transformer cooling methods, to understand the potential gain to the network operator.
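The core of transformer dynamic rating is a thermal model: the permissible load is whatever keeps the winding hot-spot temperature within its limit under the actual (rather than worst-case) ambient conditions. A minimal sketch in the style of the IEC 60076-7 exponent model follows; the parameter values are illustrative placeholders, not those of any model compared in the paper, and only the steady-state form is shown (real DAR uses the time-dependent equations).

```python
def hot_spot(k, amb, d_or=52.0, d_hr=26.0, r=5.0, x=0.8, y=1.3):
    """Steady-state hot-spot temperature (deg C) at per-unit load k:
    ambient + top-oil rise + hot-spot-over-oil rise. Parameters illustrative:
    d_or/d_hr = rated rises, r = loss ratio, x/y = cooling exponents."""
    return amb + d_or * ((1 + r * k**2) / (1 + r))**x + d_hr * k**y

def dynamic_rating(amb, limit=98.0):
    """Bisect for the largest per-unit load keeping the hot spot at `limit`."""
    lo, hi = 0.0, 2.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if hot_spot(mid, amb) < limit:
            lo = mid
        else:
            hi = mid
    return lo

k_cold = dynamic_rating(10.0)  # cool ambient: rating above nameplate (k > 1)
k_warm = dynamic_rating(30.0)  # warm ambient: rating below nameplate (k < 1)
```

Driving such a model with measured load and weather time series, as the trial-network study does, is what reveals the seasonal headroom over the static rating.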
Abstract:
We show theoretically and experimentally a mechanism behind the emergence of wide or bimodal protein distributions in biochemical networks with nonlinear input-output characteristics (the dose-response curve) and variability in protein abundance. Large cell-to-cell variation in the nonlinear dose-response characteristics can be beneficial to facilitate two distinct groups of response levels as opposed to a graded response. Under the circumstances that we quantify mathematically, the two distinct responses can coexist within a cellular population, leading to the emergence of a bimodal protein distribution. Using flow cytometry, we demonstrate the appearance of wide distributions in the hypoxia-inducible factor-mediated response network in HCT116 cells. With the help of our theoretical framework, we perform a novel calculation of the magnitude of cell-to-cell heterogeneity in the dose-response obtained experimentally. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
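The mechanism can be illustrated with a toy model: a steep (nonlinear) Hill-type dose-response whose threshold varies from cell to cell. Cells whose threshold falls below the dose respond almost fully, cells above it hardly at all, so the population response splits into two modes. The Hill exponent and threshold spread below are invented for illustration.

```python
import random

def hill_response(dose, k, h=8.0):
    """Steep Hill-type dose-response with threshold k: near-switch behaviour."""
    return dose**h / (k**h + dose**h)

rng = random.Random(0)
# Cell-to-cell variability: each cell draws its own threshold k (lognormal,
# median 1.0); the population is probed at a dose near the median threshold.
responses = [hill_response(1.0, rng.lognormvariate(0.0, 0.5)) for _ in range(2000)]
low = sum(r < 0.2 for r in responses) / len(responses)    # "off" subpopulation
high = sum(r > 0.8 for r in responses) / len(responses)   # "on" subpopulation
```

With a shallow curve (small `h`) the same threshold variability gives a graded, unimodal spread instead; the steepness is what converts heterogeneity into bimodality.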
Abstract:
This research is focused on the optimisation of resource utilisation in wireless mobile networks with consideration of the users' experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated. These include video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment have been used in the validation tests. It has been shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and this correlation property is content independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness.
The 3GPP Long Term Evolution (LTE) system is used as the main application environment where the proposed research framework is examined and the results are compared with existing scheduling methods on the achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritization of users by considering their perceived quality for the services received. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied to act as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions and the shape of the QoE distribution amongst the users for different scheduling policies have been demonstrated in the context of LTE. Finally, the work on interworking between the mobile communication system at the macro-cell level and the different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism of the user's data (e.g. video traffic) while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fair-efficient spectrum. The associated offloading mechanism can properly control the number of users within the coverage areas of the macro-cell base station and each of the WiFi access points involved. The performance of the non-seamless and user-controlled mobile traffic offloading (through the mobile WiFi devices) has been evaluated and compared with that of the standard operator-controlled WiFi hotspots.
Abstract:
The inverse controller is traditionally assumed to be a deterministic function. This paper presents a pedagogical methodology for estimating the stochastic model of the inverse controller. The proposed method is based on Bayes' theorem. Using Bayes' rule to obtain the stochastic model of the inverse controller allows the use of knowledge of uncertainty from both the inverse and the forward model in estimating the optimal control signal. The paper presents the methodology for general nonlinear systems and is demonstrated on nonlinear single-input-single-output (SISO) and multiple-input-multiple-output (MIMO) examples. © 2006 IEEE.
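A minimal sketch of the Bayes'-rule idea on a toy nonlinear SISO plant: instead of a deterministic inverse, the posterior over the control signal is formed from a Gaussian likelihood (forward-model uncertainty) and a prior over controls, and the control estimate is the posterior mean. The plant, grid, and noise level below are invented for illustration, not the paper's examples.

```python
import math

def bayes_inverse_control(forward, y_target, u_grid, prior, sigma):
    """Posterior over the control signal via Bayes' rule:
    p(u | y*) ∝ N(y*; forward(u), sigma^2) * p(u), evaluated on a grid."""
    weights = [math.exp(-(y_target - forward(u))**2 / (2 * sigma**2)) * p
               for u, p in zip(u_grid, prior)]
    z = sum(weights)
    posterior = [w / z for w in weights]
    u_mean = sum(u * p for u, p in zip(u_grid, posterior))
    return u_mean, posterior

# Toy nonlinear plant y = u^2, target y* = 4, flat prior on u in [0, 3]:
# the posterior concentrates near u = 2, the control that achieves the target.
grid = [i / 100 for i in range(301)]
prior = [1.0 / len(grid)] * len(grid)
u_star, post = bayes_inverse_control(lambda u: u * u, 4.0, grid, prior, 0.1)
```

The benefit over a deterministic inverse is visible in `post` itself: its spread reflects how forward-model uncertainty translates into uncertainty about the optimal control, which a point inverse discards.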
A simulation analysis of spoke-terminals operating in LTL Hub-and-Spoke freight distribution systems
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT. The research presented in this thesis is concerned with Discrete-Event Simulation (DES) modelling as a method to facilitate logistical policy development within the UK Less-than-Truckload (LTL) freight distribution sector, which has been typified by "Pallet Networks" operating on a hub-and-spoke philosophy. Current literature relating to LTL hub-and-spoke and cross-dock freight distribution systems traditionally examines a variety of network and hub design configurations. Each is consistent with classical notions of creating process efficiency, improving productivity, reducing costs and generally creating economies of scale through bulk optimisation. Whilst there is a growing abundance of papers discussing both the network design and hub operational components mentioned above, there is a shortcoming in the overall analysis when it comes to discussing the "spoke-terminal" of hub-and-spoke freight distribution systems and its capability for handling the diverse and discrete customer profiles of freight that multi-user LTL hub-and-spoke networks typically handle over the "last mile" of the delivery, in particular, a mix of retail and non-retail customers. A simulation study is undertaken to investigate the impact on operational performance when the current combined spoke-terminal delivery tours are separated by profile type (i.e. retail or non-retail). The results indicate that a potential improvement in delivery performance can be made by separating retail and non-retail delivery runs at the spoke-terminal, and that dedicated retail and non-retail delivery tours could be adopted in order to better meet customer delivery requirements and adapt hub-deployed policies.
The study also leverages key operator experiences to highlight the main practical implementation challenges when integrating the observed simulation results into the real world. The study concludes that DES can be harnessed as an enabling device to develop a 'guide policy'. This policy needs to be flexible and should be applied in stages, taking into account the growing retail exposure.
Abstract:
With the advent of GPS-enabled smartphones, an increasing number of users are actively sharing their location through a variety of applications and services. Along with the continuing growth of Location-Based Social Networks (LBSNs), security experts have increasingly warned the public of the dangers of exposing sensitive information such as personal location data. Most importantly, in addition to the geographical coordinates of the user's location, LBSNs allow easy access to an additional set of characteristics of that location, such as the venue type or popularity. In this paper, we investigate the role of location semantics in the identification of LBSN users. We simulate a scenario in which the attacker's goal is to reveal the identity of a set of LBSN users by observing their check-in activity. We then propose to answer the following question: what are the types of venues that a malicious user has to monitor to maximize the probability of success? Conversely, when should a user decide whether to make his/her check-in to a location public or not? We perform our study on more than 1 million check-ins distributed over 17 urban regions of the United States. Our analysis shows that different types of venues display different discriminative power in terms of user identity, with most of the venues in the "Residence" category providing the highest re-identification success across the urban regions. Interestingly, we also find that users with a high entropy of their check-in distribution are not necessarily the hardest to identify, suggesting that it is the collective behaviour of the users' population that determines the complexity of the identification task, rather than the individual behaviour.
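The check-in entropy the paper discusses is the standard Shannon entropy of a user's check-in distribution; a quick sketch (venue categories invented for illustration) shows the two extremes it distinguishes, while the paper's finding is that high entropy alone does not guarantee anonymity.

```python
import math
from collections import Counter

def checkin_entropy(venue_categories):
    """Shannon entropy (bits) of a user's check-in distribution: 0 when all
    check-ins share one category, log2(k) when spread evenly over k."""
    counts = Counter(venue_categories)
    n = len(venue_categories)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

spread = checkin_entropy(["food", "work", "gym", "travel"])  # even over 4 -> 2 bits
focused = checkin_entropy(["home", "home", "home", "home"])  # single venue -> 0 bits
```

An attacker monitoring a discriminative category (e.g. "Residence") can re-identify even a high-entropy user, because identifiability depends on how distinctive a check-in is relative to the whole population, not on the user's own spread.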
Abstract:
This issue of Philosophical Transactions of the Royal Society, Part A represents a summary of the recent discussion meeting 'Communication networks beyond the capacity crunch'. The purpose of the meeting was to establish the nature of the capacity crunch, estimate the time scales associated with it and to begin to find solutions to enable continued growth in a post-crunch era. The meeting confirmed that, in addition to a capacity shortage within a single optical fibre, many other 'crunches' are foreseen in the field of communications, both societal and technical. Technical crunches identified included the nonlinear Shannon limit, wireless spectrum, distribution of 5G signals (front haul and back haul), while societal influences included net neutrality, creative content generation and distribution and latency, and finally energy and cost. The meeting concluded with the observation that these many crunches are genuine and may influence our future use of technology, but encouragingly noted that research and business practice are already moving to alleviate many of the negative consequences.
Abstract:
It is important to help researchers find valuable papers from a large literature collection. To this end, many graph-based ranking algorithms have been proposed. However, most of these algorithms suffer from the problem of ranking bias. Ranking bias hurts the usefulness of a ranking algorithm because it returns a ranking list with an undesirable time distribution. This paper is a focused study on how to alleviate ranking bias by leveraging the heterogeneous network structure of the literature collection. We propose a new graph-based ranking algorithm, MutualRank, that integrates mutual reinforcement relationships among networks of papers, researchers, and venues to achieve a more synthetic, accurate, and less-biased ranking than previous methods. MutualRank provides a unified model that involves both intra- and inter-network information for ranking papers, researchers, and venues simultaneously. We use the ACL Anthology Network as the benchmark data set and construct the gold standard from computational linguistics course websites of well-known universities and two well-known textbooks. The experimental results show that MutualRank greatly outperforms the state-of-the-art competitors, including PageRank, HITS, CoRank, FutureRank, and P-Rank, in ranking papers, both in improving ranking effectiveness and in alleviating ranking bias. Rankings of researchers and venues by MutualRank are also quite reasonable.
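The mutual reinforcement idea can be sketched on a two-layer toy network (papers and venues only; the real MutualRank also includes researchers and a different propagation scheme, so the update rule below is an invented simplification): a paper's score draws on the papers citing it and on its venue's score, and a venue's score is the average of its papers' scores.

```python
def mutual_rank(in_links, venue_of, venues, iters=50, d=0.5):
    """Toy mutual-reinforcement ranking. in_links[i] = papers citing paper i;
    venue_of[i] = venue of paper i; venues maps venue name -> member papers.
    Iterates: paper score = (1-d)*citation inflow + d*venue score;
    venue score = mean of its papers' scores; both renormalised each round."""
    n = len(venue_of)
    p = [1.0 / n] * n
    v = {name: 1.0 / len(venues) for name in venues}
    for _ in range(iters):
        raw = [(1 - d) * sum(p[j] for j in in_links.get(i, []))
               + d * v[venue_of[i]] for i in range(n)]
        s = sum(raw)
        p = [x / s for x in raw]
        raw_v = {name: sum(p[i] for i in members) / len(members)
                 for name, members in venues.items()}
        sv = sum(raw_v.values())
        v = {name: x / sv for name, x in raw_v.items()}
    return p, v

# Paper 0 is cited by papers 1-3; venue "A" hosts papers 0-1, "B" hosts 2-3.
p, v = mutual_rank({0: [1, 2, 3]}, ["A", "A", "B", "B"],
                   {"A": [0, 1], "B": [2, 3]})
```

Even in this tiny example the coupling is visible: the well-cited paper lifts its venue's score, which in turn lifts the venue's other papers, the cross-network flow that a single-network ranker like PageRank cannot express.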