938 results for electricity distribution network
Abstract:
This paper presents an effective decision-making system for leak detection based on multiple generalized linear models and clustering techniques. The training data for the proposed decision system are obtained from an experimental pipeline set up as a fully operational distribution system. The system is also equipped with data logging for three variables, namely inlet pressure, outlet pressure, and outlet flow. The experimental setup is designed so that multiple operational conditions of the distribution system, including multiple pressure and flow levels, can be obtained. We then show statistically that the pressure and flow variables can be used as signatures of a leak under the designed multi-operational conditions. It is then shown that leak detection based on training and testing the proposed multi-model decision system with prior data clustering, under multi-operational conditions, produces better recognition rates than training based on a single-model approach. This decision system is then equipped with the estimation of confidence limits, and a method is proposed for using these confidence limits to obtain more robust leakage recognition results.
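As a rough illustration of the multi-model idea described in this abstract, the sketch below clusters the operating conditions and fits one generalized linear model (logistic regression) per cluster; it assumes scikit-learn and uses illustrative variable names, not the authors' implementation, and assumes every cluster contains both leak and no-leak samples.

```python
# Sketch: cluster operating conditions (inlet pressure, outlet pressure, outlet flow),
# then fit one GLM per cluster; a new sample is scored by the model of its own cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def fit_multi_model(X, y, n_clusters=4):
    # X: (n_samples, 3) numpy array of operating variables; y: 0 = no leak, 1 = leak
    clusterer = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    models = {}
    for c in range(n_clusters):
        mask = clusterer.labels_ == c
        models[c] = LogisticRegression().fit(X[mask], y[mask])  # assumes both classes present
    return clusterer, models

def predict_leak(clusterer, models, X_new):
    clusters = clusterer.predict(X_new)
    return np.array([models[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(clusters, X_new)])
```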
Abstract:
An experimental comparison of the information features used by a neural network is performed using a probing method: a suboptimal classifier matched to a Gaussian model of the training data serves as the probe. Neural networks with perceptron and one-hidden-layer feedforward architectures were used. The experiments were carried out on spatial ultrasonic data used to train the neural controller of a car passenger safety system. We show that a neural network does not fully exploit the Gaussian components, i.e. the first two moments of the probability distribution. On the contrary, the network can find more complicated regularities inside the data vectors and thus performs better than the suboptimal classifier. Connecting the suboptimal classifier in parallel improves the performance of a modular neural network, whereas connecting it to the network input strengthens the specialization effect during training.
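A minimal sketch of the kind of comparison this abstract describes: a classifier built on a Gaussian model of the training data (here quadratic discriminant analysis) against a one-hidden-layer feedforward net. The synthetic dataset and hyper-parameters are placeholders, not the paper's ultrasonic data.

```python
# Compare a Gaussian-model probe classifier with a one-hidden-layer feedforward net
# on a synthetic dataset (illustrative only).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gaussian_probe = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X_tr, y_tr)

print("Gaussian-model probe accuracy:", gaussian_probe.score(X_te, y_te))
print("One-hidden-layer net accuracy:", mlp.score(X_te, y_te))
```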
A simulation analysis of spoke-terminals operating in LTL Hub-and-Spoke freight distribution systems
Abstract:
Due to copyright restrictions, this thesis is only available for consultation at Aston University Library and Information Services by prior arrangement. The research presented in this thesis is concerned with Discrete-Event Simulation (DES) modelling as a method to facilitate logistical policy development within the UK Less-than-Truckload (LTL) freight distribution sector, which has been typified by "Pallet Networks" operating on a hub-and-spoke philosophy. Current literature relating to LTL hub-and-spoke and cross-dock freight distribution systems traditionally examines a variety of network and hub design configurations. Each is consistent with classical notions of creating process efficiency, improving productivity, reducing costs and generally creating economies of scale through bulk optimisation. Whilst there is a growing abundance of papers discussing both the network design and hub operational components mentioned above, the overall analysis falls short when it comes to the "spoke-terminal" of hub-and-spoke freight distribution systems and its capability for handling the diverse and discrete customer freight profiles that multi-user LTL hub-and-spoke networks typically handle over the "last mile" of the delivery, in particular a mix of retail and non-retail customers. A simulation study is undertaken to investigate the impact on operational performance when the current combined spoke-terminal delivery tours are separated by profile type (i.e. retail or non-retail). The results indicate that a potential improvement in delivery performance can be made by separating retail and non-retail delivery runs at the spoke-terminal, and that dedicated retail and non-retail delivery tours could be adopted in order to better meet customer delivery requirements and adapt hub-deployed policies. The study also leverages key operator experiences to highlight the main practical challenges of implementing the observed simulation results in the real world. The study concludes that DES can be harnessed as an enabling device to develop a 'guide policy'. This policy needs to be flexible and should be applied in stages, taking into account growing retail exposure.
Abstract:
A framework that aims to best utilize mobile network resources for video applications is presented in this paper. The main contribution of the proposed work is a QoE-driven optimization method that can maintain a desired trade-off between fairness and efficiency when allocating resources, in terms of data rates, to video streaming users in LTE networks. The method controls the user satisfaction level from the point of view of service continuity and applies appropriate QoE metrics (Pause Intensity and its variations) to determine the scheduling strategies, in combination with the mechanisms used for adaptive video streaming such as 3GP/MPEG-DASH. The superiority of the proposed algorithms is demonstrated, showing how the resources of a mobile network can be optimally utilized using quantifiable QoE measurements. This approach can also find the best match between demand and supply in the process of network resource distribution.
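To make the fairness-versus-efficiency trade-off concrete, here is an illustrative sketch (not the paper's scheduler): a single parameter blends an equal split of the cell rate (fair) with a split proportional to each user's spectral efficiency (efficient).

```python
# Illustrative fairness/efficiency trade-off in rate allocation.
# alpha = 1.0 -> equal split among users; alpha = 0.0 -> proportional to efficiency.
def allocate_rates(total_rate, efficiencies, alpha):
    n = len(efficiencies)
    total_eff = sum(efficiencies)
    return [alpha * total_rate / n + (1 - alpha) * total_rate * e / total_eff
            for e in efficiencies]

# Three users with increasingly good channels, 10 Mbit/s cell budget, balanced policy.
print(allocate_rates(10.0, [1.0, 2.0, 5.0], alpha=0.5))
```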
Abstract:
This paper investigates the impact that electric vehicle uptake will have on the national electricity demand of Great Britain. Data from the National Travel Survey and the Coventry and Birmingham Low Emissions Demonstration (CABLED) are used to model an electrical demand profile in a future scenario of significant electric vehicle market penetration. These two sources allow comparison of how conventional cars are currently used, and the electrical demand that would result from a simple substitution of energy source, with data showing how electric vehicles are actually being used at present. The paper finds that electric vehicles are unlikely to significantly impact electricity demand in GB. The paper also aims to determine whether electric vehicles have the potential to provide ancillary services to the grid operator and, if so, the capacity for such services that would be available. Demand-side management, frequency response and Short Term Operating Reserve (STOR) are the services considered. The paper finds that electric cars are unlikely to provide enough movable demand for peak shedding to be worthwhile. However, it is found that controlling vehicle charging would provide sufficient power control to act viably as frequency response for dispatch by the transmission system operator. This paper concludes that electric vehicles have the technical potential to aid management of the transmission network without adding a significant demand burden. © 2013 IEEE.
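A minimal sketch of how an aggregate charging demand profile might be built from trip data, assuming charging starts on arrival at a fixed charger power; the data layout and parameters are placeholders, not the National Travel Survey or CABLED processing used in the paper.

```python
# Build a half-hourly aggregate EV charging profile from arrival times and energy needs.
import numpy as np

def demand_profile(arrival_hours, energy_kwh, charger_kw=3.0):
    """Return a 48-slot half-hourly demand profile (kW), charging from arrival until full."""
    profile = np.zeros(48)
    for arrive, energy in zip(arrival_hours, energy_kwh):
        slots_needed = int(np.ceil(energy / charger_kw * 2))  # half-hour slots to recharge
        start = int(arrive * 2)
        for s in range(start, start + slots_needed):
            profile[s % 48] += charger_kw                     # wrap past midnight
    return profile

profile = demand_profile(arrival_hours=[18.0, 18.5, 22.0], energy_kwh=[6.0, 3.0, 9.0])
print(profile.max(), "kW evening peak for this illustrative fleet")
```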
Abstract:
This paper describes the potential of pre-setting 11 kV overhead line ratings over a time period long enough to be useful for the real-time management of overhead lines. The forecast is based on freely available short- and long-term weather forecasts and is used to help investigate the potential for realising dynamic rating benefits on the electricity network. A comparison of the realisable rating benefits using this forecast data has been undertaken over the period of a year.
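As background to why weather forecasts matter for line ratings, the sketch below shows a heavily simplified weather-dependent ampacity estimate from a steady-state heat balance (convective and radiative cooling minus solar heating, divided by conductor resistance). It is not the method used in the paper and not a standards-compliant IEEE 738 calculation; every coefficient is an illustrative assumption.

```python
# Crude dynamic line rating sketch: higher wind and lower ambient temperature
# allow more current before the conductor reaches its temperature limit.
import math

def dynamic_rating_amps(wind_ms, ambient_c, solar_wm2,
                        conductor_temp_c=75.0, diameter_m=0.02,
                        resistance_ohm_per_m=8e-5, emissivity=0.5, absorptivity=0.5):
    dT = conductor_temp_c - ambient_c
    q_conv = 10.0 * math.sqrt(max(wind_ms, 0.1)) * diameter_m * dT      # W/m, crude convection term
    sigma = 5.67e-8                                                      # Stefan-Boltzmann constant
    q_rad = emissivity * sigma * math.pi * diameter_m * (
        (conductor_temp_c + 273.15) ** 4 - (ambient_c + 273.15) ** 4)    # W/m, radiative cooling
    q_sun = absorptivity * solar_wm2 * diameter_m                        # W/m, solar heating
    return math.sqrt(max(q_conv + q_rad - q_sun, 0.0) / resistance_ohm_per_m)

print(round(dynamic_rating_amps(wind_ms=5.0, ambient_c=10.0, solar_wm2=200.0)), "A (illustrative)")
```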
Abstract:
With the widespread use of the Internet and electronic services, the need for rapid development of electronic communications networks has prompted government policy makers to draw up and implement development policy. The advancement of the (information) society and the use of the information and communication services underpinning it depend fundamentally on the development of broadband infrastructure and on access to the electronic communications network. The willingness of the state to intervene in the electronic communications field has increased significantly since 2011. The establishment of MVM NET Zrt. (Hungarian Electricity NET Ltd.), the reorganization of NISZ Zrt. (National Infocommunication Services Company Limited by Shares), the GOP 3.1.2 tender and the plan to create a fourth mobile network operator all indicate the government's strong intention to develop this field. The study presents the intervention tools available to the state for stimulating the development of the electronic communications network. The author then analyses the four major state interventions to examine whether the decisions on state involvement were adequately well-founded.
Abstract:
In recent years, debate has intensified within the European Union about how, and to what extent, electricity available at prices favourable to industrial and household consumers can contribute to improving the Community's competitiveness. While the European institutions see further liberalization of the energy retail sector as the main tool for improving competitiveness, others argue for more active central regulation and price control to achieve lower energy prices. This paper reviews the regulatory models of the European countries and examines the connection between the regulatory regime and consumer price trends, helping to answer whether a regulatory mechanism based on centrally set prices or the liberalized market model is more successful in supporting competitiveness goals. Although current regulatory practice is heterogeneous across the EU member states, there is a clear trend towards decreasing the role of regulated tariffs in end-user prices. Our study did not find a general causal relationship between the regulatory regime and the level of consumer electricity prices in a given country. However, the quantitative analysis of industrial and household energy prices by segment detected significant differences between the regulated and free-market countries. The first group of member states tends to decrease prices in the low-consumption household segments through cross-financing techniques, including increased network tariffs and/or taxes for the high-consumption segments and for industrial consumers. One of the major challenges for regulatory authorities is to find a way of sharing these burdens proportionally while minimizing the market-distorting effects of cross-subsidization between the different stakeholder groups.
Abstract:
Recent advances in electronic and computer technologies have led to the widespread deployment of wireless sensor networks (WSNs). WSNs have a wide range of applications, including military sensing and tracking, environment monitoring and smart environments. Many WSNs have mission-critical tasks, such as military applications, so security issues in WSNs remain at the forefront of research. Compared with other wireless networks, such as ad hoc and cellular networks, security in WSNs is more complicated due to the constrained capabilities of sensor nodes and the properties of the deployment, such as large scale and hostile environments. Security issues mainly come from attacks. In general, attacks in WSNs can be classified as external or internal. In an external attack, the attacking node is not an authorized participant of the sensor network; cryptography and other security methods can prevent some external attacks. However, node compromise, the major and unique problem that leads to internal attacks, will undermine all these prevention efforts. Knowing the probability of node compromise helps systems detect and defend against it. Although some approaches can be used to detect and defend against node compromise, few of them can estimate its probability. Hence, we develop basic uniform, basic gradient, intelligent uniform and intelligent gradient models of node compromise distribution, using probability theory, in order to adapt to different application environments. These models allow systems to estimate the probability of node compromise. Applying them in system security designs can improve security and decrease overheads in nearly every security area. Moreover, based on these models, we design a novel secure routing algorithm to defend against the routing security issue posed by nodes that have already been compromised but have not yet been detected by the node-compromise detection mechanism. The routing paths in our algorithm detour around nodes that have been detected as compromised or have a high probability of being compromised. Simulation results show that our algorithm is effective at protecting routing paths from node compromise, whether detected or not.
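A minimal sketch of the detour idea this abstract describes (not the authors' algorithm): shortest-path routing that excludes nodes already detected as compromised and penalizes links towards nodes with a high estimated compromise probability. The graph layout, penalty weight and probability source are illustrative assumptions.

```python
# Dijkstra-style routing that avoids detected-compromised nodes and detours
# around nodes with high estimated compromise probability.
import heapq

def secure_route(graph, p_compromise, detected, src, dst, penalty=10.0):
    """graph: {node: {neighbor: link_cost}}; p_compromise: {node: probability in [0,1]}."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                            # reconstruct path on arrival
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path))
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in graph[u].items():
            if v in detected:                   # never route through detected nodes
                continue
            nd = d + cost + penalty * p_compromise.get(v, 0.0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return None                                 # no safe path found
```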
Abstract:
Current technology permits connecting local networks via high-bandwidth telephone lines. Central coordinator nodes may use Intelligent Networks to manage data flow over dialed data lines, e.g. ISDN, and to establish connections between LANs. This dissertation focuses on cost minimization and on establishing operational policies for query distribution over heterogeneous, geographically distributed databases. Based on our study of query distribution strategies, public network tariff policies and database interface standards, we propose methods for communication cost estimation, strategies for reducing bandwidth allocation, and guidelines for central-to-node communication protocols. Our conclusion is that dialed data lines offer a cost-effective alternative for implementing distributed database query systems, and that existing commercial software may be adapted to support query processing in heterogeneous distributed database systems.
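A toy example of the kind of communication-cost estimate this abstract refers to, assuming a dialed line billed per whole minute on top of a call setup fee; the tariff figures and line rate are placeholders, not actual ISDN prices.

```python
# Estimate the cost of shipping a query result over a dialed data line.
import math

def dialed_line_cost(result_bytes, line_kbps=64.0, setup_fee=0.10, tariff_per_min=0.05):
    seconds = (result_bytes * 8) / (line_kbps * 1000)   # transfer time at the line rate
    return setup_fee + math.ceil(seconds / 60) * tariff_per_min

# A 5 MB result set over a 64 kbit/s channel takes ~11 billed minutes.
print(dialed_line_cost(5_000_000))
```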
Abstract:
An important issue in resource distribution is the fairness of the distribution. For example, computer network management wishes to distribute network resources fairly among its users. To describe the fairness of resource distribution, a quantitative fairness score function was proposed in 1984 by Jain et al. The purpose of this paper is to propose a modified network-sharing fairness function so that users can be treated differently according to their priority levels. Its mathematical properties are discussed. The proposed fairness score function retains all the desirable properties of the original and performs better when network users have different priority levels.
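For reference, Jain et al.'s index scores an allocation x as (Σx_i)² / (n Σx_i²), equal to 1 for a perfectly even split and 1/n when one user takes everything. The sketch below shows the classic index plus one simple way to account for priorities (normalizing each allocation by its priority weight before scoring); the weighted variant is illustrative and is not the paper's exact modified function.

```python
# Jain's fairness index and a simple priority-weighted variant.
def jain_fairness(x):
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

def weighted_fairness(x, weights):
    # Score the allocation relative to each user's priority weight.
    return jain_fairness([xi / wi for xi, wi in zip(x, weights)])

print(jain_fairness([1, 1, 1, 1]))                 # 1.0: perfectly fair
print(jain_fairness([4, 0, 0, 0]))                 # 0.25: one user takes everything
print(weighted_fairness([2, 1], weights=[2, 1]))   # 1.0: shares match priorities
```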
Abstract:
This research analyzed the spatial relationship between a mega-scale fracture network and the occurrence of vegetation in an arid region. High-resolution aerial photographs of Arches National Park, Utah, were used for digital image processing. Four sets of large-scale joints were digitized from the rectified color photograph in order to characterize the geospatial properties of the fracture network with the aid of a Geographic Information System. An unsupervised land-cover classification was carried out to identify the spatial distribution of vegetation on the fractured outcrop. The results confirm that the WNW-ESE alignment of vegetation is dominantly controlled by the spatial distribution of the systematic joint set, which in turn parallels the regional fold axis. This research provides insight into the spatial heterogeneity inherent to fracture networks, as well as the effects of jointing on the distribution of surface vegetation in desert environments.
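A minimal sketch of an unsupervised land-cover classification of the kind mentioned above, clustering pixel colours with k-means; the synthetic image and class count are placeholders for the park's rectified aerial photograph.

```python
# Unsupervised land-cover classification by clustering pixel RGB values.
import numpy as np
from sklearn.cluster import KMeans

def classify_landcover(rgb_image, n_classes=4):
    """rgb_image: H x W x 3 array; returns an H x W map of cluster labels."""
    h, w, _ = rgb_image.shape
    pixels = rgb_image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(h, w)

demo = classify_landcover(np.random.randint(0, 255, size=(100, 100, 3)))
print(np.bincount(demo.ravel()))   # pixel count per land-cover class
```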
Abstract:
In the framework of the global energy balance, the radiative energy exchanges between Sun, Earth and space are now accurately quantified from new satellite missions. Much less is known about the magnitude of the energy flows within the climate system and at the Earth's surface, which cannot be directly measured by satellites. In addition to satellite observations, here we make extensive use of the growing number of surface observations to constrain the global energy balance not only from space but also from the surface. We combine these observations with the latest modeling efforts performed for the 5th IPCC assessment report to infer best estimates for the global mean surface radiative components. Our analyses favor global mean downward surface solar and thermal radiation values near 185 and 342 Wm**-2, respectively, which are most compatible with surface observations. Combined with estimated surface absorbed solar radiation and thermal emission of 161 Wm**-2 and 397 Wm**-2, respectively, this leaves 106 Wm**-2 of surface net radiation available for distribution amongst the non-radiative surface energy balance components. The climate models overestimate the downward solar and underestimate the downward thermal radiation, nevertheless simulating an adequate global mean surface net radiation through error compensation. This also suggests that, globally, the simulated surface sensible and latent heat fluxes, around 20 and 85 Wm**-2 on average, represent realistic values. The findings of this study are compiled into a new global energy balance diagram, which may be able to reconcile currently disputed inconsistencies between energy and water cycle estimates.
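A quick arithmetic check of the figures quoted in this abstract: the net radiation follows from absorbed solar plus downward thermal minus thermal emission, and removing the sensible and latent heat fluxes leaves only a small residual.

```python
# Consistency check of the surface energy balance numbers above (all in W m^-2).
absorbed_solar   = 161
downward_thermal = 342
thermal_emission = 397
net_radiation = absorbed_solar + downward_thermal - thermal_emission
print(net_radiation)            # 106, matching the quoted surface net radiation
print(net_radiation - 20 - 85)  # ~1 left after sensible (20) and latent (85) heat fluxes
```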
Abstract:
An important problem faced by the oil industry is distributing multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes) and terminals (demand nodes), interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers treat this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product shipments in pipeline networks. However, the costs incurred due to losses at interfaces cannot be disregarded. The cost also depends on pumping expenses, which are mostly due to the cost of electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces, and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. The hybridizations are mainly focused on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA2 and SPEA2. Three architectures, named MOTA/D, NSTA and SPETA, are applied to the problem. An experimental study compares the algorithms on thirty test cases. To analyse the results obtained with the algorithms, Pareto-compliant quality indicators are used, and the significance of the results is evaluated with non-parametric statistical tests.
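A minimal sketch of the Pareto-dominance comparison that underlies this kind of multi-objective study, assuming all three objectives (delivery time, interface losses, electricity cost) are minimized; the solution tuples are illustrative, not the paper's results.

```python
# Pareto dominance and non-dominated filtering over three minimized objectives.
def dominates(a, b):
    """a, b: (delivery_time, interface_loss, electricity_cost) tuples."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# The third candidate is dominated by the first, so the front keeps the other two.
print(pareto_front([(10, 5.0, 3.0), (12, 4.0, 3.0), (11, 6.0, 4.0)]))
```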