947 results for Electric networks - Planning
Abstract:
The development of an Artificial Neural Network model of UK domestic appliance energy consumption is presented. The model uses diary-style appliance use data and a survey questionnaire collected from 51 households during the summer of 2010. It also incorporates measured energy data and is sensitive to socioeconomic, physical dwelling and temperature variables. A prototype model is constructed in MATLAB using a two-layer feed-forward network with backpropagation training and a 12:10:24 architecture. Model outputs include appliance load profiles which can be applied to the fields of energy planning (micro renewables and smart grids), building simulation tools and energy policy.
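The 12:10:24 feed-forward topology described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' MATLAB model: the choice of sigmoid activations, the weight initialisation, and the interpretation of the 12 inputs and 24 outputs (e.g. an hourly load profile) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weight matrices for the 12 -> 10 -> 24 topology (12 inputs, 10 hidden
# units, 24 outputs), randomly initialised for illustration.
W1 = rng.normal(scale=0.1, size=(12, 10))
b1 = np.zeros(10)
W2 = rng.normal(scale=0.1, size=(10, 24))
b2 = np.zeros(24)

def forward(x):
    """One forward pass: 12 input features -> 24-value output profile."""
    h = sigmoid(x @ W1 + b1)      # hidden layer
    return sigmoid(h @ W2 + b2)   # output layer

x = rng.random(12)                # one household's feature vector (made up)
profile = forward(x)
print(profile.shape)              # (24,)
```

Backpropagation training, as named in the abstract, would adjust W1, b1, W2, b2 by gradient descent on the error between `forward(x)` and measured profiles.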
Abstract:
This paper assesses the way in which an actor network presiding over the management of the River Wye has stabilized through accepting a particular view on the issue of navigation. The paper provides an account of how the network was challenged by a dissonant actor who, through reviving an old company, developed a counter network. It is argued that network stabilization is a form of consensus-building and it is contended that the way in which an issue is defined is crucial in terms of the successful enrolment of actors. The paper illustrates some of the conflicts and complexities encountered in resource planning, suggesting that research of this nature should trace actors back through time as well as through space if dynamics between actors involved in rural planning and management are to be effectively understood.
Abstract:
In the last two decades substantial advances have been made in the understanding of the scientific basis of urban climates. These are reviewed here with attention to sustainability of cities, applications that use climate information, and scientific understanding in relation to measurements and modelling. Consideration is given from street (micro) scale to neighbourhood (local) to city and region (meso) scale. Those areas where improvements are needed in the next decade to ensure more sustainable cities are identified. High-priority recommendations are made in the following six strategic areas: observations, data, understanding, modelling, tools and education. These include the need for more operational urban measurement stations and networks; for an international data archive to aid translation of research findings into design tools, along with guidelines for different climate zones and land uses; to develop methods to analyse atmospheric data measured above complex urban surfaces; to improve short-range, high-resolution numerical prediction of weather, air quality and chemical dispersion through improved modelling of the biogeophysical features of the urban land surface; to improve education about urban meteorology; and to encourage communication across scientific disciplines at a range of spatial and temporal scales.
Abstract:
Karen Aplin and Giles Harrison examine international records of the 1859 Carrington flare and consider what they mean for our understanding of space weather today. Space weather is increasingly recognized as a hazard to modern societies, and one way to assess the extent of its possible impact is through analysis of historic space weather events. One such event was the massive solar storm of late August and early September 1859. This is now widely known as the “Carrington flare” or “Carrington event” after the visual solar emissions on 1 September first reported by the Victorian astronomer Richard Carrington from his observatory in Redhill, Surrey (Carrington 1859). The related aurorae and subsequent effects on telegraph networks are well documented (e.g. Clark 2007, Boteler 2006), but use of modern techniques, such as analysis of nitrates produced by solar protons in ice cores to retrospectively assess the nature of the solar flare, has proved problematic (Wolff et al. 2012). This means that there is still very little quantitative information about the flare beyond magnetic observations (e.g. Viljanen et al. 2014).
Abstract:
This article examines the network relationships of a set of large retail multinational enterprises (MNEs). We analyze under what conditions a flagship-network strategy (characterized by a network of five partners – the MNE, key suppliers, key partners, selected competitors and key organisations in the non-business infrastructure) explains the internationalisation of three retailers whose geographic scope, sectoral conditions and competitive strategies differ substantially. We explore why and when retailers will adopt a flagship strategy. The three firms are two U.K.-based multinational retailers (Tesco and The Body Shop) and a French-based global retailer (Moët Hennessy Louis Vuitton). We find evidence of strong network relationships for all three retailers, although each embraces network strategies for different reasons. Their flagship relationships depend on each retailer's strategic use of firm-specific-advantages (FSAs) and country-specific advantages (CSAs). We find that a flagship strategy can succeed in overcoming internal and/or environmental constraints to cross-border resource transfers, which are barriers to foreign direct investment (FDI). We provide recommendations on why and when to use a flagship-based strategy and which type of network partners to prioritize in order to succeed internationally.
Abstract:
The Chiado fire, which affected the city centre of Lisbon (Portugal), occurred on 25 August 1988 and had a significant human and environmental impact. This fire was considered the most significant hazard to have occurred in Lisbon city centre since the major earthquake of 1755. A clear signature of this fire is found in the atmospheric electric field data recorded at Portela meteorological station, about 8 km NE of the site where the fire started at Chiado. Measurements were made using a Benndorf electrograph with a probe at 1 m height. The atmospheric electric field reached 510 V/m when the wind direction was from SW to NE, favourable to the transport of the smoke plume from Chiado to Portela. These observations agree with predictions from Hysplit air mass trajectory modelling and have been used to estimate the smoke concentration at ~0.4 mg/m³. It is demonstrated that atmospheric electric field measurements were therefore extremely sensitive to the Chiado fire. This result is of particular current interest for using networks of atmospheric electric field sensors to complement existing optical and meteorological observations for fire monitoring.
Abstract:
Network diagnosis in Wireless Sensor Networks (WSNs) is a difficult task due to their ad hoc nature, the invisibility of internal running status, and particularly because the network structure can change frequently due to link failures. To solve this problem, we propose a Mobile Sink (MS) based distributed fault diagnosis algorithm for WSNs. An MS, or mobile fault detector, is usually a mobile robot or vehicle equipped with a wireless transceiver that performs the task of a mobile base station while also diagnosing the hardware and software status of deployed network sensors. Our MS mobile fault detector moves through the network area polling each static sensor node to diagnose the hardware and software status of nearby sensor nodes using only single-hop communication. Therefore, the fault detection accuracy and functionality of the network are significantly increased. To maintain an excellent Quality of Service (QoS), we employ an optimal fault diagnosis tour planning algorithm. In addition to saving energy and time, the tour planning algorithm excludes faulty sensor nodes from the next diagnosis tour. We demonstrate the effectiveness of the proposed algorithms through simulation and real-life experimental results.
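The tour-planning idea in this abstract can be sketched with a simple greedy heuristic: the mobile sink visits the non-faulty nodes in nearest-neighbour order, and nodes diagnosed as faulty are excluded from the next tour. The coordinates, node names and the nearest-neighbour rule are illustrative assumptions, not the paper's optimal algorithm.

```python
import math

def plan_tour(base, nodes, faulty):
    """Greedy nearest-neighbour tour from `base` over non-faulty nodes.

    nodes: {node_id: (x, y)}; faulty: set of node_ids to skip.
    """
    remaining = {nid: pos for nid, pos in nodes.items() if nid not in faulty}
    tour, current = [], base
    while remaining:
        # visit the closest remaining node next
        nid = min(remaining, key=lambda n: math.dist(current, remaining[n]))
        tour.append(nid)
        current = remaining.pop(nid)
    return tour

nodes = {"A": (1, 0), "B": (2, 2), "C": (0, 3), "D": (5, 1)}
print(plan_tour((0, 0), nodes, faulty={"D"}))  # ['A', 'B', 'C']
```

A real deployment would replace the greedy rule with a proper tour optimiser (the abstract's "optimal fault diagnosis tour planning"), but the exclusion of faulty nodes works the same way.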
Abstract:
This thesis concerns the performance evaluation of peer-to-peer networks, where peer arrivals are modelled with three different distributions: Weibull, lognormal and Pareto. A network simulator was then used to evaluate the performance of these three distribution techniques. During the last decade the Internet has expanded into a world-wide network connecting millions of hosts and users and providing services for everyone. Many emerging applications are bandwidth-intensive in nature; the size of downloaded files, including music and videos, can be huge, from ten megabits to many gigabits. The efficient use of network resources is thus crucial for the survivability of the Internet. Traffic engineering (TE) covers a range of mechanisms for optimizing operational networks from the traffic perspective. The time scale in traffic engineering varies from short-term network control to network planning over a longer time period. In this thesis we used the peer distribution technique to model the peer arrival and service process with the three distributions, and calculated congestion parameters: the blocking time for each peer before entering the service process, the waiting time of a peer while another peer is being served, and the delay time for each peer. The average of each process was then calculated and graphs were plotted using Matlab to analyse the results.
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for reducing design complexity, but brings new challenges for testing the final circuit. Access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the many problems that must be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for exploring the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes an efficient, yet fine-grained, search possible in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes reusing the on-chip network for testing the cores embedded in the systems that use this communication platform.
A power-aware test scheduling algorithm that exploits the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, while area and pin overhead are strongly minimized. In this manuscript, the main problems in testing core-based systems are first identified and current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC'02 SoC Test Benchmarks and compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
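The core of power-aware test scheduling can be sketched with a greedy heuristic: run core tests in parallel whenever their combined power stays under the budget, longest tests first. The core names, test times and power figures are made-up, and the thesis' algorithm additionally models on-chip network routing, which this sketch omits; it assumes each test alone fits the power budget.

```python
def schedule(tests, power_budget):
    """tests: {core: (test_time, power_units)} -> total test time."""
    pending = sorted(tests.items(), key=lambda kv: -kv[1][0])  # longest first
    t, running = 0.0, []                     # running: list of (finish, power)
    while pending or running:
        used = sum(p for _, p in running)
        for item in list(pending):           # start every test that fits
            _, (dur, p) = item
            if used + p <= power_budget:
                running.append((t + dur, p))
                pending.remove(item)
                used += p
        t = min(f for f, _ in running)       # advance to the next completion
        running = [(f, p) for f, p in running if f > t]
    return t

tests = {"cpu": (10, 5), "dsp": (8, 4), "mem": (6, 3), "io": (4, 2)}
print(schedule(tests, power_budget=9))       # -> 14.0
```

With an unlimited power budget all four tests run in parallel and the total time is the longest single test (10); the budget of 9 forces some serialization, stretching the schedule to 14.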
Abstract:
This work presents the main applications of risk-management techniques to the study of supply-chain disruptions, motivated by the case of electric power supply, a subject of extreme relevance to Brazil. In this context, the calculation of the "deficit cost", the loss of production given a failure in electricity supply (a parameter used throughout Brazilian electricity-sector planning), was chosen as the relevant factor to be analysed. The main existing methodologies for estimating this parameter are presented and compared with the one currently in use in Brazil. Additionally, we present an implementation proposal for the alternative methodologies used abroad, which are based on the concept of VOLL ("Value of Lost Load"), a measure of the value of energy scarcity to firms or individual consumers, fundamental to the design of demand-management programmes.
Abstract:
The accurate identification of features of dynamical grounding systems is extremely important for defining the operational safety and proper functioning of electric power systems. Several experimental tests and theoretical investigations have been carried out to obtain the characteristics and parameters associated with grounding techniques. Grounding systems involve many non-linear parameters. This paper describes a novel approach for mapping the characteristics of dynamical grounding systems using artificial neural networks. The network acts as an identifier of structural features of the grounding processes, so that output parameters can be estimated and generalized from an input parameter set. The results obtained by the network are compared with other approaches also used to model grounding systems.
Abstract:
This work presents a methodology to analyze the transient stability (first oscillation) of electric energy systems using a neural network based on the ART architecture (adaptive resonance theory), named the fuzzy ART-ARTMAP neural network, for real-time applications. The security margin is used as the stability analysis criterion, considering three-phase short-circuit faults with a transmission line outage. The neural network operation consists of two fundamental phases: training and analysis. The training phase requires a great deal of processing, while the analysis phase is carried out with almost no computational effort. This is the principal reason to use neural networks for solving complex problems that need fast solutions, such as real-time applications. ART neural networks have plasticity and stability as their primary characteristics, which are essential qualities for training and for efficient analysis. The fuzzy ART-ARTMAP neural network is proposed seeking superior performance, in terms of precision and speed, when compared to the conventional ARTMAP, and even more so when compared to neural networks trained with the backpropagation algorithm, which is a benchmark in the neural network area. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
This work presents a procedure for transient stability analysis and preventive control of electric power systems, formulated using a multilayer feedforward neural network. The neural network is trained with the back-propagation algorithm combined with a fuzzy controller and adaptation of the inclination and translation parameters of the nonlinear activation function. These procedures provide faster convergence and more precise results than the traditional back-propagation algorithm. The training rate is adapted using information on the global error and the global error variation. After training, the neural network is capable of estimating the security margin and performing sensitivity analysis. With this information, it is possible to develop a method for security correction (preventive control) to levels considered appropriate for the system, based on generation reallocation and load shedding. An application to a multimachine power system is presented to illustrate the proposed methodology. (c) 2006 Elsevier B.V. All rights reserved.
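The training-rate adaptation described in this abstract can be sketched with a simple rule: grow the learning rate while the global error keeps falling and cut it back when the error rises. The update factors below are assumptions, not the paper's fuzzy-controller rules.

```python
def adapt_rate(rate, error, prev_error, up=1.05, down=0.7):
    """Grow the rate on improvement, shrink it when the error rises."""
    if error < prev_error:
        return rate * up      # error fell: speed up training
    return rate * down        # error rose: back off

rate, prev = 0.1, float("inf")
for error in [1.0, 0.8, 0.9, 0.6]:  # a made-up global-error trace
    rate = adapt_rate(rate, error, prev)
    prev = error
print(round(rate, 5))  # -> 0.08103
```

A fuzzy controller, as used in the paper, replaces the two fixed factors with rules that weigh both the error and its variation, but the feedback loop has the same shape.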
Abstract:
This work presents a neural network based on the ART architecture (adaptive resonance theory), named the fuzzy ART&ARTMAP neural network, applied to the electric load-forecasting problem. Neural networks based on the ART architecture have two fundamental characteristics that are extremely important for network performance (stability and plasticity), which allow the implementation of continuous training. The fuzzy ART&ARTMAP neural network aims to reduce the imprecision of the forecasting results through a mechanism that separates analog and binary data, processing them separately. This reduces processing time and improves the quality of the results when compared to the Back-Propagation neural network, and even more so when compared to classical forecasting techniques (ARIMA of Box and Jenkins). Once trained, the fuzzy ART&ARTMAP neural network is capable of forecasting electric loads 24 h in advance. To validate the methodology, data from a Brazilian electric company are used. (C) 2004 Elsevier B.V. All rights reserved.
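The ART mechanics underlying forecasters like this one can be sketched with a single textbook fuzzy ART step: a category is selected by the fuzzy choice function and accepted only if it passes the vigilance test; otherwise a new category is created. This is standard fuzzy ART, not the paper's full fuzzy ART&ARTMAP model, and all parameter values (vigilance rho, choice alpha, learning rate beta) are assumptions.

```python
import numpy as np

def fuzzy_art_step(x, weights, rho=0.75, alpha=0.001, beta=1.0):
    """Present one pattern in [0,1]^d; return the index of its category."""
    x = np.concatenate([x, 1.0 - x])              # complement coding
    scores = [np.minimum(x, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(scores)[::-1]:            # try best choice first
        match = np.minimum(x, weights[j]).sum() / x.sum()
        if match >= rho:                          # vigilance test passed
            weights[j] = beta * np.minimum(x, weights[j]) + (1 - beta) * weights[j]
            return int(j)
    weights.append(x.copy())                      # no resonance: new category
    return len(weights) - 1

weights = []
print(fuzzy_art_step(np.array([0.2, 0.9]), weights))   # 0: first category created
print(fuzzy_art_step(np.array([0.21, 0.88]), weights)) # 0: resonates with it
```

The stability/plasticity property mentioned in the abstract shows up here directly: similar patterns refine an existing category (stability), while a sufficiently different pattern spawns a new one instead of overwriting old knowledge (plasticity).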