896 results for sensor grid database system
Abstract:
Although homes contain many electric devices, few systems exist to evaluate, monitor and control them. Users sometimes go out and leave devices turned on, which can waste energy and create dangerous situations. Most users would therefore like to know the usage state of their electrical appliances through their mobile devices in a pervasive way. In this paper, we propose an Intelligent Supervisory Control System to evaluate, monitor and control the use of electric devices at home from outside. Because the data transferred for evaluation, monitoring and control exposes the user's location and the state of the home (e.g., nobody at home), it may be open to attacks leading to dangerous situations. Our model therefore includes a location privacy module and an encryption module to protect the user's location and data. The Intelligent Supervisory Control System gives the user the ability to manage electricity loads by means of a multi-agent system involving evaluation, monitoring, control and energy resource agents.
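As a hedged illustration of the kind of encryption module described above, the sketch below encrypts a home-state/location payload with symmetric authenticated encryption before it is sent to the user's mobile device. The payload fields and the use of the Fernet recipe from the Python cryptography package are assumptions for illustration, not the authors' implementation.

# Minimal sketch (assumed design, not the paper's implementation):
# encrypt a home-state payload before sending it to the mobile device.
import json
from cryptography.fernet import Fernet  # symmetric authenticated encryption

key = Fernet.generate_key()        # in practice, shared securely between home gateway and phone
cipher = Fernet(key)

payload = {"home_state": "nobody_at_home", "device": "heater", "status": "on"}
token = cipher.encrypt(json.dumps(payload).encode())   # ciphertext sent over the network

# On the mobile device, the same key recovers the payload.
received = json.loads(cipher.decrypt(token).decode())
print(received["device"], received["status"])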
Abstract:
The large increase of distributed energy resources, including distributed generation, storage systems and demand response, especially in distribution networks, makes the management of the available resources a more complex and crucial process. With wind-based generation gaining relevance in the generation mix, and since wind forecasting accuracy drops rapidly as the forecast horizon increases, short-term and very short-term re-scheduling is required so that the final implemented solution achieves the lowest possible operation costs. This paper proposes a methodology for energy resource scheduling in smart grids that considers day-ahead, hour-ahead and five-minutes-ahead scheduling. The short-term scheduling, undertaken five minutes ahead, takes advantage of the high accuracy of very short-term wind forecasting, providing the user with more efficient scheduling solutions. The proposed method uses a Genetic Algorithm based optimization approach that is able to cope with the hard execution-time constraint of short-term scheduling. Realistic power system simulation, based on PSCAD, is used to validate the obtained solutions. The paper includes a case study with a 33-bus distribution network with high penetration of distributed energy resources implemented in PSCAD.
Abstract:
The large increase of Distributed Generation (DG) in Power Systems (PS), and especially in distribution networks, makes the management of distributed generation resources an increasingly important issue. Beyond DG, other resources such as storage systems and demand response must be managed in order to obtain a more efficient and "green" operation of PS. More players that operate these kinds of resources, such as aggregators or Virtual Power Players (VPP), will be appearing. This paper proposes a new methodology to solve the distribution network short-term scheduling problem in the Smart Grid context. The methodology is based on a Genetic Algorithm (GA) approach for energy resource scheduling optimization and on PSCAD software to obtain realistic power system simulation results. The paper includes a case study with 99 distributed generators, 208 loads and 27 storage units. The GA results for the economic dispatch, considering the generation forecast, storage management and load curtailment in each one-hour period, are compared with those obtained with a Mixed-Integer Non-Linear Programming (MINLP) approach.
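A minimal sketch of the kind of GA-based scheduling described above is given below: it evolves a vector of generator setpoints toward minimum cost while penalizing any supply/demand mismatch. The unit data, population size and penalty weight are illustrative assumptions, not the parameters or case study of the paper.

# Illustrative GA for a toy economic dispatch (assumed data, not the paper's case study).
import random

n_units = 5
p_max = [40.0, 30.0, 25.0, 20.0, 15.0]        # unit capacity limits (MW), assumed
cost = [0.05, 0.07, 0.06, 0.09, 0.08]          # linear cost coefficients, assumed
demand = 90.0                                   # load to be supplied in the period (MW)

def fitness(ind):
    # generation cost plus a heavy penalty on any supply/demand mismatch
    return sum(c * p for c, p in zip(cost, ind)) + 1e3 * abs(sum(ind) - demand)

def random_ind():
    return [random.uniform(0, pm) for pm in p_max]

pop = [random_ind() for _ in range(60)]
for generation in range(200):
    pop.sort(key=fitness)
    elite = pop[:10]                            # keep the best schedules
    children = []
    while len(children) < 50:
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, n_units)      # one-point crossover
        child = a[:cut] + b[cut:]
        i = random.randrange(n_units)           # mutation: perturb one setpoint
        child[i] = min(p_max[i], max(0.0, child[i] + random.gauss(0, 2)))
        children.append(child)
    pop = elite + children

best = min(pop, key=fitness)
print("dispatch:", [round(p, 1) for p in best], "cost:", round(fitness(best), 2))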
Abstract:
In recent decades, all over the world, competition in the electric power sector has deeply changed the way this sector's agents play their roles. In most countries, the deregulation process was conducted in stages, beginning with the clients of higher voltage levels and larger electricity consumption, and later extended to all electricity consumers. The sector liberalization and the operation of competitive electricity markets were expected to lower prices and improve quality of service, leading to greater consumer satisfaction. Transmission and distribution remain noncompetitive business areas, due to the large infrastructure investments required. However, the industry has yet to clearly establish the best business model for transmission in a competitive environment. After generation, the electricity needs to be delivered to the electrical system nodes where demand requires it, taking into consideration transmission constraints and electrical losses. If the amount of power flowing through a certain line is close to or surpasses the safety limits, then cheap but distant generation might have to be replaced by more expensive, closer generation to reduce the exceeded power flows. In a congested area, the optimal price of electricity rises to the marginal cost of the local generation or to the level needed to ration demand to the amount of available electricity. Even without congestion, some power is lost in the transmission system through heat dissipation, so prices reflect that it is more expensive to supply electricity at the far end of a heavily loaded line than close to a generation site. Locational marginal pricing (LMP), resulting from bidding competition, represents electrical and economic values at nodes or in areas that may provide economic indicator signals to the market agents. This article proposes a data-mining-based methodology that helps characterize zonal prices in real power transmission networks. To test our methodology, we used an LMP database from the California Independent System Operator for 2009 to identify economic zones. (CAISO is a nonprofit public benefit corporation charged with operating the majority of California's high-voltage wholesale power grid.) To group the buses into typical classes, each representing a set of buses with approximately the same LMP value, we used two-step and k-means clustering algorithms. By analyzing the various LMP components, our goal was to extract knowledge to support the ISO in investment and network-expansion planning.
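As a hedged sketch of the clustering step described above, the snippet below groups buses by their LMP profiles with k-means. The synthetic LMP matrix, the number of clusters and the use of scikit-learn are illustrative assumptions; the paper's CAISO database and its two-step clustering are not reproduced here.

# Sketch: group buses into classes with approximately the same LMP behaviour (assumed data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# rows = buses, columns = hourly LMP values over one day (synthetic stand-in for real data)
lmp = np.vstack([
    rng.normal(30, 2, size=(40, 24)),   # cheap, uncongested zone
    rng.normal(55, 4, size=(25, 24)),   # congested zone with higher prices
    rng.normal(42, 3, size=(35, 24)),   # intermediate zone
])

X = StandardScaler().fit_transform(lmp)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for k in range(3):
    members = lmp[labels == k]
    print(f"zone {k}: {len(members)} buses, mean LMP {members.mean():.1f} $/MWh")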
Abstract:
Currently, power systems (PS) already accommodate a substantial penetration of distributed generation (DG) and operate in competitive environments. In the future, as a result of liberalisation and political regulations, PS will have to deal with large-scale integration of DG and other distributed energy resources (DER), such as storage, and provide market agents with the means to ensure a flexible and secure operation. This cannot be done with the traditional PS operational tools used today, such as the rather restricted Supervisory Control and Data Acquisition (SCADA) information systems [1]. The trend to use local generation in the active operation of the power system requires new solutions for the data management system. The relevant standards have been developed separately in the last few years, so there is a need to unify them in order to obtain a common and interoperable solution. For distribution operation, the CIM models described in IEC 61968/70 are especially relevant. In Europe, dispersed and renewable energy resources (D&RER) are mostly operated without remote control mechanisms and feed the maximum amount of available power into the grid. To improve network operation performance, the idea of virtual power plants (VPP) will become a reality, and in the future the power generation of D&RER will be scheduled with high accuracy. In order to realize VPP decentralized energy management, communication facilities with standardized interfaces and protocols are needed. IEC 61850 is suitable to serve as a general standard for all communication tasks in power systems [2]. The paper deals with international activities and experiences in the implementation of a new data management and communication concept in the distribution system. The difficulties in coordinating the communication and data management standards, which were developed in parallel and are therefore inconsistent, are first addressed. The upcoming unification work, which takes into account the growing role of D&RER in the PS, is then presented. It is possible to overcome the lag in current practical experience using new tools for creating and maintaining the CIM data and for simulating the IEC 61850 protocol; a prototype of such tools is presented in the paper. The origin and accuracy of the data requirements depend on the data use (e.g., operation or planning), so some remarks concerning the definition of the digital interface incorporated in the merging unit concept, from the power utility point of view, are also presented. To summarize, some required future work is identified.
Abstract:
In the energy management of the isolated operation of a small power system, the economic scheduling of the generation units is a crucial problem, and the right timing can maximize the performance of the supply. The optimal operation of a wind turbine, a solar unit, a fuel cell and a storage battery is determined by a mixed-integer linear programming model implemented in the General Algebraic Modeling System (GAMS). A Virtual Power Producer (VPP) can optimally operate the generation units while assuring the good functioning of the equipment, including maintenance, operation cost, and generation measurement and control. A central control system allows the VPP to manage the optimal generation and the load control. The application of the methodology to a real case study at Budapest Tech demonstrates the effectiveness of this method in solving the optimal isolated dispatch of the DC micro-grid renewable energy park. The problem converged in 0.09 s and 30 iterations.
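To make the optimization formulation concrete, the sketch below poses a single-period linear dispatch of the four units mentioned above (wind, solar, fuel cell, battery) as a small problem in Python/PuLP. The paper's model is a mixed-integer linear program written in GAMS; the costs, limits and demand used here are illustrative assumptions.

# Toy single-period dispatch of a DC micro-grid (assumed data; the paper uses a MILP in GAMS).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

avail = {"wind": 6.0, "solar": 4.0, "fuel_cell": 5.0, "battery": 3.0}   # kW available, assumed
cost = {"wind": 0.0, "solar": 0.0, "fuel_cell": 0.30, "battery": 0.10}  # cost per kWh, assumed
demand = 10.0                                                            # kW load in this period

prob = LpProblem("isolated_dispatch", LpMinimize)
p = {u: LpVariable(f"p_{u}", lowBound=0, upBound=avail[u]) for u in avail}

prob += lpSum(cost[u] * p[u] for u in avail)          # minimize generation cost
prob += lpSum(p[u] for u in avail) == demand          # supply the isolated load exactly

prob.solve()
for u in avail:
    print(u, round(value(p[u]), 2), "kW")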
Abstract:
This paper presents an integrated system that helps both retail companies and electricity consumers define the best retail contracts and tariffs. The integrated system is composed of a Decision Support System (DSS) based on a Consumer Characterization Framework (CCF). The CCF relies on data mining techniques applied to large amounts of consumption data to obtain useful knowledge about electricity consumers. This knowledge is acquired through an innovative and systematic approach able to identify different consumer classes, each represented by a load profile, and to characterize them using decision trees. The framework generates inputs for the knowledge base and the database of the DSS. The rule sets derived from the decision trees are integrated in the knowledge base of the DSS, while the load profiles, together with the information about contracts and electricity prices, form its database. The DSS is able to classify different consumers, present their load profiles and test different electricity tariffs and contracts. The final outputs of the DSS are a comparative economic analysis of the different contracts and advice on the most economic contract for each consumer class. The presentation of the DSS is completed with an application example using a real database of consumers from the Portuguese distribution company.
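As a hedged sketch of the decision-tree step described above, the snippet below trains a small classifier that assigns consumers to classes from simple consumption features and prints the extracted rules, the kind of rule set that would feed a DSS knowledge base. The features, class labels and use of scikit-learn are illustrative assumptions, not the paper's framework.

# Sketch: derive classification rules for consumer classes from consumption features (assumed data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# features per consumer: [average daily consumption (kWh), night-to-day consumption ratio]
residential = np.column_stack([rng.normal(8, 2, 50), rng.normal(0.6, 0.1, 50)])
commercial = np.column_stack([rng.normal(40, 8, 50), rng.normal(0.2, 0.05, 50)])
industrial = np.column_stack([rng.normal(120, 20, 50), rng.normal(0.9, 0.1, 50)])

X = np.vstack([residential, commercial, industrial])
y = ["residential"] * 50 + ["commercial"] * 50 + ["industrial"] * 50

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["daily_kWh", "night_day_ratio"]))  # extracted rule set
print(tree.predict([[35.0, 0.25]]))   # classify a new consumer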
Abstract:
Electric vehicles (EV) offer great potential to address the integration of renewable energy sources (RES) in the power grid, and thus to reduce the dependence on oil as well as greenhouse gas (GHG) emissions. The high share of wind energy in the Portuguese energy mix expected for 2020 can lead to eventual curtailment, especially during the winter, when high levels of hydro generation occur. In this paper, a methodology based on unit commitment and economic dispatch is implemented, and a hydro-thermal dispatch is performed in order to evaluate the impact of EV integration into the grid. Results show that the considered 10% penetration of EVs in the Portuguese fleet would increase the load by 3% and would not integrate a significant amount of wind energy, because curtailment is already low in the absence of EVs. According to the results, the EVs are charged mostly with thermal generation, and the associated emissions are much higher than if they were calculated based on the generation mix.
Abstract:
Master's degree in Electrical and Computer Engineering
Abstract:
Thesis submitted in fulfilment of the requirements for the Degree of Master in Electronic and Telecommunications Engineering
Abstract:
The development of pedestrian localization systems based on dead reckoning techniques has proven to be an expanding field, in academia and beyond. Several solutions have been created; however, not all of them can easily reach the market, either because of expensive hardware or because the system itself was developed with a particular scenario in mind. INPERLYS is a system that aims to provide a pedestrian localization solution regardless of the scenario, using resources that can easily be deployed. It uses a dead reckoning technique to provide the user's location. In outdoor scenarios, a GPS receiver supplies the user's position, giving the system an absolute position. When GPS cannot be used, a MEMS sensor and a compass are used to obtain positions relative to the last valid GPS fix. The ZigBee™ wireless communication protocol was used to interconnect all the sensors; this protocol was chosen because of factors such as its low power consumption and low cost. The system thus becomes easy and comfortable to use, unlike similar systems that use cables to interconnect the different components. The MEMS accelerometer reads the horizontal acceleration at foot level. This acceleration is used by an acceleration-pattern recognition algorithm to detect the steps taken. After a step is detected, the maximum acceleration recorded during that step is sent to the coordinator in order to obtain the displacement travelled. Some tests were performed to assess the efficiency of INPERLYS. The tests were carried out on a flat course, at normal walking speed and with normal strides. It was found that, at this stage, the system's performance can still be improved, both in the management of the communications and in the recognition of the horizontal acceleration pattern that is essential for step detection. Nevertheless, the system is able to provide the position through GPS whenever GPS is available, and is able to provide the orientation of the movement.
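As a hedged illustration of the step-detection stage described above, the sketch below finds steps as peaks in a horizontal acceleration signal and reports the maximum acceleration of each step. The synthetic signal, the sampling rate and the thresholds are assumptions for illustration, not the INPERLYS pattern-recognition algorithm.

# Sketch: detect steps as peaks in foot-level horizontal acceleration (assumed signal and thresholds).
import numpy as np
from scipy.signal import find_peaks

fs = 50.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# synthetic walking signal: roughly two steps per second plus sensor noise
accel = 1.5 * np.maximum(0, np.sin(2 * np.pi * 2.0 * t)) ** 3 + 0.05 * np.random.randn(t.size)

# a step is a peak above a threshold, with a minimum spacing between consecutive steps
peaks, _ = find_peaks(accel, height=0.8, distance=int(0.3 * fs))

print(f"steps detected: {len(peaks)}")
for i in peaks[:5]:
    # the per-step maximum acceleration would be sent to the coordinator to estimate the displacement
    print(f"t = {t[i]:.2f} s, peak acceleration = {accel[i]:.2f} (arbitrary units)")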
Abstract:
A new fluorescent sensor for nitric oxide (NO) is presented, based on the reaction of NO with a non-fluorescent substance, reduced fluoresceinamine, which produces the highly fluorescent fluoresceinamine. Using a portable, homemade stabilized light source consisting of a 450 nm LED and fiber optics to guide the light, the sensor responds linearly within seconds over the NO concentration range of about 10–750 µM, with a limit of detection (LOD) of about 1 µM. The system generated precise intensity readings, with a relative standard deviation of less than 1%. The suitability of the sensor was assessed by monitoring the NO generated either by the nitrous acid decomposition reaction or by a NO-releasing compound. At relatively long incubation times the sensor also responds quantitatively to hydrogen peroxide and potassium superoxide; however, when transient signal measurements are used, these species do not interfere.
Abstract:
In this paper, a biosensor based on a glassy carbon electrode (GCE) was used to evaluate the total antioxidant capacity (TAC) of flavours and flavoured waters. The biosensor was constructed by immobilising the purine bases guanine and adenine on a GCE. Square wave voltammetry (SWV) was selected for the development of this methodology. The damage caused to the DNA biosensor by the reactive oxygen species (ROS) superoxide radical (O2·−), generated by the xanthine/xanthine oxidase (XOD) system, was evaluated. The DNA biosensor suffered oxidative lesions when in contact with O2·−, and there was less oxidative damage when reactive antioxidants were added. The antioxidants used in this work were ascorbic acid, gallic acid, caffeic acid, coumaric acid and resveratrol. These antioxidants are capable of scavenging the superoxide radical and therefore protect the purine bases immobilized on the GCE surface. The results demonstrated that the DNA-based biosensor is suitable for the rapid assessment of the TAC of beverages.
Abstract:
In this study, a method for the electrochemical quantification of the total antioxidant capacity (TAC) of beverages was developed. The method is based on oxidative damage to the purine bases, adenine or guanine, immobilized on a glassy carbon electrode (GCE) surface. The oxidative lesions on the DNA bases were promoted by the sulfate radical generated by the persulfate/iron(II) system. The presence of antioxidants in the reactive system protected the DNA bases immobilized on the GCE by scavenging the sulfate radical. Square-wave voltammetry (SWV) was the electrochemical technique used to perform this study. The efficiencies of five antioxidants (ascorbic acid, gallic acid, caffeic acid, coumaric acid and resveratrol) in scavenging the sulfate radical and, therefore, their ability to protect the purine bases immobilized on the GCE were investigated. The results demonstrated that the purine-based biosensor is suitable for the rapid assessment of the TAC of flavors and flavored waters.
Abstract:
An optical fiber sensor for Hg(II) in aqueous solution, based on sol–gel immobilized carbon dot nanoparticles functionalized with PEG200 and N-acetyl-l-cysteine, is described. The sol–gel method generated a thin (about 750 nm), homogeneous and smooth (roughness of 2.7 ± 0.7 Å) film that immobilizes the carbon dots and allows reversible sensing of Hg(II) in aqueous solution. A fast (less than 10 s), reversible and stable sensor system was obtained (the fluorescence intensity measurements oscillate by less than 1% after several calibration cycles). The sensor allows the detection of sub-micromolar concentrations of Hg(II) in aqueous solution. The fluorescence intensity of the immobilized carbon dots is quenched by the presence of Hg(II), with a Stern-Volmer constant (pH = 6.8) of 5.3×10⁵ M⁻¹.
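As a hedged sketch of how a Stern-Volmer constant such as the one reported above can be obtained, the snippet below fits F0/F = 1 + Ksv·[Q] to simulated quenching data. The fluorescence values are synthetic; only the reported Ksv of 5.3×10⁵ M⁻¹ is taken from the abstract.

# Sketch: estimate the Stern-Volmer constant Ksv from fluorescence quenching data.
# Model: F0 / F = 1 + Ksv * [Q]  (simulated data; Ksv = 5.3e5 M^-1 taken from the abstract)
import numpy as np

ksv_true = 5.3e5                                  # M^-1, reported value used to simulate data
q = np.linspace(0, 2e-6, 10)                      # Hg(II) concentrations (M), assumed range
f0 = 1000.0                                       # fluorescence without quencher (arbitrary units)
f = f0 / (1 + ksv_true * q) * (1 + 0.005 * np.random.randn(q.size))   # noisy quenched signal

# Stern-Volmer plot: F0/F versus [Q] is a straight line with slope Ksv and intercept 1
slope, intercept = np.polyfit(q, f0 / f, 1)
print(f"fitted Ksv = {slope:.2e} M^-1 (intercept {intercept:.2f})")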