960 results for Real Electricity Markets Data
Abstract:
Demand response is regarded as an essential resource for fully achieving the operating benefits of smart grids, namely in the context of competitive markets and of the increasing use of renewable-based energy sources. Some advantages of Demand Response (DR) programs and of smart grids can only be achieved through the implementation of Real-Time Pricing (RTP). The integration of the expected increasing amounts of distributed energy resources, as well as of new players, requires new approaches to the changing operation of power systems. The methodology proposed in this paper aims at minimizing the operation costs of a distribution network operated by a virtual power player that manages the available energy resources, focusing on hour-ahead re-scheduling. When wind power generation is lower than the day-ahead forecast, demand response is used to minimize the impacts of the change in wind availability. In this way, consumers actively participate in regulation-up and spinning-reserve ancillary services through demand response programs. Real-time pricing is also applied. The proposed model is especially useful when the actual and day-ahead wind forecasts differ significantly. Its application is illustrated by implementing the characteristics of a real resource-conditions scenario in a 33-bus distribution network with 32 consumers and 66 distributed generators.
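The hour-ahead re-scheduling idea above can be sketched as covering a wind shortfall with the cheapest mix of demand response and reserve offers. This is a minimal merit-order sketch, not the paper's optimization model; all resource names, prices, and capacities are hypothetical.

```python
# Minimal sketch (not the paper's model): covering an hour-ahead wind
# shortfall with demand response and reserve offers by merit order.
# Resource names, prices, and capacities below are hypothetical.

def redispatch(shortfall_mw, offers):
    """Greedy merit-order dispatch; offers = [(name, price_eur_mwh, cap_mw)]."""
    dispatch, cost = [], 0.0
    for name, price, cap in sorted(offers, key=lambda o: o[1]):
        if shortfall_mw <= 0:
            break
        used = min(cap, shortfall_mw)
        dispatch.append((name, used))
        cost += used * price
        shortfall_mw -= used
    return dispatch, cost

# Day-ahead forecast 40 MW, actual wind 25 MW -> 15 MW shortfall.
offers = [("dr_consumer_block", 45.0, 8.0),   # demand response
          ("spinning_reserve", 60.0, 10.0),
          ("regulation_up", 80.0, 5.0)]
plan, cost = redispatch(40.0 - 25.0, offers)
```

In the full model the dispatch would be a cost-minimizing optimization over the network; the greedy merit order stands in for it here only to make the re-scheduling step concrete.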
Abstract:
This paper addresses the characterization of medium-voltage (MV) electric power consumers based on a data clustering approach. The aim is to identify typical load profiles by selecting the best partition of a power consumption database from a pool of partitions produced by several clustering algorithms. The best partition is selected using several cluster validity indices. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers' behavior. The data-mining-based methodology presented throughout the paper consists of several steps, namely the data pre-processing phase, the application of clustering algorithms, and the evaluation of the quality of the partitions. To validate our approach, a case study with a real database of 1,022 MV consumers was used.
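The partition-selection step can be illustrated by scoring candidate clusterings with a validity index and keeping the best one. This sketch uses the mean silhouette as the index and toy two-feature load profiles; the paper uses several indices and real consumption data.

```python
# Minimal sketch of the partition-selection idea: score candidate
# clusterings of load profiles with a validity index (here the mean
# silhouette) and keep the best. Data and labels are toy values.

def silhouette(points, labels):
    """Mean silhouette of a partition; points are feature tuples."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    scores = []
    for p, l in zip(points, labels):
        own = [dist(p, q) for q in clusters[l] if q is not p]
        if not own:                       # singleton cluster contributes 0
            scores.append(0.0)
            continue
        a = sum(own) / len(own)           # mean intra-cluster distance
        b = min(sum(dist(p, q) for q in c) / len(c)
                for k, c in clusters.items() if k != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two-feature profiles: (mean load, peak load), normalized.
profiles = [(0.2, 0.3), (0.25, 0.35), (0.8, 0.9), (0.85, 0.95)]
candidates = {"2 clusters": [0, 0, 1, 1], "mixed": [0, 1, 0, 1]}
best = max(candidates, key=lambda k: silhouette(profiles, candidates[k]))
```

Each clustering algorithm in the pool would contribute one or more candidate partitions; the index then picks the partition that best separates the typical load profiles.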
Abstract:
Artificial Intelligence has been applied to dynamic games for many years. The ultimate goal is to create virtual entities whose responses display human-like reasoning in the definition of their behaviors. However, virtual entities that can be mistaken for real persons are still very far from being fully achieved. This paper presents an adaptive, learning-based methodology for defining players' profiles, with the purpose of supporting the decisions of virtual entities. The proposed methodology is based on reinforcement learning algorithms, which are responsible for choosing, over time and as experience is gathered, the most appropriate from a set of different learning approaches. These learning approaches have very distinct natures, ranging from mathematical to artificial intelligence and data analysis methodologies, so that the methodology is prepared for very distinct situations. In this way it is equipped with a variety of tools, each of which can be useful in a given situation. The proposed methodology is first tested on two simpler computer-versus-human games: the rock-paper-scissors game and a penalty-shootout simulation. Finally, the methodology is applied to the definition of action profiles of electricity market players, who compete in a dynamic game-wise environment in which the main goal is to achieve the highest possible profits in the market.
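The core idea of using reinforcement learning to choose among distinct learning approaches can be sketched as an epsilon-greedy bandit over prediction strategies in rock-paper-scissors. This is a minimal stand-in, not the paper's algorithm; the two "expert" strategies and their names are illustrative.

```python
import random

# Minimal sketch (not the paper's algorithm): an epsilon-greedy bandit
# learns which of several simple opponent-prediction strategies earns
# the most reward, i.e. it chooses among distinct learning approaches.

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def repeat_last(history):      # assumes the opponent repeats the last move
    return history[-1] if history else "rock"

def cycle_next(history):       # assumes the opponent cycles r -> p -> s
    return BEATS[history[-1]] if history else "rock"

def play(opponent_moves, experts, eps=0.1, seed=0):
    rng = random.Random(seed)
    value = {e.__name__: 0.0 for e in experts}   # running mean reward
    count = {e.__name__: 0 for e in experts}
    wins, history = 0, []
    for opp in opponent_moves:
        if rng.random() < eps:                   # explore
            expert = rng.choice(experts)
        else:                                    # exploit best so far
            expert = max(experts, key=lambda e: value[e.__name__])
        move = BEATS[expert(history)]            # counter the predicted move
        reward = 1.0 if move == BEATS[opp] else 0.0
        wins += int(reward)
        n = count[expert.__name__] = count[expert.__name__] + 1
        value[expert.__name__] += (reward - value[expert.__name__]) / n
        history.append(opp)
    return wins

# Against an opponent that always plays "rock", repeat_last should dominate.
wins = play(["rock"] * 200, [repeat_last, cycle_next])
```

The paper's selector works over richer learning approaches (mathematical, AI, and data-analysis based), but the selection mechanics follow the same gather-experience-then-exploit pattern.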
Abstract:
Electric power networks, namely distribution networks, have been undergoing several changes in recent years due to changes in power systems operation towards the implementation of smart grids. Several approaches to the operation of resources have been introduced, such as demand response, making use of the new capabilities of smart grids. In the initial stages of smart grid implementation, reduced amounts of data are generated, namely consumption data. The methodology proposed in the present paper uses demand response consumer performance evaluation methods to determine the expected consumption for a given consumer. Potential commercial losses are then identified using monthly historical consumption data. Real consumption data are used in the case study to demonstrate the application of the proposed method.
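The expected-consumption baseline and loss-flagging steps can be sketched as follows. This is a minimal illustration with a simple historic-mean baseline and a hypothetical tolerance; the paper's baseline comes from demand response performance evaluation methods.

```python
# Minimal sketch of the idea (hypothetical baseline and threshold):
# estimate a consumer's expected monthly consumption from historic
# data and flag months whose metered value falls far below it as
# potential commercial (non-technical) losses.

def flag_potential_losses(history_kwh, recent_kwh, tolerance=0.4):
    """Flag months whose consumption drops > tolerance below the baseline."""
    baseline = sum(history_kwh) / len(history_kwh)
    return [(month, value) for month, value in recent_kwh
            if value < (1.0 - tolerance) * baseline]

history = [310, 295, 320, 305, 300, 315]          # 6 months, kWh
recent = [("Jul", 298), ("Aug", 120), ("Sep", 305)]
suspect = flag_potential_losses(history, recent)  # flags August
```

A flagged month is only a candidate for inspection; confirming an actual commercial loss requires further investigation of the metering point.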
Abstract:
This paper presents the first phase of the redevelopment of the Electric Vehicle Scenario Simulator (EVeSSi) tool. A new methodology to generate traffic demand scenarios for the Simulation of Urban MObility (SUMO) urban traffic simulator is described. This methodology is based on a Portuguese census database and generates a synthetic population for a given area under study. A realistic case study of a Portuguese city, Vila Real, is assessed. For this area, the road network was created along with a synthetic population and public transport. Traffic results were obtained, and an electric bus fleet was evaluated under the assumption that the current fleet would be replaced in the near future. The energy required to charge the electric fleet overnight was estimated in order to evaluate the impact it would have on the local electricity network.
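The synthetic-population step can be sketched as weighted sampling from census shares. The categories, weights, and zones below are hypothetical, not the Vila Real census data, and real generation for SUMO would also assign trips and departure times.

```python
import random

# Minimal sketch of census-weighted synthetic population generation
# (hypothetical categories and shares, not the actual census data):
# individuals are sampled in proportion to their census share, then
# assigned a home zone.

def synthesize_population(n, census_shares, zones, seed=42):
    rng = random.Random(seed)
    types, weights = zip(*census_shares.items())
    return [{"household": rng.choices(types, weights)[0],
             "zone": rng.choice(zones)}
            for _ in range(n)]

census_shares = {"worker_with_car": 0.55, "worker_no_car": 0.25,
                 "student": 0.20}
people = synthesize_population(1000, census_shares, ["centre", "suburb"])
share = sum(p["household"] == "worker_with_car" for p in people) / 1000
```

Each synthetic individual would then be expanded into an origin-destination trip for the SUMO demand file.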
Abstract:
In Angola, only about 30% of the population has access to electricity, a figure that drops below 10% in the most remote rural areas. The problem is aggravated by the fact that, in most cases, the existing infrastructure is damaged or has not kept pace with regional development. This is particularly true in the Angolan capital, Luanda, which, despite being the smallest province of Angola, currently has the highest population density. With a population of about 5 million inhabitants, there are not only frequent power supply failures but also a considerable share of municipalities that the electricity grid has not yet reached. The Angolan government, in its effort to grow and exploit the country's enormous potential, has defined the energy sector as one of the critical factors for the country's sustainable development, making it one of its priority axes until 2016. There are clear objectives regarding the rehabilitation and expansion of the electricity sector infrastructure, increasing the country's installed capacity and creating an adequate national grid, with the aim not only of improving the quality and reliability of the existing network but also of extending it. This dissertation comprised the collection of real data on Luanda's electricity distribution network, the analysis and planning of the most pressing expansion needs, the selection of viable locations for new substations, the appropriate modeling of the real problem, and the proposal of an optimal solution for expanding the existing network. After analyzing different mathematical models applied to the distribution network expansion problem found in the literature, a mixed-integer linear programming (MILP) model was chosen and proved adequate.
Once the problem model was developed, it was solved using the Analytic Solver and CPLEX optimization software. To validate the results, the network solution was implemented in the PowerWorld 8.0 OPF simulator, which allows the simulation of the system's power flow operation.
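The substation-siting sub-problem can be illustrated by brute-force enumeration on a tiny instance. This is not the dissertation's MILP formulation (which is solved with CPLEX); the build costs, coordinates, and the fixed number of sites are hypothetical, and enumeration is only feasible for very small cases.

```python
from itertools import combinations

# Minimal sketch of the siting sub-problem (not the dissertation's MILP):
# choose which candidate substations to build so that every load point
# is served, minimizing build cost plus a distance-weighted supply cost.
# Costs, coordinates, and the number of sites are hypothetical.

def best_siting(loads, candidates, n_sites):
    """Enumerate all subsets of size n_sites; fine for small instances."""
    def supply_cost(subset):
        total = 0.0
        for lx, ly, demand in loads:
            d = min(((lx - cx) ** 2 + (ly - cy) ** 2) ** 0.5
                    for _, cx, cy, _ in subset)   # nearest built site
            total += demand * d
        return total
    best = None
    for subset in combinations(candidates, n_sites):
        cost = sum(build for *_, build in subset) + supply_cost(subset)
        if best is None or cost < best[0]:
            best = (cost, [name for name, *_ in subset])
    return best

loads = [(0, 0, 10), (10, 0, 8), (5, 8, 6)]            # x, y, MW
candidates = [("S1", 1, 1, 20.0), ("S2", 9, 1, 25.0), ("S3", 5, 7, 30.0)]
cost, chosen = best_siting(loads, candidates, 2)
```

A MILP formulation replaces the enumeration with binary build variables and assignment constraints, which is what makes realistic instance sizes tractable for solvers such as CPLEX.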
Abstract:
The transition from information to knowledge is becoming faster, and so are the inputs that influence social and political practices. The dissemination of information is now a determinant of territorial competitiveness, and both the public and private sectors benefit greatly when the data-information-knowledge value chain repeats itself through space and time. Mankind now depends on the creation and diffusion of good, reliable information. Speed also matters: the greater the speed, the faster the opportunities in global markets. Information must be an input for knowledge and, obviously, for decision-making, so the power of information is unquestionable. This paper focuses on concepts such as information, knowledge, and other, more geographical ones, and tries to explain how territories change from real to virtual. The knowledge society emerges in an evolutionary context in which information dissemination is wider and technological potential overrides traditional notions of Geography. To understand the mutations over territories, their causes and consequences, the Geography of the Knowledge Society emerges as a new discipline within Geography, with a special concern for modern society and socio-economic development models.
Abstract:
This paper studies the statistical distributions of worldwide earthquakes from 1963 to 2012. A Cartesian grid dividing the Earth into geographic regions is considered. Entropy and the Jensen–Shannon divergence are used to analyze and compare real-world data. Hierarchical clustering and multidimensional scaling techniques are adopted for data visualization. Entropy-based indices have the advantage of leading to a single parameter expressing the relationships within the seismic data. Classical and generalized (fractional) entropy and Jensen–Shannon divergence are tested. The generalized measures lead to a clear identification of patterns embedded in the data and contribute to a better understanding of earthquake distributions.
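The two classical measures named above can be written down directly for discrete histograms. This sketch shows their standard (non-fractional) definitions on hypothetical magnitude-bin shares for two grid regions; the paper's generalized fractional variants are not reproduced here.

```python
import math

# Standard Shannon entropy and Jensen–Shannon divergence for discrete
# distributions (the classical, non-fractional forms used as a baseline
# in the paper). The region histograms are hypothetical.

def shannon_entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """Symmetric, bounded by ln(2) for natural-log units."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Earthquake counts for two grid regions, normalized to distributions.
region_a = [0.70, 0.20, 0.10]      # hypothetical magnitude-bin shares
region_b = [0.10, 0.20, 0.70]
jsd = jensen_shannon(region_a, region_b)
```

Pairwise JSD values between all grid regions give the distance matrix that hierarchical clustering and multidimensional scaling then visualize.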
Abstract:
Complex industrial plants exhibit multiple interactions among their smaller parts and with human operators. Failure in one part can propagate across subsystem boundaries and cause a serious disaster. This paper analyzes industrial accident data series from the perspective of dynamical systems. First, we process real-world data and show that the statistics of the number of fatalities reveal features that are well described by power-law (PL) distributions. For early years, the data reveal double-PL behavior, while for more recent periods a single PL fits the experimental data better. Second, we analyze the entropy of the data series statistics over time. Third, we use the Kullback–Leibler divergence to compare the empirical data, and multidimensional scaling (MDS) techniques for data analysis and visualization. Entropy-based analysis is adopted to assess complexity, having the advantage of yielding a single parameter to express relationships within the data. The classical and generalized (fractional) entropy and Kullback–Leibler divergence are used. The generalized measures allow a clear identification of patterns embedded in the data.
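Fitting a single power law to a heavy-tailed series can be sketched with the standard continuous maximum-likelihood (Hill) estimator. The sample below is synthetic, drawn from a known Pareto law to check the estimator, not the accident data from the paper.

```python
import math, random

# Minimal sketch of fitting a single power law p(x) ~ x^(-alpha) by
# maximum likelihood (the continuous Hill estimator). The data are
# synthetic, not the industrial accident series from the paper.

def powerlaw_alpha(data, xmin):
    """MLE: alpha = 1 + n / sum(ln(x / xmin)) over x >= xmin."""
    tail = [x for x in data if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic Pareto(alpha = 2.5) sample via inverse-transform sampling:
# x = xmin * u^(-1 / (alpha - 1)).
rng = random.Random(1)
sample = [(1.0 - rng.random()) ** (-1.0 / 1.5) for _ in range(20000)]
alpha = powerlaw_alpha(sample, xmin=1.0)
```

The double-PL regime reported for early years would need a piecewise fit with a crossover point; the single-PL estimator above covers the recent-period case.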
Abstract:
Twenty-four whole-blood and serum samples were drawn from an eight-year-old heart transplant child during a 36-month follow-up. EBV serology was positive for VCA-IgM and IgG, and negative for EBNA-IgG, at the age of five, when the child presented with signs and symptoms suggestive of acute infectious mononucleosis. After 14 months, the serological parameters were positive VCA-IgG and EBNA-IgG and negative VCA-IgM. This serological pattern has been maintained since then, even during episodes suggestive of EBV reactivation. PCR amplified a specific DNA fragment from the EBV gp220 (detection limit of 100 viral copies). All twenty-four whole-blood samples yielded positive results by PCR, while 12 out of 24 serum samples were positive. We aimed to analyze whether detection of EBV-DNA in serum samples by PCR was associated with overt disease, as indicated by the need for antiviral treatment and hospitalization. Statistical analysis showed agreement between the two parameters, evidenced by the Kappa test (value 0.750; p < 0.001). We concluded that detection of EBV-DNA in serum samples of immunosuppressed patients might be used as a laboratory marker of active EBV disease when real-time PCR or another quantitative method is not available.
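The agreement statistic reported above (Cohen's kappa) is straightforward to compute from a 2x2 contingency table. The cell counts below are hypothetical, not the study's actual data, though they are chosen so that 24 samples yield the same kappa of 0.75.

```python
# Cohen's kappa for a 2x2 agreement table. The counts are illustrative,
# not the study's data: a = both positive, b = PCR+/clinical-,
# c = PCR-/clinical+, d = both negative.

def cohens_kappa(a, b, c, d):
    n = a + b + c + d
    po = (a + d) / n                                  # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

kappa = cohens_kappa(10, 2, 1, 11)   # -> 0.75
```

Kappa corrects raw agreement for the agreement expected by chance, which is why it is preferred over simple percent agreement for this kind of diagnostic comparison.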
Abstract:
This dissertation presents a study on forecasting the load diagram of substations of the Portuguese National Electricity Grid (Rede Elétrica Nacional, REN) using neural networks, in order to assess the viability of the method for future studies. Electricity is nowadays an essential good and plays a fundamental role, both for the country's economy and for individual comfort and satisfaction. With the development of the electricity sector and the growing number of producers, load diagram forecasting becomes important, contributing to the efficiency of companies. The goal of this dissertation is to use artificial neural networks (ANN) to build a network capable of forecasting load diagrams, offering the possibility of reducing costs and improving quality and reliability. Throughout the work, load data (in MW) obtained from REN for the Prelada substation are used, together with data such as temperature, humidity, wind, and luminosity, among others. The data were properly processed with the help of Excel. Using MATLAB, neural networks were trained with the Neural Network Fitting Tool in order to obtain a network providing the best results, which was subsequently used to forecast new data. In the forecasting process, using real data from the Prelada and Ermesinde substations for March 2015, it is shown that ANN can produce credible forecasts, even though the forecast is not exact. Thus, for load diagram forecasting, ANN are a good method to use, since they provide the interested party with a good forecast of the consumption and behavior of electric loads. The results obtained at the end of this study are at least satisfactory.
The ANN achieve results close to the expected values, although not exactly equal, owing to the error margin inherent in the neural network's learning.
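The fitting task described above can be sketched with a tiny feed-forward network trained by stochastic gradient descent. This is a pure-Python stand-in for the MATLAB Neural Network Fitting Tool, trained on a synthetic normalized daily load shape rather than the REN data; network size and learning rate are arbitrary choices.

```python
import math, random

# Minimal sketch (not the dissertation's MATLAB setup): a one-hidden-
# layer tanh network fit to a synthetic daily load shape by SGD.

def train_mlp(xs, ys, hidden=4, lr=0.05, epochs=3000, seed=7):
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            out = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = out - y                      # d(loss)/d(out) for 0.5*err^2
            b2 -= lr * err
            for j in range(hidden):
                gh = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * gh * x
                b1[j] -= lr * gh
    def predict(x):
        return sum(w2[j] * math.tanh(w1[j] * x + b1[j])
                   for j in range(hidden)) + b2
    return predict

# Synthetic normalized load shape over the day (hour fraction -> load).
hours = [h / 24 for h in range(24)]
load = [0.5 + 0.3 * math.sin(2 * math.pi * (h - 0.3)) for h in hours]
predict = train_mlp(hours, load)
mse = sum((predict(x) - y) ** 2 for x, y in zip(hours, load)) / 24
```

A real forecasting network would take exogenous inputs (temperature, humidity, wind, luminosity) alongside the hour, as the dissertation does.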
Abstract:
The world is increasingly a global community. The rapid technological development of communication and information technologies allows knowledge to be transmitted in real time. In this context, it is imperative that the most developed countries devise their own strategies to keep the industrial sector up to date and competitive in a dynamic and volatile global market, so as to maintain their competitive capacities and, consequently, preserve a peaceful social state that meets the human and social needs of the nation. The path of competitiveness through technological differentiation in industrialization opens a wider and more innovative field of research. We are already facing a new phase of industrial organization and technology that is beginning to change how we relate to industry, society, and human interaction in the world of work. This thesis develops an analysis of the Industrie 4.0 framework, its challenges and perspectives. It also analyzes how Germany is approaching this future challenge, the competition it expects to win in future global markets, and the domestic concerns felt in its industrial fabric in the face of this challenge, and proposes recommendations for a more effective implementation of its own strategy. The research method consisted of a comprehensive review and strategic analysis of the existing global literature on the topic, directly or indirectly related, in parallel with the analysis of questionnaires and data produced by entities representing industry at the national and global levels. The results of this multilevel analysis allowed us to conclude that this theme is only at the beginning of building the platform to bring the future Internet of Things into the industrial environment of Industrie 4.0.
This dissertation also aims to stimulate a more strategic and operational approach within society as a whole, clarifying the existing weaknesses in this area so that the national strategy can be implemented with effective approaches and planned actions, including a direct training plan and a more efficient path in education on the theme.
Abstract:
Recent embedded processor architectures containing multiple heterogeneous cores and non-coherent caches have renewed attention to the use of Software Transactional Memory (STM) as a building block for developing parallel applications. STM promises to ease concurrent and parallel software development, but relies on the ability to abort conflicting transactions to maintain data consistency, which in turn affects the execution time of tasks carrying transactions. Because of this, the timing behaviour of the task set may not be predictable, so it is crucial to limit the execution time overheads resulting from aborts. In this paper we formalise a FIFO-based algorithm to order the sequence of commits of concurrent transactions. We then propose and evaluate two non-preemptive and one SRP-based fully-preemptive scheduling strategies in order to avoid transaction starvation.
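The FIFO commit-ordering idea can be sketched as a ticket queue: a transaction may only commit when it is the oldest live transaction, so a long-running early transaction is never starved by later conflicting ones. This is a minimal single-threaded illustration of the ordering rule, not the paper's formal algorithm or its scheduling strategies.

```python
from collections import deque

# Minimal sketch of FIFO commit ordering (not the paper's formalised
# algorithm): transactions enqueue on start and may only commit when
# at the head of the queue, preventing starvation of older transactions.

class FifoCommitOrder:
    def __init__(self):
        self.queue = deque()

    def start(self, tx_id):
        self.queue.append(tx_id)          # take a ticket in arrival order

    def try_commit(self, tx_id):
        """Commit succeeds only for the oldest live transaction."""
        if self.queue and self.queue[0] == tx_id:
            self.queue.popleft()
            return True
        return False                      # caller must retry later

order = FifoCommitOrder()
for tx in ("T1", "T2", "T3"):
    order.start(tx)
results = [order.try_commit("T3"), order.try_commit("T1"),
           order.try_commit("T2"), order.try_commit("T3")]
```

In a real STM runtime the failed `try_commit` would block, spin, or trigger an abort-and-retry, which is where the paper's non-preemptive and SRP-based strategies come in.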
Abstract:
Real-time monitoring applications may be used in a wireless sensor network (WSN) and may generate packet flows with strict quality of service requirements in terms of delay, jitter, or packet loss. When strict delays are imposed from source to destination, the packets must be delivered at the destination within an end-to-end delay (EED) hard limit in order to be considered useful. Since WSN nodes are scarce in both processing and energy resources, it is desirable that they transport only useful data, as this contributes to enhancing overall network performance and improving energy efficiency. In this paper, we propose a novel cross-layer admission control (CLAC) mechanism to enhance network performance and increase the energy efficiency of a WSN by avoiding the transmission of potentially useless packets. The CLAC mechanism uses an estimation technique to predict a packet's EED, and decides to forward the packet only if it is expected to meet the EED deadline defined by the application, dropping it otherwise. The results obtained show that CLAC enhances network performance by increasing the useful packet delivery ratio under high network loads and improves energy efficiency under every network load.
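The admission decision at each forwarding node can be sketched in a few lines. The per-hop delay estimate and the numbers below are illustrative; the paper's CLAC mechanism derives its EED estimate from cross-layer information rather than a fixed per-hop value.

```python
# Minimal sketch of the CLAC admission decision (numbers illustrative):
# a node forwards a packet only if the estimated end-to-end delay can
# still meet the application deadline; otherwise it drops the packet,
# saving the energy of useless transmissions.

def should_forward(elapsed_ms, hops_left, est_per_hop_ms, deadline_ms):
    """Forward only if the estimated EED can still meet the deadline."""
    estimated_eed = elapsed_ms + hops_left * est_per_hop_ms
    return estimated_eed <= deadline_ms

# 100 ms application deadline; per-hop delay estimated from recent traffic.
ok = should_forward(elapsed_ms=40, hops_left=3, est_per_hop_ms=15,
                    deadline_ms=100)      # 40 + 45 = 85 ms -> forward
drop = should_forward(elapsed_ms=70, hops_left=3, est_per_hop_ms=15,
                      deadline_ms=100)    # 70 + 45 = 115 ms -> drop
```

Dropping a doomed packet early is what converts useless transmissions into energy savings at every downstream node.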
Abstract:
New arguments are presented proving that successive (repeated) measurements have a memory and actually remember each other. The recognition of this peculiarity can essentially change the existing paradigm associated with conventional observation of the behavior of different complex systems, and lead to the application of an intermediate model (IM). This IM can provide a very accurate fit of the measured data in terms of Prony's decomposition. This decomposition, in turn, contains a small set of fitting parameters relative to the number of initial data points and allows comparing measured data in cases where a "best fit" model based on specific physical principles is absent. As an example, we consider two X-ray diffractometers (denoted in the paper as A, "cheap", and B, "expensive") that are used, after proper calibration, for measuring the same substance (corundum, α-Al2O3). The amplitude-frequency response (AFR) obtained within Prony's decomposition can be used to compare the spectra recorded by the A and B X-ray diffractometers (XRDs) for calibration and other practical purposes. We also prove that the Fourier decomposition corresponds to an "ideal" experiment without memory, while Prony's decomposition corresponds to real measurement and can be fitted within the IM in this case. New statistical parameters describing the properties of experimental equipment (irrespective of their internal "filling") are found. The suggested approach is rather general and can be used for the calibration and comparison of different complex dynamical systems for practical purposes.
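Prony's decomposition itself can be sketched with the standard linear-prediction formulation: solve for the prediction coefficients, take the roots of the characteristic polynomial as the modes, then fit the amplitudes by least squares. This is the textbook method on a noise-free synthetic signal, not the paper's intermediate-model fitting procedure.

```python
import numpy as np

# Standard Prony's method (not the paper's specific IM fitting):
# recover p exponential modes x[n] = sum_k A_k * z_k**n from samples.

def prony(x, p):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # 1) Linear prediction: x[n] = -a1*x[n-1] - ... - ap*x[n-p]
    A = np.column_stack([x[p - m - 1:N - m - 1] for m in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
    # 2) Modes are roots of the characteristic polynomial
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) Amplitudes from a Vandermonde least-squares fit
    V = np.vander(z, N, increasing=True).T
    amps, *_ = np.linalg.lstsq(V, x, rcond=None)
    return z, amps

# Two-mode test signal with known modes 0.9 and -0.7.
n = np.arange(40)
signal = 1.5 * 0.9 ** n + 0.5 * (-0.7) ** n
z, amps = prony(signal, 2)
recon = sum(a * zk ** n for a, zk in zip(amps, z))
err = np.max(np.abs(recon - signal))
```

Unlike the Fourier decomposition, the recovered modes `z_k` need not lie on the unit circle, which is what lets Prony's decomposition capture the damped, memory-bearing behavior the paper associates with real measurements.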