937 results for Distributed data
Abstract:
Presently, power system operation produces huge volumes of data that are still used in a very limited way. Knowledge discovery and machine learning can exploit these data, producing relevant knowledge with a very positive impact. In the context of competitive electricity markets these data are of even higher value, which makes the application of data mining techniques in power systems increasingly relevant. This paper presents two cases based on real data, showing the importance of data mining for supporting demand response and for supporting player strategic behavior.
Abstract:
This paper proposes a methodology based on data mining techniques to support the analysis of zonal prices in real transmission networks. The methodology uses clustering algorithms to group the buses into typical classes, each comprising a set of buses with similar Locational Marginal Price (LMP) values. Two different clustering algorithms have been used to determine the LMP clusters: the two-step and K-means algorithms. Adequacy measurement indices are used to evaluate the quality of the partition and to identify the best-performing algorithm. The paper includes a case study that uses an LMP database from the California ISO (CAISO) to identify zonal prices.
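The K-means grouping of buses by LMP value mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the LMP figures, the number of clusters and the quantile initialisation are all assumptions made for the example.

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Minimal K-means for one-dimensional data such as bus LMPs."""
    # Deterministic quantile initialisation keeps the sketch reproducible.
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign each bus to the nearest cluster centre.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Recompute each centre as the mean LMP of its cluster.
        centers = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Hypothetical LMPs ($/MWh) for nine buses; two price zones are visible.
lmps = np.array([30.1, 29.8, 30.5, 31.0, 55.2, 54.8, 56.1, 30.2, 55.5])
labels, centers = kmeans_1d(lmps, k=2)
```

Each resulting cluster corresponds to one zonal price: buses with similar LMPs share a label, and the cluster centre is the representative price of that zone.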
Abstract:
Currently, power systems (PS) already accommodate a substantial penetration of distributed generation (DG) and operate in competitive environments. In the future, as a result of liberalisation and political regulations, PS will have to deal with large-scale integration of DG and other distributed energy resources (DER), such as storage, and provide market agents with the means to ensure flexible and secure operation. This cannot be done with the traditional PS operational tools used today, such as the rather restricted Supervisory Control and Data Acquisition (SCADA) information systems [1]. The trend to use local generation in the active operation of the power system requires new solutions for the data management system. The relevant standards have been developed separately over the last few years, so they need to be unified in order to obtain a common and interoperable solution. For distribution operation, the CIM models described in IEC 61968/70 are especially relevant. In Europe, dispersed and renewable energy resources (D&RER) are mostly operated without remote control mechanisms and feed the maximum amount of available power into the grid. To improve network operation performance, the idea of virtual power plants (VPP) will become a reality. In the future, the power generation of D&RER will be scheduled with high accuracy. To realize VPP decentralized energy management, communication facilities with standardized interfaces and protocols are needed. IEC 61850 is suitable to serve as a general standard for all communication tasks in power systems [2]. The paper deals with international activities and experiences in the implementation of a new data management and communication concept in the distribution system. The difficulties in coordinating the communication and data management standards, which were developed in parallel and are inconsistent, are addressed first. The upcoming unification work, taking into account the growing role of D&RER in the PS, is then shown. The lag in current practical experience can be overcome using new tools for creating and maintaining the CIM data and for simulating the IEC 61850 protocol; a prototype of such tools is presented in the paper. The origin and the accuracy of the data requirements depend on the data use (e.g. operation or planning), so some remarks on the definition of the digital interface incorporated in the merging unit idea, from the power utility point of view, are also presented. Finally, some required future work is identified.
Abstract:
A supervisory control and data acquisition (SCADA) system is an integrated platform that incorporates several components; it has been applied in power systems and several other engineering fields to monitor, operate and control numerous processes. In future electrical networks, SCADA systems are essential for the intelligent management of resources such as distributed generation and demand response, implemented in the smart grid context. This paper presents a SCADA system for a typical residential house, implemented in the MOVICON™ 11 software. The main objective is to manage residential consumption, reducing or curtailing loads to keep the power consumption at or below a specified setpoint imposed by the customer and the generation availability.
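The load-curtailment logic described above, keeping consumption at or below a setpoint, can be sketched in a few lines. This is a hypothetical priority-based scheme, not the MOVICON™ implementation: the load names, powers and priorities are invented for the example.

```python
# Hypothetical residential loads: (name, power in kW, priority; 1 = most critical).
loads = [("fridge", 0.2, 1), ("lighting", 0.3, 2), ("oven", 2.2, 3),
         ("water_heater", 1.5, 4), ("air_conditioner", 2.0, 5)]

def curtail(loads, setpoint_kw):
    """Switch off the least critical loads (highest priority number first)
    until total consumption is at or below the setpoint."""
    on = sorted(loads, key=lambda l: l[2])      # critical loads first
    total = sum(p for _, p, _ in on)
    shed = []
    # Drop loads from the least critical end while over the setpoint.
    while total > setpoint_kw and on:
        name, p, _ = on.pop()                   # least critical remaining load
        shed.append(name)
        total -= p
    return total, shed

total, shed = curtail(loads, setpoint_kw=3.0)   # sheds AC, then water heater
```

With a 3.0 kW setpoint the sketch sheds the air conditioner and the water heater, leaving 2.7 kW of critical consumption connected.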
Abstract:
In the energy management of a small power system, the scheduling of the generation units is a crucial problem for which adequate methodologies can maximize the performance of the energy supply. This paper proposes an innovative methodology for distributed energy resources management. The optimal operation of distributed generation, demand response and storage resources is formulated as a mixed-integer linear programming (MILP) model and solved by a CPLEX-based deterministic optimization technique implemented in the General Algebraic Modeling System (GAMS). The paper presents a vision for the grids of the future, focusing on conceptual and operational aspects of electrical grids characterized by intensive DG penetration, in the scope of competitive environments, and uses artificial intelligence methodologies to attain the envisaged goals. These concepts are implemented in a computational framework that includes both grid and market simulation.
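The structure of such a MILP, binary commitment decisions plus continuous dispatch, can be illustrated on a toy single-period instance. This is only a sketch: the unit data, prices and demand are assumptions, and brute-force enumeration of the binaries stands in for the CPLEX/GAMS solver.

```python
from itertools import product

# Hypothetical single-period instance: binary variables commit DG units,
# the continuous part dispatches them in merit order, and grid purchases
# cover any shortfall.
units = [  # (name, capacity_kW, fixed_cost, variable_cost_per_kWh)
    ("dg1", 30.0, 2.0, 0.08),
    ("dg2", 50.0, 3.0, 0.06),
]
grid_price = 0.15   # per kWh (assumed)
demand = 60.0       # kWh over the hour (assumed)

def dispatch_cost(status):
    """Cost of meeting demand given an on/off commitment vector."""
    committed = sorted((u for u, s in zip(units, status) if s),
                       key=lambda u: u[3])       # cheapest energy first
    remaining, cost = demand, 0.0
    for name, cap, fixed, var in committed:
        out = min(cap, remaining)
        cost += fixed + var * out
        remaining -= out
    return cost + remaining * grid_price         # grid covers the rest

# Enumerate every on/off combination -- the integer part of the MILP.
best = min(product([0, 1], repeat=len(units)), key=dispatch_cost)
```

Here the cheapest plan commits only dg2 (50 kWh at 0.06/kWh plus its fixed cost) and buys the remaining 10 kWh from the grid; a real model would add storage, demand response and multi-period coupling constraints.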
Abstract:
Demand response can play a very relevant role in future power systems, in which distributed generation can help to assure service continuity in some fault situations. This paper deals with the demand response concept and discusses its use in the context of competitive electricity markets and intensive use of distributed generation. The paper presents DemSi, a demand response simulator that allows studying demand response actions and schemes using a realistic network simulation based on PSCAD. Demand response opportunities are used in an optimized way, considering flexible contracts between consumers and suppliers. A case study demonstrates the advantages of using flexible contracts and optimizing the available generation when there is a lack of supply.
Abstract:
Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale as near as possible to load centers, interchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric system planning engineers in selecting distributed generation locations, taking into account the hourly load changes, i.e. the daily load cycle. The hourly load centers, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to fit the best-matching probability distribution. This distribution is then used to determine the maximum-likelihood perimeter of the area in which each distributed generation source should preferably be located by the planning engineers. This takes into account, for example, the availability and cost of land lots, factors of special relevance in urban areas, as well as several obstacles important for the final selection of the candidate distributed generation points. The proposed methodology, programmed in MATLAB, has been applied to a real case assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund's exponential distribution and the Weibull probability distribution. Results are presented and discussed for a realistic case and demonstrate the ability of the proposed methodology to efficiently determine the best locations for distributed generation and the corresponding distribution networks.
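For the bivariate Gaussian case, fitting the weighted load-center points and testing whether a candidate site falls inside the maximum-likelihood perimeter can be sketched as below (in Python rather than the paper's MATLAB). The coordinates, weights and probability level are invented for the example.

```python
import numpy as np

# Hypothetical hourly load centres (x, y in km) and their load weights (MW).
centres = np.array([[2.1, 3.0], [2.4, 3.2], [1.9, 2.9], [2.6, 3.6], [2.2, 3.0]])
weights = np.array([40.0, 55.0, 35.0, 60.0, 45.0])

# Weighted sample mean and covariance of the bivariate Gaussian fit.
w = weights / weights.sum()
mean = w @ centres
d = centres - mean
cov = (w[:, None] * d).T @ d

def inside(point, p=0.95):
    """True if `point` lies within the p-probability ellipse of the fit.
    For 2 d.o.f. the chi-square quantile is -2*ln(1 - p)."""
    md2 = (point - mean) @ np.linalg.inv(cov) @ (point - mean)
    return md2 <= -2.0 * np.log(1.0 - p)
```

The boundary of that ellipse is the "maximum likelihood perimeter": planners would then screen candidate land lots by whether they fall inside it and are free of obstacles.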
Abstract:
Objectives: The purpose of this article is to identify differences between surveys using paper and online questionnaires, drawing on the author's extensive experience with survey-based research, e.g. the limits of postal and online questionnaires. Methods: Paper and online questionnaires were used in physician studies carried out in 1995 (doctors graduated in 1982-1991), 2000 (doctors graduated in 1982-1996), 2005 (doctors graduated in 1982-2001) and 2011 (doctors graduated in 1977-2006), and in a study of 457 family doctors in 2000. The response rates were 64%, 68%, 64%, 49% and 73%, respectively. Results: The physician studies showed differences between the methods, connected with the choice of paper-based versus online questionnaire and with the response rate. The online survey gave a lower response rate than the postal survey. The major advantages of the online survey were the short response time, very low financial resource needs, and the fact that data were loaded directly into the data analysis software, saving the time and resources associated with the data entry process. Conclusions: This article helps researchers plan the study design and choose the right data collection method.
Abstract:
This paper presents the SmartClean tool, whose purpose is to detect and correct data quality problems (DQPs). Compared with existing tools, SmartClean has the following main advantage: the user does not need to specify the execution sequence of the data cleaning operations. To that end, an execution sequence was developed, and the problems are manipulated (i.e., detected and corrected) following that sequence. The sequence also supports the incremental execution of the operations. In this paper, the underlying architecture of the tool is presented and its components are described in detail. The validity of the tool and, consequently, of the architecture is demonstrated through a case study. Although SmartClean has cleaning capabilities at all other levels, this paper describes only those related to the attribute value level.
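The idea of a fixed execution sequence for attribute-value cleaning operations, so the user never has to order them, can be sketched as a small pipeline. This is not SmartClean's actual sequence; the operations, the "phone" attribute and the sample record are assumptions for illustration.

```python
import re

def strip_whitespace(rec):
    """Trim surrounding whitespace from every string attribute."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}

def normalise_missing(rec):
    """Map common null spellings at the attribute-value level to None."""
    nulls = {"", "n/a", "na", "null", "-"}
    return {k: (None if isinstance(v, str) and v.lower() in nulls else v)
            for k, v in rec.items()}

def fix_phone(rec):
    """Keep digits only in the (assumed) 'phone' attribute."""
    if isinstance(rec.get("phone"), str):
        rec["phone"] = re.sub(r"\D", "", rec["phone"]) or None
    return rec

# Fixed order: structural fixes run before value-level checks, so the user
# never specifies a sequence -- the tool's central idea.
PIPELINE = [strip_whitespace, normalise_missing, fix_phone]

def clean(record):
    for op in PIPELINE:
        record = op(record)
    return record

rec = clean({"name": "  Ana ", "phone": " (351) 912-345 ", "city": "N/A"})
```

The ordering matters: trimming must precede null normalisation so that a value of only spaces becomes `""` and is then recognised as missing, which is exactly the kind of dependency a predefined sequence removes from the user's concern.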
Abstract:
The emergence of new business models, namely the establishment of partnerships between organizations, and the chance that companies have of adding existing data from the web, especially the semantic web, to their own information have emphasized some problems existing in databases, particularly those related to data quality. Poor data can result in a loss of competitiveness for the organizations holding them and may even lead to their disappearance, since many decision-making processes are based on these data. For this reason, data cleaning is essential. Current approaches to solving these problems are closely linked to database schemas and specific domains. For data cleaning to be usable across different repositories, computer systems must be able to understand these data, i.e., an associated semantics is needed. The solution presented in this paper uses ontologies: (i) for the specification of data cleaning operations and (ii) as a way of solving the semantic heterogeneity problems of data stored in different sources. With data cleaning operations defined at a conceptual level, and given mappings between domain ontologies and an ontology derived from a database, the operations may be instantiated and proposed to the expert/specialist to be executed over that database, thus enabling their interoperability.
Abstract:
Estimating gestational age from fetal skeletal remains is important in forensic contexts. For this purpose, forensic specialists rely on the assessment of the dental calcification pattern and/or the study of the skeleton. In the latter, the length of the long bone diaphyses is one of the most widely used methods, relying on tables and regression equations from outdated works or based on ultrasound data, whose measurements differ from those taken directly on the bone. The main objective of this work is the construction of tables and regression equations for the Portuguese population, based on the measurement of the diaphyses of the femur, tibia and humerus, using post-mortem radiographs, which do not differ much from measurements taken directly on bone. It is also intended to determine which of the three bones is most reliable and whether there are significant differences between female and male fetuses.
Abstract:
OBJECTIVE: To estimate the direct costs of schizophrenia for the public sector. METHODS: A study was carried out in the state of São Paulo, Brazil, during 1998. Data from the medical literature and governmental research bodies were gathered to estimate the total number of schizophrenia patients covered by the Brazilian Unified Health System. A decision tree was built based on an estimated distribution of patients under different types of psychiatric care. Medical charts from public hospitals and outpatient services were used to estimate the resources used over a one-year period. Direct costs were calculated by attributing monetary values to each resource used. RESULTS: Of all patients, 81.5% were covered by the public sector, distributed as follows: 6.0% in psychiatric hospital admissions, 23.0% in outpatient care, and 71.0% without regular treatment. The total direct cost of schizophrenia was US$191,781,327 (2.2% of the total health care expenditure in the state). Of this total, 11.0% was spent on outpatient care and 79.2% on inpatient care. CONCLUSIONS: Most schizophrenia patients in the state of São Paulo receive no regular treatment. The study findings point to the importance of investing in research aimed at improving resource allocation for the treatment of mental disorders in Brazil.
Abstract:
OBJECTIVE: Myocardial infarction is an acute and severe cardiovascular disease that generally leads to patient admissions to intensive care units; few cases are initially admitted to infirmaries. The objective of the study was to assess whether estimates of air pollution effects on myocardial infarction morbidity are modified by the source of health information. METHODS: The study was carried out in hospitals of the Brazilian Health System in the city of São Paulo, Southeastern Brazil. A time series study (1998-1999) was performed using two outcomes: infarction admissions to infirmaries and to intensive care units, both for people older than 64 years of age. Generalized linear models controlling for seasonality (long- and short-term trends) and weather were used. The eight-day cumulative effects of air pollutants were assessed using third-degree polynomial distributed lag models. RESULTS: Almost 70% of daily hospital admissions due to myocardial infarction were to infirmaries. Despite that, the effects of air pollutants on infarction were higher for intensive care unit admissions. All pollutants were positively associated with the study outcomes, but SO2 presented the strongest statistically significant association. An interquartile range increase in SO2 concentration was associated with increases of 13% (95% CI: 6-19) and 8% (95% CI: 2-13) in intensive care unit and infirmary infarction admissions, respectively. CONCLUSIONS: A misclassification of myocardial infarction admissions to infirmaries may be assumed, leading to overestimation. Also, despite the smaller absolute number of events, intensive care unit admissions data provide a more adequate estimate of the magnitude of air pollution effects on infarction admissions.
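The third-degree polynomial distributed lag model mentioned above constrains the eight daily lag coefficients to lie on a cubic, which can be shown in a few lines. The cubic coefficients below are invented for illustration, not estimates from the study.

```python
import numpy as np

# A third-degree polynomial distributed lag constrains the eight daily lag
# coefficients to a cubic: beta_l = a0 + a1*l + a2*l^2 + a3*l^3, l = 0..7.
lags = np.arange(8)                        # eight-day window (lags 0..7)
a = np.array([0.05, -0.01, 0.002, -0.0002])  # hypothetical cubic coefficients
beta = np.polynomial.polynomial.polyval(lags, a)

# The eight-day cumulative effect is the sum of the lag coefficients.
cumulative_effect = beta.sum()
```

The constraint reduces eight free lag parameters to four polynomial coefficients, smoothing the lag structure; the cumulative effect reported per interquartile-range increase in SO2 corresponds to the sum of these constrained coefficients.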