971 results for Data filtering


Relevance: 20.00%

Publisher:

Abstract:

This paper presents a methodology supported on the knowledge discovery in databases (KDD) process, in order to find the failure probability of electrical equipment belonging to a real high-voltage electrical network. Data mining (DM) techniques are used to discover a set of outcome failure probabilities and, therefore, to extract knowledge concerning the unavailability of electrical equipment such as power transformers and high-voltage power lines. The framework includes several steps: the analysis of the real database, data pre-processing, the application of DM algorithms, and finally, the interpretation of the discovered knowledge. To validate the proposed methodology, a case study based on real databases is used. These data carry heavy uncertainty due to climate conditions; for this reason, fuzzy logic was used to determine the set of failure probabilities of the electrical components in order to re-establish the service. The results reflect the interesting potential of this approach and encourage further research on the topic.
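The fuzzy step of this methodology can be illustrated with a minimal sketch: uncertain climate inputs are fuzzified, and rule outputs are combined into a crisp failure probability. All membership functions, rule outputs, thresholds and input values below are illustrative assumptions, not the paper's actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def failure_probability(wind_kmh, humidity_pct):
    # Fuzzify the climate inputs into a 'severe conditions' degree (hypothetical ranges).
    severe = max(tri(wind_kmh, 40, 100, 160), tri(humidity_pct, 60, 100, 140))
    mild = 1.0 - severe
    # Each rule proposes a crisp failure probability; defuzzify by a
    # Sugeno-style weighted average.
    p_mild, p_severe = 0.02, 0.30
    return (mild * p_mild + severe * p_severe) / (mild + severe)

# Severe weather pushes the estimated equipment failure probability up.
print(round(failure_probability(wind_kmh=120, humidity_pct=85), 3))
```

In a real KDD pipeline the membership functions and rule base would be derived from the mined historical unavailability data, not fixed by hand as here.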

Relevance: 20.00%

Publisher:

Abstract:

Present-day power system operation produces huge volumes of data that are still treated in a very limited way. Knowledge discovery and machine learning can make use of these data, yielding relevant knowledge with very positive impact. In the context of competitive electricity markets, these data are of even higher value, making clear the trend towards more relevant applications of data mining techniques in power systems. This paper presents two cases based on real data, showing the importance of data mining for supporting demand response and for supporting player strategic behavior.

Relevance: 20.00%

Publisher:

Abstract:

A methodology based on data mining techniques to support the analysis of zonal prices in real transmission networks is proposed in this paper. The methodology uses clustering algorithms to group the buses into typical classes, each including a set of buses with similar locational marginal price (LMP) values. Two different clustering algorithms have been used to determine the LMP clusters: the two-step and K-means algorithms. Adequacy measurement indices are used to evaluate the quality of the partition as well as to identify the best-performing algorithm. The paper includes a case study using an LMP database from the California ISO (CAISO) in order to identify zonal prices.
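The K-means step can be sketched in one dimension over bus LMPs. The bus prices and the choice of k below are illustrative, not CAISO data, and the paper's two-step algorithm and adequacy indices are not reproduced here.

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain 1-D K-means: assign each value to its nearest center, recompute."""
    random.seed(seed)
    centers = random.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        # Recompute each center as its cluster mean; keep old center if empty.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical bus LMPs ($/MWh): two price zones are visible by eye.
lmps = [28.1, 29.4, 30.2, 27.8, 55.0, 53.7, 56.2, 54.9]
centers, clusters = kmeans_1d(lmps, k=2)
print(sorted(round(c, 2) for c in centers))
```

Each resulting cluster corresponds to one candidate price zone; on real data, partition-quality indices (e.g. silhouette-style measures) would then be used to compare k values and algorithms, as the paper does.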

Relevance: 20.00%

Publisher:

Abstract:

Objectives: The purpose of this article is to identify differences between surveys using paper and online questionnaires. The author has deep knowledge of questions concerning the development of survey-based research, e.g. the limits of postal and online questionnaires. Methods: Paper and online questionnaires were used in the physician studies carried out in 1995 (doctors graduated in 1982-1991), 2000 (doctors graduated in 1982-1996), 2005 (doctors graduated in 1982-2001) and 2011 (doctors graduated in 1977-2006), and in a study of 457 family doctors in 2000. The response rates were 64%, 68%, 64%, 49% and 73%, respectively. Results: The physician studies showed that there were differences between the methods, connected with the use of paper-based versus online questionnaires and with the response rate. The online survey gave a lower response rate than the postal survey. The major advantages of the online survey were the short response time, very low financial resource needs, and the fact that data were loaded directly into the data analysis software, saving the time and resources associated with the data entry process. Conclusions: The current article helps researchers in planning the study design and choosing the right data collection method.

Relevance: 20.00%

Publisher:

Abstract:

This paper presents the SmartClean tool, whose purpose is to detect and correct data quality problems (DQPs). Compared with existing tools, SmartClean has the following main advantage: the user does not need to specify the execution sequence of the data cleaning operations. To that end, an execution sequence was developed, and the problems are manipulated (i.e., detected and corrected) following that sequence. The sequence also supports the incremental execution of the operations. In this paper, the underlying architecture of the tool is presented and its components are described in detail. The validity of the tool, and consequently of the architecture, is demonstrated through a case study. Although SmartClean has cleaning capabilities at all other levels, only those related to the attribute value level are described in this paper.
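The idea of a fixed execution sequence at the attribute value level can be sketched as follows. The operations, rules and records are hypothetical illustrations of the concept, not SmartClean's actual operations or architecture.

```python
# Hypothetical dirty records: whitespace noise, a domain violation, a syntax violation.
records = [
    {"name": "  Alice ", "age": "34"},
    {"name": "Bob", "age": "-5"},      # age outside the valid domain
    {"name": "Carol", "age": "n/a"},   # age not even numeric
]

def fix_whitespace(rec):
    # 1st: normalise string syntax.
    rec["name"] = rec["name"].strip()

def fix_age_syntax(rec):
    # 2nd: turn non-numeric ages into missing values.
    if not rec["age"].lstrip("-").isdigit():
        rec["age"] = None

def fix_age_domain(rec):
    # 3rd: turn out-of-domain ages into missing values.
    if rec["age"] is not None and not 0 <= int(rec["age"]) <= 120:
        rec["age"] = None

# The fixed sequence matters: the domain check can assume syntax is already
# clean, so the user never has to order the operations manually.
for op in (fix_whitespace, fix_age_syntax, fix_age_domain):
    for rec in records:
        op(rec)

print(records)
```

Detecting and correcting in a predetermined order like this is what removes the ordering burden from the user, which is the advantage the paper highlights.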

Relevance: 20.00%

Publisher:

Abstract:

The emergence of new business models, namely the establishment of partnerships between organizations, and the chance companies have of adding existing data on the web, especially in the semantic web, to their information, have brought to the fore some problems existing in databases, particularly related to data quality. Poor data can result in loss of competitiveness for the organizations holding them, and may even lead to their disappearance, since many of their decision-making processes are based on these data. For this reason, data cleaning is essential. Current approaches to solving these problems are closely tied to database schemas and specific domains. For data cleaning to be usable across different repositories, computer systems must be able to understand these data, i.e., an associated semantics is needed. The solution presented in this paper involves the use of ontologies: (i) for the specification of data cleaning operations and (ii) as a way of solving the semantic heterogeneity problems of data stored in different sources. With data cleaning operations defined at a conceptual level, and given existing mappings between domain ontologies and an ontology derived from a database, the operations may be instantiated and proposed to the expert/specialist to be executed over that database, thus enabling their interoperability.

Relevance: 20.00%

Publisher:

Abstract:

Estimating gestational age from fetal skeletal remains is important in forensic contexts. For this purpose, forensic specialists rely on the assessment of the dental calcification pattern and/or on the study of the skeleton. In the latter, the length of the long-bone diaphyses is one of the most widely used methods, relying on tables and regression equations from outdated works or based on ultrasound data, whose measurements differ from those taken directly on the bone. The main goal of this work is the construction of tables and regression equations for the Portuguese population, based on the measurement of the femur, tibia and humerus diaphyses using post-mortem radiographs, which do not differ much from measurements taken on bone. It also aims to determine which of the three bones is the most reliable and whether there are significant differences between female and male fetuses.

Relevance: 20.00%

Publisher:

Abstract:

This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.

Relevance: 20.00%

Publisher:

Abstract:

Copyright © 2013 Springer Netherlands.

Relevance: 20.00%

Publisher:

Abstract:

V Congreso de Eficiencia y Productividad EFIUCO, Córdoba, 19-20 May 2011.

Relevance: 20.00%

Publisher:

Abstract:

Frame rate up-conversion (FRUC) is an important post-processing technique to enhance the visual quality of low frame rate video. A major recent advance in this area is FRUC based on trilateral filtering, whose novelty mainly derives from the combination of an edge-based motion estimation block matching criterion with the trilateral filter. However, there is still room for improvement, notably towards reducing the size of the uncovered regions in the initial estimated frame, i.e., the estimated frame before trilateral filtering. In this context, an improved motion estimation block matching criterion is proposed, in which a combined luminance and edge error metric is weighted according to the motion vector components, notably to regularise the motion field. Experimental results confirm that significant improvements are achieved for the final interpolated frames, reaching PSNR gains of up to 2.73 dB on average relative to recent alternative solutions, for video content with varied motion characteristics.
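The shape of such a matching criterion can be sketched as follows: a combined luminance and edge error, weighted by the candidate motion vector's magnitude so that longer vectors are penalised, regularising the motion field. The block sizes, edge maps, weights `lam` and `gamma`, and the exact penalty form are illustrative assumptions, not the paper's formulation.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                          for a, b in zip(row_a, row_b))

def matching_cost(lum_a, lum_b, edge_a, edge_b, mv, lam=0.5, gamma=2.0):
    # Combined luminance + edge error, then a penalty growing with |mv|.
    error = sad(lum_a, lum_b) + lam * sad(edge_a, edge_b)
    mv_magnitude = (mv[0] ** 2 + mv[1] ** 2) ** 0.5
    return error * (1.0 + gamma * mv_magnitude / 16.0)

# Two candidate vectors with identical pixel/edge error: the shorter one
# yields the lower cost, so smooth motion fields are preferred.
lum_a = [[10, 12], [11, 13]]
lum_b = [[11, 12], [11, 14]]
edge = [[0, 1], [1, 0]]
c_short = matching_cost(lum_a, lum_b, edge, edge, mv=(1, 0))
c_long = matching_cost(lum_a, lum_b, edge, edge, mv=(8, 6))
print(c_short < c_long)
```

In a full FRUC pipeline this cost would be minimised over a search window per block; the vector-dependent weight is what discourages spurious long vectors that enlarge uncovered regions in the initial estimated frame.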

Relevance: 20.00%

Publisher:

Abstract:

25th Annual Conference of the European Cetacean Society, Cadiz, Spain 21-23 March 2011.

Relevance: 20.00%

Publisher:

Abstract:

27th Annual Conference of the European Cetacean Society. Setúbal, Portugal, 8-10 April 2013.

Relevance: 20.00%

Publisher:

Abstract:

27th Annual Conference of the European Cetacean Society. Setúbal, Portugal, 8-10 April 2013.

Relevance: 20.00%

Publisher:

Abstract:

A great number of low-temperature geothermal fields occur in northern Portugal, related to fractured rocks. The most important surface manifestations of these hydrothermal systems appear in pull-apart tectonic basins and are strongly conditioned by the orientation of the main fault systems in the region. This work presents the interpretation of gravity gradient maps and a 3D inversion model produced from a regional gravity survey. The horizontal gradients reveal a complex fault system. The obtained 3D model of density contrast highlights the main fault zone in the region and the depth distribution of the granitic bodies. Their relationship with the hydrothermal systems supports the conceptual models elaborated from hydrochemical and isotopic water analyses. This work emphasizes the role of the gravity method and its analysis in better understanding the connection between hydrothermal systems, the fractured rock pattern and the surrounding geology. (c) 2013 Elsevier B.V. All rights reserved.