946 results for Databases as Topic


Relevance:

20.00%

Publisher:

Abstract:

Electric power quality comprises both the quality of supply and the quality of customer service. Quality of supply is in turn considered to have two components: waveform and continuity. This thesis addresses continuity of supply through fault location. The problem is relatively well solved in transmission systems, where the homogeneous characteristics of the line, measurements at both terminals, and the availability of diverse equipment make it possible to locate the fault with relatively high precision. In distribution systems, however, fault location is a complex and still unsolved problem. The complexity is mainly due to non-homogeneous conductors, intermediate loads, lateral branches, and imbalances in the system and the load. Moreover, these systems normally provide only measurements at the substation and a simplified model of the circuit. The main location efforts have been oriented towards methods that use the fundamental component of the voltage and current at the substation to estimate the reactance up to the fault. Since the reactance allows the distance to the fault site to be quantified from the model, such a method is considered Model-Based (MBM). Some of its disadvantages, however, are the need for a good model of the system and the possibility of locating several sites where the fault may have occurred, that is, multiple estimation of the fault location.

As a contribution, this thesis presents a comparative analysis and test of several frequently cited MBMs. The solution is further complemented with methods that use other kinds of information, such as that obtained from historical fault databases containing voltage and current records measured at the substation (not limited to the fundamental component). As tools for extracting information from these records, two classification techniques (LAMDA and SVM) are used and tested. These relate the features obtained from the signal to the zone under fault, and are referred to in this document as Knowledge-Based Classification Methods (MCBC). The information used by the MCBCs is obtained from the voltage and current records measured at the distribution substation before, during, and after the fault. The records are processed to obtain the following descriptors: a) the magnitude of the voltage variation (dV), b) the variation of the current magnitude (dI), c) the variation of the power (dS), d) the fault reactance (Xf), e) the frequency of the transient (f), and f) the maximum eigenvalue of the current correlation matrix (Sv), each of which was selected because it facilitates fault location. From these descriptors, different training and validation sets for the MCBCs are proposed, and through a methodology that shows the possibility of finding relations between these sets and the zones in which the fault occurs, the best-performing ones are selected. The application results show that combining the MCBCs with the MBMs can reduce the problem of multiple estimation of the fault location.

The MCBC determines the fault zone, while the MBM finds the distance from the measuring point to the fault; their integration in a hybrid scheme takes the best characteristics of each method. In this document, "hybrid" refers to the complementary combination of the MBMs and the MCBCs. Finally, to validate the contributions of this thesis, a hybrid integration scheme for fault location is proposed and tested on two different distribution systems. Both the methods that use the system parameters and are based on impedance estimation (MBM), and those that use the descriptors as information and are based on classification techniques (MCBC), prove valid for solving the fault location problem. Both proposed methodologies have advantages and disadvantages, but according to the method-integration theory presented, a high degree of complementarity is achieved, which allows the formulation of hybrids that improve the results, reducing or avoiding the problem of multiple fault estimation.
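
To make the MCBC idea concrete, below is a minimal sketch, not the thesis implementation: it trains a scikit-learn SVM on the six descriptors named above to predict the faulted zone. All data values and zone labels are hypothetical.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical training set: each row holds the descriptors
    # [dV, dI, dS, Xf, f, Sv] extracted from one recorded fault;
    # y is the zone in which that fault occurred.
    X = np.array([
        [0.12, 0.45, 0.30, 1.8, 240.0, 0.91],
        [0.05, 0.20, 0.11, 4.2, 310.0, 0.85],
        [0.30, 0.90, 0.75, 0.6, 180.0, 0.97],
        [0.07, 0.25, 0.15, 3.9, 295.0, 0.88],
    ])
    y = np.array([1, 2, 1, 2])

    # Scale the descriptors and fit an RBF-kernel SVM classifier.
    zone_classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    zone_classifier.fit(X, y)

    # For a new fault record the MCBC predicts the zone; an MBM distance
    # estimate is then retained only if it falls inside that zone, which
    # is how the hybrid scheme suppresses multiple estimates.
    new_fault = np.array([[0.10, 0.40, 0.28, 2.0, 250.0, 0.90]])
    print(zone_classifier.predict(new_fault))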

Relevance:

20.00%

Publisher:

Abstract:

This study examines current and forthcoming measures related to the exchange of data and information in EU Justice and Home Affairs policies, with a focus on the 'smart borders' initiative. It argues that the growing reliance on such schemes shows no reversibility, and asks whether the current and forthcoming proposals are necessary and original. It outlines the main challenges raised by the proposals, including issues related to the right to data protection, but also to privacy and non-discrimination.

Relevance:

20.00%

Publisher:

Abstract:

Soil data and reliable soil maps are imperative for environmental management, conservation and policy. Data from historical point surveys, e.g. experiment site data and farmers' fields, can serve this purpose. However, legacy soil information was not necessarily collected for spatial analysis and mapping, so the data may not have immediately useful geo-references. Methods are required to utilise these historical soil databases so that we can produce quantitative maps of soil properties, both to assess spatial and temporal trends and to assess where future sampling is required. This paper discusses two such databases: the Representative Soil Sampling Scheme, which has monitored the agricultural soil in England and Wales from 1969 to 2003 (between 400 and 900 bulked soil samples were taken annually from different agricultural fields); and the former State Chemistry Laboratory, Victoria, Australia, where between 1973 and 1994 approximately 80,000 soil samples were submitted for analysis by farmers. Previous statistical analyses of both databases have used administrative regions (with sharp boundaries), which are largely unrelated to natural features. For a more detailed spatial analysis that can be linked to climate and terrain attributes, the gradual variation of these soil properties should be described. Geostatistical techniques such as ordinary kriging are suited to this. This paper describes the format of the databases and initial approaches to how they can be used for digital soil mapping. We have selected soil pH to illustrate the analyses for both databases.
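
As an illustration of the geostatistical step, here is a minimal sketch, assuming synthetic coordinates and pH values, of ordinary kriging with the pykrige package; the legacy databases would supply the real geo-referenced (x, y, pH) triples.

    import numpy as np
    from pykrige.ok import OrdinaryKriging

    # Hypothetical legacy samples: easting and northing (km), measured pH.
    x = np.array([2.1, 5.4, 7.9, 3.3, 6.6, 1.2])
    y = np.array([1.5, 2.8, 6.1, 7.4, 4.0, 5.5])
    ph = np.array([5.8, 6.4, 7.1, 5.5, 6.9, 6.0])

    # Fit a spherical variogram model and krige onto a regular grid.
    ok = OrdinaryKriging(x, y, ph, variogram_model="spherical")
    gridx = np.linspace(0.0, 9.0, 30)
    gridy = np.linspace(0.0, 9.0, 30)
    z_pred, z_var = ok.execute("grid", gridx, gridy)

    # z_pred maps the gradual variation of pH; z_var (kriging variance)
    # highlights where future sampling would reduce uncertainty most.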

Relevance:

20.00%

Publisher:

Abstract:

Facilitating the visual exploration of scientific data has received increasing attention in the past decade or so. Especially in life-science application areas, the amount of available data has grown at a breathtaking pace. In this paper we describe an approach that allows for visual inspection of large collections of molecular compounds. In contrast to classical visualizations of such spaces, we incorporate a specific focus of analysis, for example the outcome of a biological experiment such as high-throughput screening results. The presented method uses this experimental data to select molecular fragments of the underlying molecules that have interesting properties, and uses the resulting space to generate a two-dimensional map based on a singular value decomposition algorithm and a self-organizing map. Experiments on real datasets show that the resulting visual landscape groups molecules of similar chemical properties in densely connected regions.
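
A minimal sketch of such a pipeline, with assumed names and random stand-in data, could combine scikit-learn's truncated SVD with the MiniSom implementation of a self-organizing map:

    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from minisom import MiniSom

    # Hypothetical data: rows are molecules, columns are selected fragments
    # (1 = fragment present), chosen e.g. by screening-activity relevance.
    rng = np.random.default_rng(0)
    fragments = rng.integers(0, 2, size=(200, 50)).astype(float)

    # Truncated SVD compresses the fragment space to a few latent factors.
    latent = TruncatedSVD(n_components=10).fit_transform(fragments)

    # A 15x15 SOM arranges the compounds so that similar latent profiles
    # fall into neighbouring cells, yielding the 2-D visual landscape.
    som = MiniSom(15, 15, 10, sigma=1.5, learning_rate=0.5, random_seed=0)
    som.train_random(latent, 2000)
    positions = np.array([som.winner(v) for v in latent])  # (row, col) per molecule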

Relevance:

20.00%

Publisher:

Abstract:

In the summer of 1982, the ICLCUA CAFS Special Interest Group defined three subject areas for working party activity. These were: 1) interfaces with compilers and databases, 2) end-user language facilities and display methods, and 3) text-handling and office automation. The CAFS SIG convened one working party to address the first subject, with the following terms of reference: 1) review facilities and map requirements onto them, 2) "Database or CAFS" or "Database on CAFS", 3) training needs for users to bridge to new techniques, and 4) repair specifications to cover gaps in software. The working party interpreted the topic broadly as the data processing professional's, rather than the end-user's, view of and relationship with CAFS. This report is the result of the working party's activities. For good reasons, the report's content exceeds the terms of reference in their strictest sense. For example, we examine QUERYMASTER, which ICL deems an end-user tool, from both the DP and end-user perspectives. First, it is the only interface to CAFS in the current SV201. Second, the DP department needs to understand the end-user's interface to CAFS. Third, the other subjects have not yet been addressed by other active working parties.

Relevance:

20.00%

Publisher:

Abstract:

There are still major challenges in the automatic indexing and retrieval of multimedia content for very large corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually provide no knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on its semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
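
Purely as an illustration of the Topic Map constructs involved (the names below are assumptions, not the DREAM code), the labels extracted by such an engine could be arranged as topics, occurrences and associations:

    from dataclasses import dataclass, field

    @dataclass
    class Topic:
        name: str
        # Occurrences point back into the media, e.g. shot time-codes.
        occurrences: list[str] = field(default_factory=list)

    @dataclass
    class Association:
        kind: str  # e.g. "appears-with", "located-in"
        members: tuple[Topic, Topic]

    # Labels assumed to come from the NLP stage for one film scene.
    actor = Topic("protagonist", occurrences=["reel2 00:14:03-00:14:41"])
    place = Topic("warehouse interior", occurrences=["reel2 00:14:03-00:15:10"])
    links = [Association("located-in", (actor, place))]

    # Retrieval then walks the associations instead of matching raw keywords.
    for a in links:
        print(f"{a.members[0].name} --{a.kind}--> {a.members[1].name}")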

Relevance:

20.00%

Publisher:

Abstract:

This is a report on the data-mining of two chess databases, the objective being to compare their sub-7-man content with perfect play as documented in Nalimov endgame tables. Van der Heijden’s ENDGAME STUDY DATABASE IV is a definitive collection of 76,132 studies in which White should have an essentially unique route to the stipulated goal. Chessbase’s BIG DATABASE 2010 holds some 4.5 million games. Insight gained into both database content and data-mining has led to some delightful surprises and created a further agenda.
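
As a hedged sketch of the perfect-play checks such mining relies on: the report cites Nalimov tables, which python-chess does not probe, so the example below substitutes its Syzygy probe, which plays the same sub-7-man role; the table path is hypothetical.

    import chess
    import chess.syzygy

    # Hypothetical local directory holding downloaded Syzygy table files.
    with chess.syzygy.open_tablebase("/path/to/syzygy") as tb:
        # A KPvK position of the kind a sub-7-man study might reach.
        board = chess.Board("8/8/8/8/8/4k3/4P3/4K3 w - - 0 1")
        wdl = tb.probe_wdl(board)  # >0 win, 0 draw, <0 loss for side to move
        dtz = tb.probe_dtz(board)  # distance to zeroing move under perfect play
        print(wdl, dtz)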

Relevance:

20.00%

Publisher:

Abstract:

Previous analyses of Australian samples have suggested that populations of the same broad racial group (Caucasian, Asian, Aboriginal) tend to be genetically similar across states. This suggests that a single national Australian database for each such group may be feasible, which would greatly facilitate casework. We have investigated samples drawn from each of these groups in different Australian states, and have quantified the genetic homogeneity across states within each racial group in terms of the "coancestry coefficient" F(ST). In accord with earlier results, we find that F(ST) values, as estimated from these data, are very small for Caucasians and Asians, usually <0.5%. We find that "declared" Aborigines (which includes many with partly Aboriginal genetic heritage) are also genetically similar across states, although they display some differentiation from a "pure" Aboriginal population (almost entirely of Aboriginal genetic heritage).
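
For a single biallelic locus, the quantity being estimated can be sketched as Wright's F_ST = (H_T - H_S) / H_T; the minimal computation below uses hypothetical per-state allele frequencies (casework estimators over many STR loci are more refined):

    import numpy as np

    def fst_biallelic(freqs, weights=None):
        """Wright's F_ST from per-state frequencies of one allele."""
        p = np.asarray(freqs, dtype=float)
        w = np.full_like(p, 1.0 / p.size) if weights is None else np.asarray(weights, dtype=float)
        h_s = np.sum(w * 2 * p * (1 - p))  # mean within-state heterozygosity
        p_bar = np.sum(w * p)              # pooled allele frequency
        h_t = 2 * p_bar * (1 - p_bar)      # total (pooled) heterozygosity
        return (h_t - h_s) / h_t

    # Nearly identical frequencies across three states give an F_ST
    # far below the 0.5% figure quoted above (here around 2e-4).
    print(fst_biallelic([0.30, 0.31, 0.295]))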

Relevance:

20.00%

Publisher:

Abstract:

Information modelling is a topic that has been researched a great deal, but many questions around it remain unsolved. An information model is essential in the design of a database, which is the core of an information system. Currently, most databases deal only with information that represents facts, or asserted information. The ability to capture semantic aspects has to be improved, and other types of information, such as temporal and intentional information, should also be considered. Semantic Analysis, a method of information modelling, offers a way to handle these various aspects of information. It employs domain knowledge and communication acts as sources for information modelling. It lends itself to a uniform structure whereby semantic, temporal and intentional information can be captured, which lays a sound foundation for building a semantic temporal database.
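
As an illustrative sketch of such a uniform structure (the field names are assumptions, not Semantic Analysis's canonical notation), each record can tie a modelled behaviour or fact to a responsible agent, with start/finish times bounding its validity and a modality marking intention:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class AffordanceRecord:
        agent: str                  # who is responsible for the information
        affordance: str             # the behaviour or fact being modelled
        starts: date                # when the affordance begins to hold
        finishes: Optional[date]    # None while it still holds
        modality: str = "asserted"  # e.g. "asserted", "intended"

    # One fact and one intention, stored uniformly in the same structure.
    records = [
        AffordanceRecord("hospital", "employs nurse N1", date(2020, 1, 6), None),
        AffordanceRecord("nurse N1", "requests leave", date(2021, 3, 1),
                         date(2021, 3, 14), modality="intended"),
    ]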