940 results for Data base management.
Abstract:
Includes bibliography
Abstract:
This project aims to develop methods for data classification in a Data Warehouse for decision-making purposes. A further goal is the reduction of an attribute set in a Data Warehouse, where the reduced set preserves the properties of the original one. With a reduced set we obtain a lower computational cost of processing, we can identify attributes that are irrelevant to certain kinds of situations, and we can recognize patterns in the database that support decision making. To achieve these objectives, the Rough Sets algorithm will be implemented. We chose PostgreSQL as our database management system because of its efficiency, its maturity and, finally, because it is an open-source (freely distributed) system.
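To illustrate the kind of attribute reduction described above, here is a minimal Python sketch of rough-set reduct computation. It is not the project's implementation and the table data are hypothetical; it only shows how a reduced attribute subset can preserve the decision structure of the full attribute set.

```python
from itertools import combinations

def partition(rows, attrs):
    """Group row indices by their values on the given attribute indices
    (the indiscernibility classes of rough set theory)."""
    classes = {}
    for i, row in enumerate(rows):
        key = tuple(row[a] for a in attrs)
        classes.setdefault(key, set()).add(i)
    return list(classes.values())

def positive_region(rows, attrs, decisions):
    """Indices whose indiscernibility class is consistent with the decision."""
    pos = set()
    for cls in partition(rows, attrs):
        if len({decisions[i] for i in cls}) == 1:
            pos |= cls
    return pos

def reducts(rows, decisions):
    """Smallest attribute subsets that preserve the positive region of the
    full attribute set (exhaustive search; only practical for small tables)."""
    all_attrs = tuple(range(len(rows[0])))
    target = positive_region(rows, all_attrs, decisions)
    for size in range(1, len(all_attrs) + 1):
        hits = [s for s in combinations(all_attrs, size)
                if positive_region(rows, s, decisions) == target]
        if hits:
            return hits
    return [all_attrs]

# Toy decision table: attributes 0 and 2 each determine the decision,
# attribute 1 is irrelevant.
rows = [("a", 1, "x"), ("a", 2, "x"), ("b", 1, "y"), ("b", 2, "y")]
decisions = ["yes", "yes", "no", "no"]
print(reducts(rows, decisions))  # -> [(0,), (2,)]
```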
Abstract:
Data Distribution Management (DDM) is a core part of the High Level Architecture standard; its goal is to optimize the resources that simulation environments use to exchange data. It has to filter and match the information generated during a simulation so that each federate (a simulation entity) receives only the information it needs. This must be done quickly and as accurately as possible to obtain good performance and avoid transmitting irrelevant data; otherwise network resources may saturate quickly. The main topic of this thesis is the implementation of an impartial (super partes) DDM testbed. It evaluates the quality of DDM approaches of all kinds: it supports both region-based and grid-based approaches, and it can accommodate other, as yet unknown, methods as well. It ranks them on three factors: execution time, memory usage and distance from the optimal solution. A predefined set of instances is already available, but we also allow instances to be created with user-provided parameters. The thesis is structured as follows. We start by introducing what DDM and HLA are and what they do in detail. In the first chapter we describe the state of the art, providing an overview of the best-known resolution approaches and the pseudocode of the most interesting ones. The third chapter describes how the testbed we implemented is structured. In the fourth chapter we present and compare the results obtained by running the four approaches we implemented. The result of the work described in this thesis can be downloaded from SourceForge at the following link: https://sourceforge.net/projects/ddmtestbed/. It is licensed under the GNU General Public License version 3.0 (GPLv3).
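As a concrete reference point for the matching problem such a testbed evaluates, the following is a minimal sketch (not taken from the thesis) of the baseline brute-force, region-based matching step over one-dimensional extents; the Extent type and the sample data are illustrative assumptions.

```python
from typing import NamedTuple

class Extent(NamedTuple):
    """A one-dimensional routing-space range, treated as half-open [lower, upper)."""
    lower: float
    upper: float

def overlaps(a: Extent, b: Extent) -> bool:
    """Two extents match when their ranges intersect."""
    return a.lower < b.upper and b.lower < a.upper

def brute_force_matching(updates, subscriptions):
    """Baseline region-based DDM matching: compare every update extent
    against every subscription extent (O(n*m) overlap tests)."""
    return {(i, j)
            for i, u in enumerate(updates)
            for j, s in enumerate(subscriptions)
            if overlaps(u, s)}

# Toy instance: update 0 matches subscription 1 only.
updates = [Extent(0.0, 2.0), Extent(5.0, 6.0)]
subscriptions = [Extent(3.0, 4.0), Extent(1.0, 3.5)]
print(brute_force_matching(updates, subscriptions))  # {(0, 1)}
```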
Abstract:
Data Distribution Management (DDM) is a component of the High Level Architecture standard. Its task is to detect the overlaps between update and subscription extents efficiently. This thesis discusses the need for a framework and the reasons why it was implemented. Testing algorithms under fair comparison conditions, libraries that make implementing algorithms easier, and automation of the build phase were the fundamental motivations for starting the framework. The driving reason was that, while surveying scientific papers on DDM and its various algorithms, we noticed that each paper created its own ad-hoc data for testing. One goal of this framework is therefore to compare the algorithms on a consistent data set. We decided to test the framework in the Cloud in order to obtain a more reliable comparison between runs by different users. Two of the most widely used services were considered: Amazon AWS EC2 and Google App Engine. The advantages and disadvantages of each were presented, along with the reason Google App Engine was chosen. Four algorithms were developed: Brute Force, Binary Partition, Improved Sort, and Interval Tree Matching. Tests were carried out on execution time and on peak memory usage. The results show that Interval Tree Matching and Improved Sort are the most efficient. All tests were run on the sequential versions of the algorithms, so a further reduction in execution time may still be possible for the Interval Tree Matching algorithm.
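For contrast with the brute-force baseline sketched above, here is a hedged sketch of a sort-based endpoint sweep in the spirit of the sort-based algorithms mentioned; it is not the thesis's Improved Sort or Interval Tree Matching code, and the intervals are illustrative.

```python
def sweep_matching(updates, subscriptions):
    """Sort-based matching sketch: sweep the sorted interval endpoints once,
    keeping the sets of currently open updates and subscriptions.
    Intervals are (lower, upper) pairs treated as half-open [lower, upper)."""
    events = []  # (coordinate, order, kind, index); ends sort before starts on ties
    for i, (lo, hi) in enumerate(updates):
        events.append((lo, 1, "U", i))
        events.append((hi, 0, "u", i))
    for j, (lo, hi) in enumerate(subscriptions):
        events.append((lo, 1, "S", j))
        events.append((hi, 0, "s", j))
    events.sort()

    open_updates, open_subs, matches = set(), set(), set()
    for _, _, kind, idx in events:
        if kind == "U":                      # an update extent opens
            matches.update((idx, j) for j in open_subs)
            open_updates.add(idx)
        elif kind == "S":                    # a subscription extent opens
            matches.update((i, idx) for i in open_updates)
            open_subs.add(idx)
        elif kind == "u":                    # an update extent closes
            open_updates.discard(idx)
        else:                                # a subscription extent closes
            open_subs.discard(idx)
    return matches

updates = [(0.0, 2.0), (5.0, 6.0)]
subscriptions = [(3.0, 4.0), (1.0, 3.5)]
print(sweep_matching(updates, subscriptions))  # {(0, 1)}
```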
Abstract:
The CERTS database is now online! Our efforts have finally borne fruit! Our database can now be visited on the web in French and in English, and in about ten European languages in the forthcoming months. It's your turn! Connect to the following address: www.certs-europe.com/database
Abstract:
Quality data are not only relevant for successful Data Warehousing or Business Intelligence applications; they are also a precondition for the efficient and effective use of Enterprise Resource Planning (ERP) systems. ERP professionals in all kinds of businesses are concerned with data quality issues, as a survey conducted by the Institute of Information Systems at the University of Bern has shown. Using results of this survey, this paper demonstrates why data quality problems can occur in modern ERP systems and suggests how ERP researchers and practitioners can handle issues around the quality of data in an ERP software environment.
Abstract:
Oceanographic data collected by ocean research organisations in Russia, the USA, the United Kingdom, Germany, Norway, and Poland for the Barents, Kara and White Seas region are presented in this atlas. Recently declassified naval data from Norway, the USA, and the UK are also included. More than 1,000,000 oceanographic stations containing temperature and/or sea-water salinity data were originally selected. After correcting errors and eliminating duplicates, data from 206,300 checked stations were placed on CD-ROM, together with many figures describing the characteristics of both the individual and the combined data sets. In addition, temperature and salinity measurements were interpolated to the following standard horizons: 0, 25, 50, 100, 150, 200, 250, 300 m, and bottom. This atlas covers the 100-year period 1898 to 1998 and is, to date, the most complete oceanographic data collection for these Arctic shelf seas. This data set is complemented by more than 9,000 measurements of sea surface temperature, which were recently digitized from ships' logbooks. They cover the same geographical area within the time period 1867-1912.
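The atlas abstract does not specify the interpolation scheme used; as a rough illustration of mapping an observed profile onto the listed standard horizons, the following sketch applies simple linear interpolation to a hypothetical station profile.

```python
import numpy as np

# Standard horizons listed in the atlas (metres); "bottom" is handled separately.
STANDARD_HORIZONS = [0, 25, 50, 100, 150, 200, 250, 300]

def to_standard_horizons(depths, values):
    """Linearly interpolate an observed profile onto the standard horizons.
    Horizons outside the observed depth range are returned as NaN rather
    than extrapolated."""
    depths = np.asarray(depths, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.interp(STANDARD_HORIZONS, depths, values, left=np.nan, right=np.nan)
    return dict(zip(STANDARD_HORIZONS, out))

# Hypothetical station profile: temperature (deg C) at irregular depths.
obs_depths = [0, 10, 30, 60, 120, 280]
obs_temp = [4.1, 3.9, 3.2, 1.8, 0.4, -0.6]
print(to_standard_horizons(obs_depths, obs_temp))
```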
Abstract:
This database (Leemans & Cramer 1991) contains monthly averages of mean temperature, temperature range, precipitation, rain days and sunshine hours for the terrestrial surface of the globe, gridded at 0.5 degree longitude/latitude resolution. All grd-files contain the same 62,483 pixels in the same order, with 30' latitude and longitude resolution. The coordinates are in decimal degrees and indicate the SW corner of each pixel. Topography is from ETOPO5 and indicates modal elevation. Data were generated from a large database using the partial thin-plate splining algorithm (Hutchinson & Bischof 1983). This version is widely used around the globe, notably by all groups participating in the IGBP NPP model intercomparison.
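As a small illustration of the stated grid convention (0.5 degree pixels keyed by the decimal-degree coordinates of their SW corner), the following sketch converts an arbitrary coordinate to its pixel key; the function names are illustrative and not part of the database distribution.

```python
import math

def cell_sw_corner(lat, lon, resolution=0.5):
    """SW-corner coordinates of the 0.5-degree pixel containing (lat, lon),
    matching the convention that each grd record is keyed by the south-west
    corner of its pixel."""
    sw_lat = math.floor(lat / resolution) * resolution
    sw_lon = math.floor(lon / resolution) * resolution
    return sw_lat, sw_lon

def cell_centre(lat, lon, resolution=0.5):
    """Centre of the pixel containing (lat, lon)."""
    sw_lat, sw_lon = cell_sw_corner(lat, lon, resolution)
    return sw_lat + resolution / 2, sw_lon + resolution / 2

print(cell_sw_corner(52.37, 4.89))   # (52.0, 4.5)
print(cell_centre(52.37, 4.89))      # (52.25, 4.75)
```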