906 results for Database consistency
Abstract:
In order to guarantee database consistency, a database system must synchronize the operations of concurrent transactions. The database component responsible for this synchronization is the scheduler. A scheduler synchronizes operations belonging to different transactions by means of concurrency control protocols. Concurrency control protocols may exhibit different behaviors: in general, a scheduler's behavior can be classified as aggressive or conservative. This paper presents the Intelligent Transaction Scheduler (ITS), which can synchronize the execution of concurrent transactions in an adaptive manner. The scheduler adapts its behavior (aggressive or conservative) to the characteristics of the computing environment in which it is deployed, using an expert system based on fuzzy logic. ITS can implement different correctness criteria, such as conventional (syntactic) serializability and semantic serializability. To evaluate the performance of ITS against schedulers with exclusively aggressive or conservative behavior, it was applied in a dynamic environment, namely a Mobile Database Community (MDBC). An MDBC simulator was developed and several sets of tests were run. The experimental results presented herein demonstrate the efficiency of ITS in synchronizing transactions in a dynamic environment.
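As a rough illustration of the adaptive behavior this abstract describes, the sketch below uses two triangular fuzzy membership functions over a measured conflict rate to pick a scheduler mode. The membership shapes, the 0-to-1 "conflict rate" input, and the two-rule decision are illustrative assumptions, not the actual ITS expert system.

```python
def low_conflict(rate):
    """Membership degree of 'low conflict' (illustrative triangular shape)."""
    return max(0.0, 1.0 - rate / 0.6)

def high_conflict(rate):
    """Membership degree of 'high conflict' (illustrative triangular shape)."""
    return max(0.0, (rate - 0.4) / 0.6)

def choose_behavior(conflict_rate):
    """Aggressive (optimistic) scheduling pays off when conflicts are rare;
    conservative scheduling avoids wasted work when they are frequent."""
    if low_conflict(conflict_rate) >= high_conflict(conflict_rate):
        return "aggressive"
    return "conservative"
```

With a low observed conflict rate such a scheduler runs optimistically and switches to conservative behavior as conflicts grow; the real ITS derives this decision from a fuzzy-logic expert system over the environment's characteristics.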
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, Programa de Pós-Graduação em Informática, 2016.
Abstract:
Dissertation submitted for the degree of Master in Informatics Engineering.
Abstract:
This paper develops a collision prediction model for three-leg junctions located on national roads (NR) in Northern Portugal. The focus is to identify factors that contribute to collision-type crashes at those locations, mainly factors related to road geometric consistency, since the literature is scarce on those, and to investigate the impact of three modeling methods on the factors of those models: generalized estimating equations, random-effects negative binomial models, and random-parameters negative binomial models. The database used included data published between 2008 and 2010 for 177 three-leg junctions. It was split into three groups of contributing factors which were tested sequentially for each of the adopted models: first only traffic; then traffic and the geometric characteristics of the junctions within their area of influence; and, lastly, factors capturing the difference between the geometric characteristics of the segments bordering the junctions' area of influence and the segment included in that area. The choice of the best modeling technique was supported by a cross-validation performed to ascertain the best model for the three sets of researched contributing factors. The models fitted with random-parameters negative binomial models performed best in this process. In the best models obtained for every modeling technique, the characteristics of the road environment, including proxy measures for geometric consistency, along with traffic volume, contribute significantly to the number of collisions. Both the variables concerning the junctions and the national highway segments within their area of influence, as well as variations from those characteristics in the roadway segments bordering that area of influence, proved their relevance; there is therefore a clear need to incorporate the effect of geometric consistency in safety studies of three-leg junctions.
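The random-effects and random-parameters models mentioned above build on the negative binomial (NB2) distribution, whose variance mu + alpha·mu² captures the overdispersion typical of crash counts. A minimal sketch of its probability mass function follows; the parameter values used below are illustrative, not the paper's fitted coefficients.

```python
from math import exp, lgamma, log

def nb2_pmf(y, mu, alpha):
    """P(Y = y) for a negative binomial with mean mu and
    overdispersion alpha, so Var(Y) = mu + alpha * mu**2."""
    r = 1.0 / alpha              # NB 'size' parameter
    p = r / (r + mu)             # success-probability parameterization
    return exp(lgamma(y + r) - lgamma(r) - lgamma(y + 1)
               + r * log(p) + y * log(1.0 - p))
```

In a crash model, mu would come from a log link such as mu = exp(b0 + b1·log(traffic) + ...); the random-parameters variant additionally lets the coefficients vary across junctions.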
Abstract:
The GO annotation dataset provided by the UniProt Consortium (GOA: http://www.ebi.ac.uk/GOA) is a comprehensive set of evidence-based associations between terms from the Gene Ontology resource and UniProtKB proteins. Currently supplying over 100 million annotations to 11 million proteins in more than 360,000 taxa, this resource has doubled in size over the last two years and has benefited from a wealth of checks to improve annotation correctness and consistency, as well as now supplying greater information content enabled by GO Consortium annotation format developments. Detailed, manual GO annotations obtained from the curation of peer-reviewed papers are directly contributed by all UniProt curators and supplemented with manual and electronic annotations from 36 model-organism and domain-focused scientific resources. The inclusion of high-quality automatic annotation predictions ensures that the UniProt GO annotation dataset supplies functional information for a wide range of proteins, including those from poorly characterized, non-model organism species. UniProt GO annotations are freely available in a range of formats accessible by both file downloads and web-based views. In addition, the introduction of a new, normalized file format in 2010 has made for easier handling of the complete UniProt-GOA dataset.
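GOA distributes these annotations in the GO Consortium's tab-separated GAF format; a minimal reader for the first seven GAF columns might look like the sketch below. The sample identifiers used to exercise it are hypothetical.

```python
def parse_gaf_line(line):
    """Split one (non-comment) GAF annotation line into its leading
    columns: DB, object id/symbol, qualifier, GO id, reference and
    evidence code."""
    cols = line.rstrip("\n").split("\t")
    return {
        "db": cols[0],
        "db_object_id": cols[1],
        "db_object_symbol": cols[2],
        "qualifier": cols[3],
        "go_id": cols[4],
        "reference": cols[5],
        "evidence_code": cols[6],
    }
```

Real GAF files carry further columns (aspect, taxon, date, assigned-by, and so on) and `!`-prefixed header lines that a full parser would skip.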
Abstract:
The Quaternary Active Faults Database of Iberia (QAFI) is an initiative led by the Institute of Geology and Mines of Spain (IGME) to build a public repository of scientific data on faults with documented activity during the last 2.59 Ma (the Quaternary). QAFI also addresses the need to transfer geologic knowledge to practitioners of seismic hazard and risk in Iberia by identifying and characterizing seismogenic fault sources. QAFI is populated with information freely provided by more than 40 Earth science researchers, storing to date a total of 262 records. In this article we describe the development and evolution of the database, as well as its internal architecture. Additionally, a first global analysis of the data is provided, with a special focus on the fault length and slip-rate parameters. Finally, the completeness of the database and the internal consistency of the data are discussed. Even though QAFI v.2.0 is the most current resource for calculating fault-related seismic hazard in Iberia, the database is still incomplete and requires further review.
Abstract:
Calculations of the absorption of solar radiation by atmospheric gases, and water vapor in particular, depend on the quality of databases of spectral line parameters. There has been increasing scrutiny of databases such as HITRAN in recent years, but this has mostly been performed on a band-by-band basis. We report nine high-spectral-resolution (0.03 cm−1) measurements of the solar radiation reaching the surface in southern England over the wavenumber range 2000 to 12,500 cm−1 (0.8 to 5 µm) that allow a unique assessment of the consistency of the spectral line databases over this entire spectral region. The data are assessed in terms of the modeled water vapor column required to bring calculations and observations into agreement; for an entirely consistent database, this water vapor column should be constant with frequency. For the HITRAN01 database, the spread in water vapor column is about 11%, with distinct shifts between different spectral regions. The HITRAN04 database is in significantly better agreement (about 5% spread) in the completely updated 3000 to 8000 cm−1 spectral region, but inconsistencies between individual spectral regions remain: for example, in the 8000 to 9500 cm−1 spectral region, the results indicate an 18% (±1%) underestimate in line intensities with respect to the 3000 to 8000 cm−1 region. These measurements also indicate the impact of isotopic fractionation of water vapor in the 2500 to 2900 cm−1 range, where HDO lines dominate over the lines of the most abundant isotope of H2O.
Abstract:
We report on the consistency of water vapour line intensities in selected spectral regions between 800–12,000 cm−1 under atmospheric conditions using sun-pointing Fourier transform infrared spectroscopy. Measurements were made across a number of days at both a low and high altitude field site, sampling a relatively moist and relatively dry atmosphere. Our data suggests that across most of the 800–12,000 cm−1 spectral region water vapour line intensities in recent spectral line databases are generally consistent with what was observed. However, we find that HITRAN-2008 water vapour line intensities are systematically lower by up to 20% in the 8000–9200 cm−1 spectral interval relative to other spectral regions. This discrepancy is essentially removed when two new linelists (UCL08, a compilation of linelists and ab-initio calculations, and one based on recent laboratory measurements by Oudot et al. (2010) [10] in the 8000–9200 cm−1 spectral region) are used. This strongly suggests that the H2O line strengths in the HITRAN-2008 database are indeed underestimated in this spectral region and in need of revision. The calculated global-mean clear-sky absorption of solar radiation is increased by about 0.3 W m−2 when using either the UCL08 or Oudot line parameters in the 8000–9200 cm−1 region, instead of HITRAN-2008. We also found that the effect of isotopic fractionation of HDO is evident in the 2500–2900 cm−1 region in the observations.
Abstract:
A database containing the global and diffuse components of the surface solar hourly irradiation measured from 1 January 2004 to 31 December 2010 at eight stations of the Egyptian Meteorological Authority is presented. For three of these sites (Cairo, Aswan, and El-Farafra), the direct component is also available. In addition, a series of meteorological variables, including surface pressure, relative humidity, temperature, and wind speed and direction, is provided at the same hourly resolution at all stations. The details of the experimental sites and the instruments used for the acquisition are given. Special attention is paid to the quality of the data, and the procedure applied to flag suspicious or erroneous measurements is described in detail. Between 88 and 99% of the daytime measurements are validated by this quality control. Except at Barrani, where the number is lower (13,500), between 20,000 and 29,000 measurements of global and diffuse hourly irradiation are available at each site for the 7-year period. Similarly, from 9,000 to 13,000 measurements of direct hourly irradiation are provided for the three sites where this component is measured. With its high temporal resolution, this consistent irradiation and meteorological database constitutes a reliable source for estimating the potential of solar energy in Egypt. It is also suited to the study of high-frequency atmospheric processes such as the impact of aerosols on atmospheric radiative transfer. In the near future, it is planned to extend the present 2004-2010 database on a regular basis.
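A quality-control pass of the kind described can be sketched as a per-record flagging function. The physical limits used here (diffuse not exceeding global by more than a small instrument tolerance, global bounded by the top-of-atmosphere value) are illustrative placeholders, not the paper's actual thresholds.

```python
def qc_flag(global_irr, diffuse_irr, toa_irr):
    """Flag one hourly irradiation record (values in Wh/m^2).
    Thresholds are illustrative, not the published QC limits."""
    if global_irr < 0 or diffuse_irr < 0:
        return "erroneous"                    # physically impossible
    if diffuse_irr > global_irr * 1.05:       # diffuse cannot exceed global
        return "suspicious"
    if global_irr > toa_irr:                  # cannot exceed extraterrestrial
        return "suspicious"
    return "valid"
```

Running such a filter over every daytime record, and keeping only the "valid" ones, is what yields validation rates like the 88-99% quoted above.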
Abstract:
One of the most demanding needs in cloud computing is that of having scalable and highly available databases. One way to address these needs is to leverage the scalable replication techniques developed over the last decade. These techniques make it possible to increase both the availability and the scalability of databases. Many replication protocols have been proposed during the last decade; the main research challenge has been how to scale under the eager replication model, the one that provides consistency across replicas. In this paper, we examine three eager database replication systems available today: Middle-R, C-JDBC and MySQL Cluster, using the TPC-W benchmark. We analyze their architectures and replication protocols and compare their performance both in the absence of failures and when failures occur.
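The eager model the paper studies commits a write only after every replica has applied it, which is what keeps replicas mutually consistent. A toy in-memory sketch of that idea follows; the class and method names are illustrative, not any of the three systems' APIs.

```python
class EagerReplicatedDB:
    """Toy eager (synchronous) replication: a write is acknowledged
    only after every replica has applied it."""

    def __init__(self, n_replicas):
        self.replicas = [dict() for _ in range(n_replicas)]

    def write(self, key, value):
        # Apply to ALL replicas before acknowledging the commit.
        for replica in self.replicas:
            replica[key] = value
        return "committed"

    def read(self, key, replica_idx=0):
        # Any replica can serve reads and will return the same value.
        return self.replicas[replica_idx].get(key)
```

Lazy replication would instead acknowledge after updating one replica and propagate in the background, trading consistency for latency; scaling the eager variant without paying that synchronization cost on every write is exactly the challenge the surveyed protocols address.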
Abstract:
In this paper, the authors introduce a novel mechanism for data management in a middleware for smart home control, where a relational database and semantic ontology storage are used side by side in a Data Warehouse. An annotation system has been designed to specify the storage format and location, register new ontology concepts and, most importantly, guarantee data consistency between the two storage methods. To ease the data persistence process, the Data Access Object (DAO) pattern is applied and optimized to strengthen the data consistency assurance. This mechanism also simplifies the development of applications and their integration with BATMP. Finally, an application named "Parameter Monitoring Service" is presented as an example to assess the feasibility of the system.
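The DAO-based consistency guarantee can be sketched as a single save path that writes to both stores and rolls back the first write if the second fails. The class below is an illustrative stand-in, not the BATMP implementation, and the two dicts merely stand in for the relational and ontology back ends.

```python
class DualStoreDAO:
    """Toy DAO that keeps a relational store and an ontology store in
    step: either both see a write, or neither does."""

    def __init__(self):
        self.relational = {}   # stands in for the relational database
        self.ontology = {}     # stands in for the semantic ontology store

    def save(self, key, value, fail_ontology=False):
        self.relational[key] = value
        try:
            if fail_ontology:  # simulated outage of the second store
                raise RuntimeError("ontology store unavailable")
            self.ontology[key] = value
        except RuntimeError:
            # Roll back the first write so the stores stay consistent.
            del self.relational[key]
            return False
        return True
```

Centralizing the dual write behind one DAO method is what lets applications persist data without knowing that two heterogeneous stores sit underneath.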
Abstract:
One of the most demanding needs in cloud computing and big data is that of having scalable and highly available databases. One way to address these needs is to leverage the scalable replication techniques developed over the last decade. These techniques make it possible to increase both the availability and the scalability of databases. Many replication protocols have been proposed during the last decade; the main research challenge has been how to scale under the eager replication model, the one that provides consistency across replicas. This thesis provides an in-depth study of three eager database replication systems based on relational systems, Middle-R, C-JDBC and MySQL Cluster, and three systems based on In-Memory Data Grids: JBoss Data Grid, Oracle Coherence and Terracotta Ehcache. The thesis examines these systems in terms of their architecture, replication protocols, fault tolerance and various other functionalities. It also provides an experimental analysis of these systems using state-of-the-art benchmarks: TPC-C and TPC-W for the relational systems and the Yahoo! Cloud Serving Benchmark for the In-Memory Data Grids. The thesis also discusses three graph databases, Neo4j, Titan and Sparksee, in terms of their architecture and transactional capabilities, and highlights the weaker transactional consistency guarantees these systems provide. Finally, it presents an implementation of snapshot isolation in the Neo4j graph database to provide stronger isolation guarantees for transactions.
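Snapshot isolation, which the thesis implements for Neo4j, gives each transaction a consistent view as of its start timestamp by keeping multiple versions of every item. A minimal multi-version sketch follows; it is an illustration of the general technique, not the thesis code, and it omits the write-write conflict detection that full snapshot isolation also requires.

```python
class MVStore:
    """Minimal multi-version store: a transaction reads the snapshot
    as of its start timestamp; commits install new versions."""

    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value), ts ascending
        self.ts = 0          # logical commit clock

    def begin(self):
        return self.ts       # snapshot timestamp for a new transaction

    def read(self, key, snapshot_ts):
        # Newest version committed at or before the snapshot wins.
        for commit_ts, value in reversed(self.versions.get(key, [])):
            if commit_ts <= snapshot_ts:
                return value
        return None

    def commit(self, writes):
        self.ts += 1
        for key, value in writes.items():
            self.versions.setdefault(key, []).append((self.ts, value))
        return self.ts
```

Because readers never block writers (they just read older versions), this is a popular way to strengthen the weak default isolation of graph stores without sacrificing read throughput.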
Abstract:
"UILU-ENG 83-1724."--Cover.
Abstract:
This paper discusses the advantages of database-backed websites and describes the model for a library website implemented at the University of Nottingham using open source software, PHP and MySQL. As websites continue to grow in size and complexity it becomes increasingly important to introduce automation to help manage them. It is suggested that a database-backed website offers many advantages over one built from static HTML pages. These include a consistency of style and content, the ability to present different views of the same data, devolved editing and enhanced security. The University of Nottingham Library Services website is described and issues surrounding its design, technological implementation and management are explored.