937 results for relational databases


Relevance:

60.00%

Publisher:

Abstract:

Over the last few decades we have witnessed what is called the “information explosion”. With the advent of new technologies and new contexts, the volume, velocity and variety of data have increased exponentially, becoming what is known today as big data. Among the organizations affected we highlight telecommunications operators, which gather, using network monitoring equipment, millions of network event records, the Call Detail Records (CDRs) and the Event Detail Records (EDRs), commonly known as xDRs. These records are stored and later processed to compute network performance and quality-of-service metrics. With the ever-increasing number of collected xDRs, the volume that needs to be stored has grown exponentially, making current solutions based on relational databases no longer suitable. To tackle this problem, the relational data store can be replaced by the Hadoop Distributed File System (HDFS). However, HDFS is simply a distributed file system and therefore does not support any aspect of the relational paradigm. To overcome this difficulty, this paper presents a framework that enables systems that currently insert data into relational databases to keep doing so transparently when migrating to Hadoop. As proof of concept, the developed platform was integrated with Altaia - a performance and QoS management system for telecommunications networks and services.
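
To make the migration idea concrete, here is a minimal Python sketch of the general approach, not the framework described above: a table-like object keeps the familiar insert call while actually appending records to a file in HDFS. The class name, file layout, column list and namenode URL are hypothetical, and the snippet assumes the third-party HdfsCLI package (hdfs).

```python
# Illustrative sketch only, not the paper's framework: keep a relational-style
# insert() entry point while the storage layer moves to HDFS.
import csv
import io

from hdfs import InsecureClient  # pip install hdfs


class XdrTable:
    """Exposes an INSERT-like call but appends CSV rows to a file in HDFS."""

    def __init__(self, namenode_url, hdfs_dir, columns):
        self.client = InsecureClient(namenode_url)
        self.path = f"{hdfs_dir}/records.csv"   # hypothetical file layout
        self.columns = columns

    def insert(self, row):
        # Serialize the row in column order, as an INSERT statement would.
        buf = io.StringIO()
        csv.writer(buf).writerow([row[c] for c in self.columns])
        data = buf.getvalue()
        # Create the file on the first insert, append afterwards.
        if self.client.status(self.path, strict=False) is None:
            self.client.write(self.path, data=data, encoding="utf-8")
        else:
            self.client.write(self.path, data=data, append=True, encoding="utf-8")


# Usage: calling code keeps "inserting rows" exactly as it did with an RDBMS.
table = XdrTable("http://namenode:9870", "/data/xdrs", ["call_id", "duration", "cell"])
table.insert({"call_id": "c-001", "duration": 37, "cell": "A12"})
```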

Relevance:

60.00%

Publisher:

Abstract:

This thesis presents a cloud-based software platform for sharing publicly available scientific datasets. The proposed platform leverages the potential of NoSQL databases and asynchronous I/O technologies, such as Node.js, in order to achieve high performance and flexible solutions. The solution serves two main groups of users: dataset providers, the researchers responsible for sharing and maintaining datasets, and dataset users, those who want to access the public data. The former are given tools to easily publish and maintain large volumes of data, whereas the latter are given tools to preview and create subsets of the original data through filter and aggregation operations. The choice of NoSQL over more traditional RDBMS emerged from an extended benchmark between a relational database (MySQL) and a NoSQL database (MongoDB) that is also presented in this thesis. The results obtained confirm the theoretical expectation that NoSQL databases are more suitable for the kind of data that the system's users will be handling, i.e., non-homogeneous data structures that can grow very fast. It is envisioned that a platform like this can lead the way to a new era of scientific data sharing in which researchers can easily share and access all kinds of datasets, and, in more advanced scenarios, be presented with recommended datasets and existing research results built on top of those recommendations.
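
As an illustration of the kind of benchmark referred to above (not the thesis' actual benchmark code), the sketch below times bulk inserts into MySQL and MongoDB. The connection settings, database and table/collection names, and the synthetic rows are hypothetical, and it assumes the mysql-connector-python and pymongo packages with both servers running locally.

```python
# Hypothetical insert benchmark: fixed-schema MySQL vs schema-free MongoDB.
import time

import mysql.connector
from pymongo import MongoClient

ROWS = [{"sensor": f"s{i}", "value": i * 0.5} for i in range(10_000)]

# --- MySQL: fixed schema, batched INSERTs in a single transaction ---
mysql_conn = mysql.connector.connect(user="bench", password="bench", database="benchdb")
cur = mysql_conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS readings (sensor VARCHAR(16), value DOUBLE)")
start = time.perf_counter()
cur.executemany("INSERT INTO readings (sensor, value) VALUES (%s, %s)",
                [(r["sensor"], r["value"]) for r in ROWS])
mysql_conn.commit()
print("MySQL inserts:  ", time.perf_counter() - start, "s")

# --- MongoDB: documents inserted in one batch, no schema declaration needed ---
mongo = MongoClient("mongodb://localhost:27017")
coll = mongo.benchdb.readings
start = time.perf_counter()
coll.insert_many(ROWS)
print("MongoDB inserts:", time.perf_counter() - start, "s")
```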

Relevance:

40.00%

Publisher:

Abstract:

As enterprises grow and the need to share information across departments and business areas becomes more critical, companies are turning to integration to provide a method for interconnecting heterogeneous, distributed and autonomous systems. Whether the sales application needs to interface with the inventory application or the procurement application needs to connect to an auction site, it seems that any application can be made better by integrating it with other applications. Integration between applications can face several difficulties because applications may not have been designed and implemented with integration in mind. With regard to integration issues, two-tier software systems, composed of a database tier and a “front-end” tier (interface), have shown some limitations. As a solution to overcome the two-tier limitations, three-tier systems were proposed in the literature. By adding a middle tier (referred to as middleware) between the database tier and the “front-end” tier (or simply the application), three main benefits emerge. The first benefit is that dividing software systems into three tiers increases their integration capabilities with other systems. The second is that modifications to individual tiers may be carried out without necessarily affecting the other tiers and integrated systems, and the third, a consequence of the others, is that fewer maintenance tasks are required in the software system and in all integrated systems. Concerning software development in three tiers, this dissertation focuses on two emerging technologies, the Semantic Web and Service Oriented Architecture, combined with middleware. Blending these two technologies with middleware resulted in the development of the Swoat framework (Service and Semantic Web Oriented ArchiTecture) and leads to four synergistic advantages: (1) it allows the creation of loosely coupled systems, decoupling the database from the “front-end” tiers and therefore reducing maintenance; (2) the database schema is transparent to the “front-end” tiers, which are aware only of the information model (or domain model) that describes what data is accessible; (3) integration with other heterogeneous systems is enabled through services provided by the middleware; (4) the service request made by the “front-end” tier focuses on ‘what’ data is needed rather than on ‘where’ and ‘how’ it is obtained, reducing application development time.
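
The following Python sketch illustrates the three-tier idea only; it is not the Swoat framework. A middleware service exposes a domain model ('what' data) and internally maps it to SQL ('where' and 'how'), so the front-end never sees the database schema. The domain-model mapping, table and column names are invented for the example, and SQLite stands in for the database tier.

```python
# Minimal three-tier sketch: front-end asks for domain concepts, middleware
# translates them to SQL against a hidden schema. Names are hypothetical.
import sqlite3

# Middleware tier: domain model exposed to front-ends, database schema hidden.
DOMAIN_MODEL = {
    "Customer": {"table": "tbl_cust_v2",
                 "fields": {"name": "cust_name", "city": "addr_city"}},
}


def get(concept, fields, conn):
    """Service operation used by the front-end tier."""
    mapping = DOMAIN_MODEL[concept]
    columns = ", ".join(mapping["fields"][f] for f in fields)
    rows = conn.execute(f"SELECT {columns} FROM {mapping['table']}").fetchall()
    return [dict(zip(fields, row)) for row in rows]


# Database tier: an in-memory database stands in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_cust_v2 (cust_name TEXT, addr_city TEXT)")
conn.execute("INSERT INTO tbl_cust_v2 VALUES ('Ana', 'Funchal')")

# Front-end tier: asks for Customer names and cities, unaware of the schema.
print(get("Customer", ["name", "city"], conn))  # [{'name': 'Ana', 'city': 'Funchal'}]
```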

Relevance:

40.00%

Publisher:

Abstract:

Multi-relational data mining enables pattern mining from multiple tables. Existing multi-relational association rule mining algorithms are not able to process large volumes of data, because the amount of memory required exceeds the amount available. The proposed algorithm, MR-Radix, provides a framework that optimizes memory usage. It also uses the concept of partitioning to handle large volumes of data. The original contribution of this proposal is that it achieves superior performance compared to related algorithms and, moreover, successfully completes the task of mining association rules in large databases, bypassing the problem of limited available memory. One of the tests showed that MR-Radix uses fourteen times less memory than GFP-growth. © 2011 IEEE.
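
To illustrate how partitioning bounds memory use (this is not the MR-Radix algorithm, and it works on a single transaction table rather than multi-relational data), the sketch below finds frequent item pairs in two passes: candidates are collected partition by partition, so only one partition's counts are held in memory at a time.

```python
# Partition-based frequent-pair mining: a simplified, single-table illustration
# of how partitioning keeps candidate counts bounded in memory.
from collections import Counter
from itertools import combinations


def frequent_pairs(transactions, min_support, partition_size=1000):
    """Two-pass, partition-based counting of frequent item pairs."""
    candidates = set()
    # Pass 1: find pairs that are locally frequent in at least one partition.
    for start in range(0, len(transactions), partition_size):
        part = transactions[start:start + partition_size]
        local = Counter(p for t in part for p in combinations(sorted(set(t)), 2))
        local_min = max(1, int(min_support * len(part) / len(transactions)))
        candidates |= {pair for pair, c in local.items() if c >= local_min}
    # Pass 2: count only the surviving candidates over the whole dataset.
    counts = Counter()
    for t in transactions:
        items = set(t)
        counts.update(p for p in candidates if set(p) <= items)
    return {pair: c for pair, c in counts.items() if c >= min_support}


data = [["a", "b", "c"], ["a", "b"], ["b", "c"], ["a", "c"], ["a", "b", "c"]]
print(frequent_pairs(data, min_support=3, partition_size=2))
```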

Relevance:

30.00%

Publisher:

Abstract:

The changes introduced into the European Higher Education Area (EHEA) by the Bologna Process, together with renewed pedagogical and methodological practices, have created a new teaching-learning paradigm: Student-Centred Learning. In addition, the last few years have been characterized by the application of information technologies, especially the Semantic Web, not only to the teaching-learning process, but also to administrative processes within learning institutions. The aim of this study was, on the one hand, to present a model for identifying and classifying Competencies and Learning Outcomes and, on the other hand, to develop the computer applications of the information management model, namely a relational database and an ontology.

Relevance:

30.00%

Publisher:

Abstract:

Final degree project (TFC) report analyzing the SQL:1999 standard and comparing it with PostgreSQL and Oracle.
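
As a concrete example of what an SQL:1999 analysis covers, the snippet below runs a recursive common table expression, a feature introduced by SQL:1999. The report compares PostgreSQL and Oracle; SQLite is used here only so the example runs without a database server, and the employee table is invented for the illustration.

```python
# SQL:1999 feature demo: recursive common table expression (WITH RECURSIVE).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER, name TEXT, manager_id INTEGER);
    INSERT INTO employee VALUES (1, 'Root', NULL), (2, 'Ana', 1), (3, 'Luis', 2);
""")

# Recursive query: the whole reporting chain under employee 1.
rows = conn.execute("""
    WITH RECURSIVE chain(id, name) AS (
        SELECT id, name FROM employee WHERE id = 1
        UNION ALL
        SELECT e.id, e.name FROM employee e JOIN chain c ON e.manager_id = c.id
    )
    SELECT name FROM chain
""").fetchall()
print(rows)  # [('Root',), ('Ana',), ('Luis',)]
```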

Relevance:

30.00%

Publisher:

Abstract:

A feasible way of implementing a temporal database is by mapping the temporal data model onto a conventional data model supported by a commercial database management system. Even though extensions to standard SQL have been proposed to support temporal databases, such proposals have not yet made their way through standardization processes. This paper attempts to implement database operators such as aggregates and the universal quantifier for temporal databases, implemented on top of relational database systems, using currently available SQL standards.
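
As one possible illustration of the approach (not the paper's actual implementation), the sketch below expresses a temporal universal quantifier - "did the employee work on every day of the query interval?" - using only standard SQL double negation (nested NOT EXISTS) over an auxiliary table of time points. The table names, integer day granularity and data are assumptions, and SQLite merely stands in for a relational DBMS.

```python
# Temporal "for all" via standard SQL double negation over a time-point table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE works (emp TEXT, start_day INTEGER, end_day INTEGER);
    INSERT INTO works VALUES ('ana', 1, 10), ('ana', 11, 31),
                             ('rui', 1, 10), ('rui', 20, 31);
    CREATE TABLE day (d INTEGER);
""")
conn.executemany("INSERT INTO day VALUES (?)", [(d,) for d in range(1, 32)])

# Employees for whom no day in [1, 31] is left uncovered by a stored interval.
query = """
    SELECT DISTINCT w.emp
    FROM works w
    WHERE NOT EXISTS (
        SELECT 1 FROM day
        WHERE day.d BETWEEN 1 AND 31
          AND NOT EXISTS (
              SELECT 1 FROM works w2
              WHERE w2.emp = w.emp AND day.d BETWEEN w2.start_day AND w2.end_day))
"""
print(conn.execute(query).fetchall())  # [('ana',)] - 'rui' has a gap (days 11-19)
```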

Relevance:

30.00%

Publisher:

Abstract:

DBMODELING is a relational database of annotated comparative protein structure models and their metabolic pathway characterization. It is focused on enzymes identified in the genomes of Mycobacterium tuberculosis and Xylella fastidiosa. The main goal of the database is to provide structural models to be used in docking simulations and drug design. However, since the accuracy of structural models is highly dependent on the sequence identity between template and target, it is necessary to make clear to the user that only models showing high structural quality should be used in such efforts. Molecular modeling of these genomes generated a database in which all structural models were built using alignments presenting more than 30% sequence identity, generating models with medium and high accuracy. All models in the database are publicly accessible at http://www.biocristalografia.df.ibilce.unesp.br/tools. The DBMODELING user interface provides user-friendly menus, so that all information can be printed in one step from any web browser. Furthermore, DBMODELING also provides a docking interface, which allows the user to carry out geometric docking simulations against the molecular models available in the database. There are three other important homology model databases: MODBASE, SWISSMODEL, and GTOP. The main applications of these databases are described in the present article. © 2007 Bentham Science Publishers Ltd.

Relevance:

30.00%

Publisher:

Abstract:

GlycoSuiteDB is a relational database that curates information from the scientific literature on glycoprotein-derived glycan structures, their biological sources, the references in which the glycan was described and the methods used to determine the glycan structure. To date, the database includes most published O-linked oligosaccharides from the last 50 years and most N-linked oligosaccharides that were published in the 1990s. For each structure, information is available concerning the glycan type, linkage and anomeric configuration, mass and composition. Detailed information is also provided on native and recombinant sources, including tissue and/or cell type, cell line, strain and disease state. Where known, the proteins to which the glycan structures are attached are reported, and cross-references to the SWISS-PROT/TrEMBL protein sequence databases are given if applicable. The GlycoSuiteDB annotations include literature references which are linked to PubMed, and detailed information on the methods used to determine each glycan structure is noted to help the user assess the quality of the structural assignment. GlycoSuiteDB has a user-friendly web interface which allows the researcher to query the database using monoisotopic or average mass, monosaccharide composition, glycosylation linkages (e.g. N- or O-linked), reducing terminal sugar, attached protein, taxonomy, tissue or cell type and GlycoSuiteDB accession number. Advanced queries using combinations of these parameters are also possible. GlycoSuiteDB can be accessed on the web at http://www.glycosuite.com.

Relevance:

30.00%

Publisher:

Abstract:

Natural Language Interfaces to Query Databases (NLIDBs) have been an active research field since the 1960s. However, they have not been widely adopted. This article explores some of the biggest challenges and approaches for building NLIDBs and proposes techniques to reduce implementation and adoption costs. The article describes AskMe*, a new system that leverages some of these approaches and adds an innovative feature: query-authoring services, which lower the entry barrier for end users. Advantages of these approaches are proven with experimentation. Results confirm that, even when AskMe* is automatically reconfigurable against multiple domains, its accuracy is comparable to domain-specific NLIDBs.
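
For readers unfamiliar with NLIDBs, the toy sketch below shows the core translation step - mapping natural-language tokens to schema elements and assembling an SQL query. It is not how AskMe* works internally; the lexicon, schema and question are invented, and real systems handle far more linguistic variation.

```python
# Toy NLIDB: map question tokens to schema elements, then build SQL.
import re
import sqlite3

LEXICON = {                      # natural-language token -> schema element
    "customers": ("table", "customer"),
    "lisbon":    ("value", ("city", "Lisbon")),
}


def to_sql(question):
    table, where = None, []
    for token in re.findall(r"\w+", question.lower()):
        kind, target = LEXICON.get(token, (None, None))
        if kind == "table":
            table = target
        elif kind == "value":
            column, value = target
            where.append(f"{column} = '{value}'")
    sql = f"SELECT * FROM {table}"
    return sql + (" WHERE " + " AND ".join(where) if where else "")


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT, city TEXT)")
conn.execute("INSERT INTO customer VALUES ('Ana', 'Lisbon'), ('Rui', 'Porto')")

sql = to_sql("Which customers are in Lisbon?")
print(sql)                           # SELECT * FROM customer WHERE city = 'Lisbon'
print(conn.execute(sql).fetchall())  # [('Ana', 'Lisbon')]
```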

Relevance:

30.00%

Publisher:

Abstract:

When object databases arrived on the scene some ten years ago, they provided database capabilities for previously neglected, complex applications, such as CAD, but were burdened with one inherent teething problem, poor performance. Physical database design is one tool that can provide performance improvements and it is the general area of concern for this thesis. Clustering is one fruitful design technique which can provide improvements in performance. However, clustering in object databases has not been explored in depth and so has not been truly exploited. Further, clustering, although a physical concern, can be determined from the logical model. The object model is richer than previous models, notably the relational model, and so it is anticipated that the opportunities with respect to clustering are greater. This thesis provides a thorough analysis of object clustering strategies with a view to highlighting any links between the object logical and physical model and improving performance. This is achieved by considering all possible types of object logical model construct and the implementation of those constructs in terms of theoretical clustering strategies to produce actual clustering arrangements. This analysis results in a greater understanding of object clustering strategies, aiding designers in the development process and providing some valuable rules of thumb to support the design process.

Relevance:

30.00%

Publisher:

Abstract:

Database systems have a user interface, one of the components of which will normally be a query language based on a particular data model. Typically data models provide primitives to define, manipulate and query databases. Often these primitives are designed to form self-contained query languages. This thesis describes a prototype implementation of a system which allows users to specify queries against the database in a query language whose primitives are not those provided by the actual model on which the database system is based, but those provided by a different data model. The implementation chosen is the Functional Query Language Front End (FQLFE). This uses the Daplex functional data model and query language. Using FQLFE, users can specify the underlying database (based on the relational model) in terms of Daplex. Queries against this specified view can then be made in Daplex. FQLFE transforms these queries into the query language (Quel) of the underlying target database system (Ingres). The automation of part of the Daplex function definition phase is also described and its implementation discussed.
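
A rough sketch of the kind of transformation such a front end performs is shown below: a functional-style query description is rendered as a Quel retrieve statement for the underlying Ingres database. This is not FQLFE's code, the tiny query structure is invented, and the generated Quel syntax is only approximate.

```python
# Illustrative functional-query-to-Quel rendering; names and syntax approximate.
def to_quel(entity, result_fn, condition_fn=None):
    """Build a Quel retrieve statement from a functional-style query."""
    var = entity[0]                      # range variable, e.g. 'e' for employee
    lines = [f"range of {var} is {entity}",
             f"retrieve ({var}.{result_fn})"]
    if condition_fn:
        fn, op, value = condition_fn     # e.g. salary(e) > 30000
        lines.append(f"where {var}.{fn} {op} {value}")
    return "\n".join(lines)


# Functional reading: "for each e in employee such that salary(e) > 30000,
# return name(e)" -- emitted as Quel for the underlying database.
print(to_quel("employee", "name", ("salary", ">", 30000)))
# range of e is employee
# retrieve (e.name)
# where e.salary > 30000
```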

Relevance:

30.00%

Publisher:

Abstract:

This article describes the design and implementation of a computer-aided tool called Relational Algebra Translator (RAT), used in database courses for the teaching of relational algebra. There was a problem when introducing the relational algebra topic in the course EIF 211 Design and Implementation of Databases, which belongs to the Engineering in Information Systems programme of the National University of Costa Rica: students attending the course lacked deep mathematical knowledge, which led to a learning problem, this being an important subject for understanding how database searches and queries work. RAT was created to enhance the teaching-learning process. The article introduces the architectural and design principles required for its implementation, such as the language symbol table, the grammatical rules and the basic algorithms that RAT uses to translate from relational algebra to the SQL language. The tool has been used for one term and has proven effective in the teaching-learning process. This encouraged the investigators to publish it on the web site www.slinfo.una.ac.cr so that the tool can be used in other university courses.
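
To show the flavour of the translation RAT performs (this is not RAT's implementation), the sketch below renders a small relational-algebra expression tree - projection over a selection over a natural join - as SQL. The tuple encoding of the algebra and the limited operator coverage are assumptions made for the example.

```python
# Toy relational-algebra-to-SQL renderer for illustration only.
import itertools

_alias = itertools.count(1)


def _from(sub):
    sql = to_sql(sub)
    # Nested SELECTs are derived tables and need an alias; tables and joins do not.
    return f"{sql} AS t{next(_alias)}" if sql.startswith("(SELECT") else sql


def to_sql(expr):
    """expr: ('rel', name) | ('select', condition, sub) |
             ('project', [attrs], sub) | ('join', sub1, sub2)."""
    op = expr[0]
    if op == "rel":
        return expr[1]
    if op == "select":                      # sigma
        return f"(SELECT * FROM {_from(expr[2])} WHERE {expr[1]})"
    if op == "project":                     # pi
        return f"(SELECT DISTINCT {', '.join(expr[1])} FROM {_from(expr[2])})"
    if op == "join":                        # natural join
        return f"({to_sql(expr[1])} NATURAL JOIN {to_sql(expr[2])})"
    raise ValueError(f"unknown operator: {op}")


# pi_name( sigma_{grade >= 70}( student |x| enrollment ) )
algebra = ("project", ["name"],
           ("select", "grade >= 70",
            ("join", ("rel", "student"), ("rel", "enrollment"))))
print(to_sql(algebra))
# (SELECT DISTINCT name FROM (SELECT * FROM (student NATURAL JOIN enrollment)
#  WHERE grade >= 70) AS t1)
```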

Relevance:

20.00%

Publisher:

Abstract:

We examine the representation of judgements of stochastic independence in probabilistic logics. We focus on a relational logic where (i) judgements of stochastic independence are encoded by directed acyclic graphs, and (ii) probabilistic assessments are flexible in the sense that they are not required to specify a single probability measure. We discuss issues of knowledge representation and inference that arise from our particular combination of graphs, stochastic independence, logical formulas and probabilistic assessments. (C) 2007 Elsevier B.V. All rights reserved.
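
As a minimal illustration of point (i), the encoding of independence judgements by a directed acyclic graph (this is only the standard local Markov reading, not the article's formalism), the sketch below lists, for a hypothetical chain A -> B -> C, the independences the graph asserts: each variable is independent of its non-descendants given its parents.

```python
# Reading independence judgements off a DAG via the local Markov condition.
DAG = {          # node -> list of parents; hypothetical chain A -> B -> C
    "A": [],
    "B": ["A"],
    "C": ["B"],
}


def descendants(node):
    children = [n for n, parents in DAG.items() if node in parents]
    result = set(children)
    for c in children:
        result |= descendants(c)
    return result


for node, parents in DAG.items():
    non_desc = set(DAG) - descendants(node) - {node} - set(parents)
    if non_desc:
        print(f"{node} is independent of {sorted(non_desc)} given {parents}")

# Output: C is independent of ['A'] given ['B']
```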