884 results for Incomplete relational database
Abstract:
JenPep is a relational database containing a compendium of thermodynamic binding data for the interaction of peptides with a range of important immunological molecules: the major histocompatibility complex, the TAP transporter, and the T cell receptor. The database also includes annotated lists of B cell and T cell epitopes. Version 2.0 of the database is implemented in a bespoke PostgreSQL database system and is fully searchable online via a Perl/HTML interface (URL: http://www.jenner.ac.uk/JenPep).
Abstract:
* The research has been partially supported by INFRAWEBS - IST FP62003/IST/2.3.2.3 Research Project No. 511723 and “Technologies of the Information Society for Knowledge Processing and Management” - IIT-BAS Research Project No. 010061.
Abstract:
The problem of finding the optimal join ordering for executing a query against a relational database management system is a combinatorial optimization problem, which makes deterministic exhaustive search impractical for queries joining a large number of relations. In this work an adaptive genetic algorithm with dynamic population size is proposed for optimizing large join queries. The performance of the algorithm is compared with that of several classical non-deterministic optimization algorithms. Experiments have been performed by optimizing several random queries against a randomly generated data dictionary. In a number of test runs, the proposed adaptive genetic algorithm with a probabilistic selection operator outperforms the canonical genetic algorithm with elitist selection as well as two common random search strategies, and proves to be a viable alternative to existing non-deterministic optimization approaches.
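As a rough illustration of the approach (not the paper's actual algorithm: the dynamic population sizing is omitted, and the cost model, cardinalities and selectivity below are invented), a genetic algorithm over left-deep join orders with fitness-proportionate ("probabilistic") selection can be sketched as follows:

```python
import random

# Toy cost model: relation cardinalities and the uniform join selectivity
# are hypothetical, purely for illustration.
CARD = {"A": 1000, "B": 5000, "C": 200, "D": 8000, "E": 300}
REL = list(CARD)

def cost(order):
    """Left-deep plan cost: sum of estimated intermediate result sizes."""
    size = CARD[order[0]]
    total = 0
    for r in order[1:]:
        size = size * CARD[r] * 0.001   # assumed uniform join selectivity
        total += size
    return total

def crossover(p1, p2):
    """Order-preserving crossover: keep a slice of p1, fill the rest from p2."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    middle = p1[i:j]
    rest = [r for r in p2 if r not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(order):
    """Swap two relations in the join order."""
    a, b = random.sample(range(len(order)), 2)
    order[a], order[b] = order[b], order[a]

def ga(pop_size=20, generations=50):
    pop = [random.sample(REL, len(REL)) for _ in range(pop_size)]
    for _ in range(generations):
        # Probabilistic (fitness-proportionate) selection on inverse cost.
        weights = [1.0 / cost(ind) for ind in pop]
        parents = random.choices(pop, weights=weights, k=pop_size)
        children = [crossover(parents[k], parents[(k + 1) % pop_size])
                    for k in range(pop_size)]
        for child in children:
            if random.random() < 0.2:
                mutate(child)
        pop = children
    return min(pop, key=cost)

best = ga()
print(best, cost(best))
```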
Abstract:
This paper presents an algorithmic solution for managing related text objects, integrating algorithms for extracting them from paper or electronic formats and for storing and processing them in a relational database. The developed algorithms for data extraction and analysis make it possible to find specific features of, and relations between, the text objects in the database. The algorithmic solution is applied to data from the field of phytopharmacy in Bulgaria, and can serve as a tool and methodology for other subject areas with complex relationships between text objects.
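As a toy illustration of such a pipeline (the regex, sample sentence and table are invented for the example, not taken from the paper), text objects and a relation between them can be extracted and stored relationally:

```python
import re
import sqlite3

# Toy pipeline: pull "text objects" (here, product names and target pests)
# out of free text with a pattern and store them in a relational table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE relation (product TEXT, pest TEXT)")

text = "ProductX is registered against aphids. ProductY is registered against mites."
for product, pest in re.findall(r"(\w+) is registered against (\w+)", text):
    conn.execute("INSERT INTO relation VALUES (?, ?)", (product, pest))

print(conn.execute("SELECT * FROM relation").fetchall())
```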
Abstract:
The paper presents a bilingual online dictionary as a useful tool for helping to preserve natural languages. The author focuses on the approach taken to develop a compatible bilingual lexical database for the Bulgarian-Polish online dictionary. A formal model for the dictionary encoding is developed in accordance with the complex structures of the dictionary entries; these structures vary depending on the grammatical characteristics of the Bulgarian headwords. The Web application for presenting the bilingual dictionary is also described.
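A minimal sketch of how such a bilingual lexical database might be laid out relationally; the table and column names are illustrative assumptions, not the paper's formal model:

```python
import sqlite3

# Hypothetical schema: Bulgarian headwords carry grammatical information,
# each headword has senses, and each sense maps to a Polish equivalent.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE headword (
    id        INTEGER PRIMARY KEY,
    lemma     TEXT NOT NULL,          -- Bulgarian headword
    pos       TEXT NOT NULL,          -- part of speech
    gram_info TEXT                    -- grammatical characteristics (gender, aspect, ...)
);
CREATE TABLE sense (
    id          INTEGER PRIMARY KEY,
    headword_id INTEGER NOT NULL REFERENCES headword(id),
    definition  TEXT
);
CREATE TABLE translation (
    id       INTEGER PRIMARY KEY,
    sense_id INTEGER NOT NULL REFERENCES sense(id),
    polish   TEXT NOT NULL            -- Polish equivalent for this sense
);
""")
conn.execute("INSERT INTO headword (lemma, pos, gram_info) VALUES ('книга', 'noun', 'feminine')")
conn.execute("INSERT INTO sense (headword_id, definition) VALUES (1, 'bound set of printed pages')")
conn.execute("INSERT INTO translation (sense_id, polish) VALUES (1, 'książka')")
for row in conn.execute("""
    SELECT h.lemma, t.polish FROM headword h
    JOIN sense s ON s.headword_id = h.id
    JOIN translation t ON t.sense_id = s.id
"""):
    print(row)
```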
Abstract:
In recent years, computational issues concerning reducts of decision tables in the rough set approach have attracted the attention of many researchers. In this paper, we present the time complexity of an algorithm computing reducts of decision tables by a relational database approach. Let DS = (U, C ∪ {d}) be a consistent decision table; we say that A ⊆ C is a relative reduct of DS if A contains a reduct of DS. Let s =
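To make the preceding definitions concrete (a brute-force illustration on an invented table, not the paper's algorithm or its complexity analysis):

```python
from itertools import combinations

# Toy consistent decision table: rows are objects, columns are condition
# attributes c1..c3 plus the decision d. Values are illustrative.
C = ["c1", "c2", "c3"]
TABLE = [
    {"c1": 0, "c2": 1, "c3": 0, "d": "yes"},
    {"c1": 1, "c2": 1, "c3": 0, "d": "no"},
    {"c1": 0, "c2": 0, "c3": 1, "d": "yes"},
    {"c1": 1, "c2": 0, "c3": 1, "d": "no"},
]

def consistent(attrs):
    """A subset preserves the decision if no two objects agree on attrs
    but carry different decisions."""
    seen = {}
    for row in TABLE:
        key = tuple(row[a] for a in attrs)
        if key in seen and seen[key] != row["d"]:
            return False
        seen[key] = row["d"]
    return True

def reducts():
    """All minimal attribute subsets that preserve the decision."""
    found = []
    for k in range(1, len(C) + 1):
        for subset in combinations(C, k):
            if consistent(subset) and not any(set(r) <= set(subset) for r in found):
                found.append(subset)
    return found

print(reducts())   # here c1 alone determines d, so ('c1',) is a reduct
```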
Abstract:
An implementation of Sem-ODB, a database management system based on the Semantic Binary Model, is presented. A metaschema of the Sem-ODB database as well as the top-level architecture of the database engine is defined. A new benchmarking technique is proposed which allows databases built on different database models to compete fairly. This technique is applied to show that Sem-ODB has excellent efficiency compared with a relational database on a certain class of database applications. A new semantic benchmark is designed which allows evaluation of the performance of the features characteristic of semantic database applications. The application used in the benchmark represents a class of problems requiring databases with sparse data, complex inheritances and many-to-many relations. Such databases can be naturally accommodated by the semantic model. A fixed predefined implementation is not enforced, allowing the database designer to choose the most efficient structures available in the DBMS tested. The results of the benchmark are analyzed.

A new high-level querying model for semantic databases is defined. It is proven adequate to serve as an efficient native semantic database interface, and has several advantages over the existing interfaces. It is optimizable and parallelizable, and supports the definition of semantic user views and the interoperability of semantic databases with other data sources such as the World Wide Web, relational, and object-oriented databases. The query is structured as a semantic database schema graph with interlinking conditionals. The query result is a mini-database, accessible in the same way as the original database. The paradigm supports and utilizes the rich semantics and inherent ergonomics of semantic databases.

Finally, the analysis and high-level design are presented for a system that exploits the superiority of the Semantic Database Model over other data models, in expressive power and ease of use, to allow uniform access to heterogeneous data sources such as semantic databases, relational databases, web sites, ASCII files, and others via a common query interface. The Sem-ODB engine is used to control all the data sources combined under a unified semantic schema. A particular application of the system, providing an ODBC interface to the WWW as a data source, is discussed.
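For intuition, facts in a binary semantic model can be pictured as (subject, relation, object) pairs traversed by a schema-graph query; the store and names below are illustrative only, not Sem-ODB's API:

```python
# A minimal stand-in for a Semantic Binary Model store: each fact links two
# objects via a named binary relation. Data is invented for the example.
facts = {
    ("alice", "works_for", "acme"),
    ("bob", "works_for", "acme"),
    ("acme", "located_in", "miami"),
}

def related(subject, relation):
    """Objects linked to `subject` via `relation`."""
    return {o for s, r, o in facts if s == subject and r == relation}

# A two-hop "schema graph" query: where are Alice's employers located?
employers = related("alice", "works_for")
print({city for e in employers for city in related(e, "located_in")})
```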
Abstract:
Graph-structured databases are widely prevalent, and the problem of effective search and retrieval from such graphs has been receiving much attention recently. For example, the Web can be naturally viewed as a graph. Likewise, a relational database can be viewed as a graph where tuples are modeled as vertices connected via foreign-key relationships. Keyword search querying has emerged as one of the most effective paradigms for information discovery, especially over HTML documents in the World Wide Web. One of its key advantages is simplicity: users do not have to learn a complex query language, and can issue queries without any prior knowledge about the structure of the underlying data. The purpose of this dissertation was to develop techniques for user-friendly, high-quality and efficient searching of graph-structured databases. Several ranked search methods on data graphs have been studied in recent years. Given a top-k keyword search query on a graph and some ranking criteria, a keyword proximity search finds the top-k answers, where each answer is a substructure of the graph containing all query keywords and illustrating the relationships between the keywords present in the graph. We applied keyword proximity search on the web and the page graph of web documents to find top-k answers that satisfy the user's information need and increase user satisfaction. Another effective ranking mechanism applied to data graphs is authority-flow based ranking. Given a top-k keyword search query on a graph, an authority-flow based search finds the top-k answers, where each answer is a node in the graph ranked according to its relevance and importance to the query. We developed techniques that improve authority-flow based search on data graphs by creating a framework to explain and reformulate queries, taking into consideration user preferences and feedback. We also applied the proposed graph search techniques to information discovery over biological databases. Our algorithms were experimentally evaluated for performance and quality, and the quality of our method was compared to current approaches by using user surveys.
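A minimal sketch of authority-flow ranking in the PageRank/ObjectRank family (the graph, the keyword-match seed and the damping factor are invented; this is not the dissertation's exact formulation):

```python
# Authority mass flows from nodes matching the query keywords along edges,
# personalized-PageRank style, via power iteration.
edges = {
    "paper1": ["paper2", "paper3"],
    "paper2": ["paper3"],
    "paper3": ["paper1"],
}
nodes = list(edges)
base = {"paper1": 1.0, "paper2": 0.0, "paper3": 0.0}  # keyword-match seed
d = 0.85  # damping factor (assumed)

score = dict(base)
for _ in range(50):  # power iteration
    new = {n: (1 - d) * base[n] for n in nodes}
    for n, outs in edges.items():
        for m in outs:
            new[m] += d * score[n] / len(outs)
    score = new

# Nodes ranked by authority relative to the query seed.
print(sorted(score.items(), key=lambda kv: -kv[1]))
```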
Abstract:
Since multimedia data, such as images and videos, are far more expressive and informative than ordinary text-based data, people find them more attractive for communication and expression. Additionally, with the rising popularity of social networking tools such as Facebook and Twitter, multimedia information retrieval can no longer be considered a solitary task. Rather, people constantly collaborate with one another while searching and retrieving information. But the very cause of the popularity of multimedia data, the huge and heterogeneous amount of information a single data object can carry, makes its management a challenging task. Multimedia data are commonly represented as multidimensional feature vectors and carry high-level semantic information. These two characteristics make them very different from traditional alphanumeric data, so trying to manage them with frameworks and rationales designed for primitive alphanumeric data is inefficient. An index structure is the backbone of any database management system, and it has been seen that the index structures present in existing relational database management frameworks cannot handle multimedia data effectively. Thus, in this dissertation, a generalized multidimensional index structure is proposed which accommodates the atypical multidimensional representation and the semantic information carried by different multimedia data seamlessly within one single framework. Additionally, the dissertation investigates the evolving relationships among multimedia data in a collaborative environment and how such information can help to customize the design of the proposed index structure when it is used to manage multimedia data in a shared environment. Extensive experiments were conducted to demonstrate the usability and better performance of the proposed framework over current state-of-the-art approaches.
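The query model over feature vectors can be illustrated with a brute-force k-nearest-neighbour search; a real multidimensional index structure such as the one proposed here exists precisely to avoid this linear scan (the vectors below are invented):

```python
import math

# Multimedia objects represented as multidimensional feature vectors.
features = {
    "img1": (0.1, 0.8, 0.3),
    "img2": (0.9, 0.2, 0.4),
    "img3": (0.2, 0.7, 0.35),
}

def knn(query, k=2):
    """k nearest neighbours by Euclidean distance in feature space
    (linear scan; an index would prune most of these comparisons)."""
    return sorted(features, key=lambda name: math.dist(query, features[name]))[:k]

print(knn((0.15, 0.75, 0.3)))
```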
Abstract:
This poster presentation from the May 2015 Florida Library Association Conference, along with the Everglades Explorer discovery portal at http://ee.fiu.edu, demonstrates how traditional bibliographic and curatorial principles can be applied to: 1) selection, cross-walking and aggregation of metadata linking end-users to widespread digital resources from multiple silos; 2) harvesting of select PDFs, HTML and media for web archiving and access; 3) selection of CMS domains, sub-domains and folders for targeted searching using an API. Choosing content for this discovery portal is comparable to the past scholarly practice of creating and publishing subject bibliographies, except that metadata and data are housed in relational databases. This new and yet traditional capacity coincides with the growth of bibliographic utilities (MarcEdit), the evolution of open-source discovery systems (eXtensible Catalog), the development of target-capable web crawling and archiving systems (Archive-It), and specialized search APIs (Google). At the same time, historical and technical changes, specifically the increasing fluidity and re-purposing of syndicated metadata, make this possible. It equally stems from the expansion of freely accessible digitized legacy and born-digital resources. Innovation principles helped frame the process by which the thematic Everglades discovery portal was created at Florida International University. The path to more effective searching and co-location of digital scientific, educational and historical material related to the Everglades is contextualized through five concepts found within Dyer and Christensen's "The Innovator's DNA: Mastering the Five Skills of Disruptive Innovators" (2011). The project also aligns with Ranganathan's Laws of Library Science, especially the 4th Law: to "save the time of the user."
Abstract:
This research deals with the development of a dynamic job quotation system for printed circuit board (PCB) fabrication, which can estimate the price and completion time of a job based on customer preference and the current capacity of the shop floor. The primary purpose of building a dynamic quotation system is to maximize the company's profit by quoting an optimum lead time and a competitive price for the day-to-day orders received from different customers and original equipment manufacturers. The system was developed using an MS Access relational database. Evaluating the output of the system, it was observed that the dynamic system provided a more reliable estimate of the lead time needed for fabricating new jobs. The overall price quoted by the system was competitive, with a higher profit margin, when compared to traditional static systems. This system would therefore provide a vital link between the job quoting and scheduling systems of the firm, enabling better utilization of the available resources.
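Conceptually, such a quote couples the shop-floor backlog with the size of the new job; the formula and parameters below are illustrative assumptions, not the thesis's actual model:

```python
# Illustrative dynamic quote: lead time grows with current backlog,
# price is cost plus margin. All numbers are hypothetical.
def quote(job_hours, backlog_hours, daily_capacity_hours, unit_cost, margin=0.25):
    lead_time_days = (backlog_hours + job_hours) / daily_capacity_hours
    price = job_hours * unit_cost * (1 + margin)
    return lead_time_days, price

days, price = quote(job_hours=40, backlog_hours=120,
                    daily_capacity_hours=16, unit_cost=75)
print(f"lead time: {days:.1f} days, price: ${price:,.2f}")
```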
Abstract:
The primary purpose of this thesis was to design and develop a prototype e-commerce system in which dynamic parameters are included in the decision-making process and the execution of an online transaction. The system developed and implemented takes into account previous usage history, priority and associated engineering capabilities. It was built on a three-tiered client-server architecture, with an Internet browser as the interface. The middle-tier web server was implemented using Active Server Pages, which form the link between the client system and other servers. A relational database management system formed the data component of the three-tiered architecture; it includes a capability for data warehousing, which extracts needed information from the stored data of customers and their orders. The system organizes and analyzes the data generated during a transaction to formulate a model of a client's behavior during and after the transaction. This model is used for making decisions such as pricing and order rescheduling during the client's forthcoming transactions. The system helps, among other things, to bring predictability to the transaction execution process, which is highly desirable in the current competitive scenario.
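A sketch of the kind of decision such a behavior model might feed; the thresholds and rates are invented, not the thesis's actual rules:

```python
# Illustrative client-behavior pricing: a loyalty discount grows with
# purchase history, while rush priority carries a surcharge.
def personalized_price(list_price, past_orders, priority):
    loyalty_discount = min(past_orders * 0.01, 0.10)   # up to 10% off (assumed)
    priority_surcharge = 0.05 if priority == "rush" else 0.0
    return list_price * (1 - loyalty_discount + priority_surcharge)

print(personalized_price(100.0, past_orders=7, priority="rush"))
```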
Abstract:
This thesis explores the impact that international trade and commercial agreements had on the economic and industrial development of Cork during the first industrial revolution. From the Act of Union onwards, Cork moved to a position where its trade became increasingly reliant on Britain at the expense of the trade with the Americas and Europe that had been cultivated over the eighteenth century. The legislative underpinnings of Cork's trade, and how these changed after the Act of Union, are the focus of this research. It begins by examining the transatlantic trade of Cork city and the issues faced in the West Indies trade due to the growth of the United States. It also considers the impact of the Napoleonic Wars on Cork's trade with both the Americas and continental Europe. At the conclusion of the Napoleonic Wars, the United Kingdom negotiated treaties and agreements that would have a direct impact upon Cork's merchants. This thesis addresses the degree to which the mercantile community in Cork was able to influence policy that directly affected its trade networks. It then examines the trade between Cork and the United Kingdom and assesses the impact of the Union on the ability of Cork's merchants to effect political change. The operation of the Committee of Merchants in Cork is detailed, along with its responses to the changing nature of international trade. The thesis finishes by examining the underdevelopment of Cork's transportation networks. This work places Cork's international trade in both its national and international context and argues that Cork's mercantile community was overly reliant on protectionist legislation to further Cork's trade, as opposed to investment in industrial development. Volumetric data on the trade of Cork city has been transcribed and made available in a relational database to support the arguments made in this thesis and to facilitate future research on this subject. This database is accessible at http://modernirishvenice.com/.
Abstract:
The life cycle of software applications is in general very short, and requirements are extremely volatile. Under these conditions programmers need development tools and techniques with an extreme level of productivity. We consider code reuse the most prominent approach to solving that problem. Our proposal uses the advantages provided by Aspect-Oriented Programming to build a reusable framework capable of making both the programmer and the application oblivious as far as data persistence is concerned, thus avoiding the need to write any line of code for that concern. Besides the benefits to productivity, software quality increases. This paper describes the current state of the art, identifying the main challenge in building a complete and reusable framework for orthogonal persistence in concurrent environments with support for transactions. The present work also includes a successfully developed prototype of that framework, capable of freeing the programmer from implementing any read or write data operations. This prototype is backed by an object-oriented database and, in the future, will also use a relational database and support transactions.
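As a rough analogue of this obliviousness (using a Python class decorator as a stand-in for aspect-oriented advice, and sqlite3 instead of the prototype's object database; the whole example is an assumption, not the paper's framework), persistence can be woven in without the domain class mentioning it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE point (name TEXT PRIMARY KEY, x REAL, y REAL)")

def persistent(cls):
    """Class decorator weaving a write-through save after every attribute set,
    so the domain class contains no persistence code at all."""
    original_setattr = cls.__setattr__
    def setattr_and_save(self, name, value):
        original_setattr(self, name, value)
        if hasattr(self, "x") and hasattr(self, "y"):   # object fully built
            conn.execute("INSERT OR REPLACE INTO point VALUES (?, ?, ?)",
                         (self.name, self.x, self.y))
    cls.__setattr__ = setattr_and_save
    return cls

@persistent
class Point:
    def __init__(self, name, x, y):
        self.name = name
        self.x = x
        self.y = y

p = Point("p1", 1.0, 2.0)
p.x = 5.0          # transparently persisted by the woven advice
print(conn.execute("SELECT * FROM point").fetchall())
```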
Abstract:
In this applied research project, we seek to answer the question: What requirements must be implemented in a relational database of information security controls for Units, Establishments or Bodies of the Portuguese Army? To answer this central question, it was subdivided into four derived questions: 1. What are the main dimensions of information security at the organizational level? 2. What are the main categories of information security at the organizational level? 3. What are the main information security controls to implement in a military organization? 4. What functional requirements must be implemented in a database of information security controls for a military organization? To answer these research questions, this work is based on applied research, with the goal of developing a practical application of the knowledge acquired, materialized in a database. As to its objective, the investigation is descriptive, explanatory and exploratory: it aims to describe the main dimensions, categories and controls of information security, to explain which functional requirements must be implemented in a database of information security controls, and, finally, to carry out an exploratory study demonstrating the effectiveness of the database. The investigation follows the inductive method, proceeding from particular premises to general conclusions; that is, from document analysis and interview surveys, the necessary functional requirements will be identified and generalized to all Units, Establishments or Bodies of the Portuguese Army. As for the method of procedure, the comparative method will be used to identify the international information security management standard best suited for registration in the database. Finally, regarding research techniques, interview surveys will be used to identify the requirements to implement, and document analysis to identify the main dimensions, categories or controls needed in a database of information security controls. In a first phase of the investigation, document analysis establishes the main dimensions, categories and controls of information security to apply in Units, Establishments or Bodies of the Portuguese Army, so as to contribute to the successful management of military information security. In addition, through interviews with specialists in information security and information systems in military units, the functional requirements to implement in a database of information security controls for a military organization will be identified. In a second phase, using the revised waterfall software development model, a relational database of information security controls is to be developed in Microsoft Access for deployment in Units, Establishments or Bodies of the Portuguese Army.
Subsequently, after the database has been developed, an exploratory study is to be carried out to validate it, verifying that it meets the needs for which it was developed.
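A minimal sketch of the dimension/category/control hierarchy as relational tables; the table and column names are illustrative assumptions, not the thesis's actual design:

```python
import sqlite3

# Hypothetical schema following the dimensions -> categories -> controls
# hierarchy described above, with a simple implementation-status flag.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dimension (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL                       -- e.g. organizational, physical
);
CREATE TABLE category (
    id           INTEGER PRIMARY KEY,
    dimension_id INTEGER NOT NULL REFERENCES dimension(id),
    name         TEXT NOT NULL
);
CREATE TABLE control (
    id          INTEGER PRIMARY KEY,
    category_id INTEGER NOT NULL REFERENCES category(id),
    title       TEXT NOT NULL,
    implemented INTEGER NOT NULL DEFAULT 0   -- 0/1 status per unit
);
""")
conn.execute("INSERT INTO dimension (name) VALUES ('organizational')")
conn.execute("INSERT INTO category (dimension_id, name) VALUES (1, 'access control')")
conn.execute("INSERT INTO control (category_id, title) VALUES (1, 'review user access rights')")
print(conn.execute("""
    SELECT d.name, c.name, k.title, k.implemented
    FROM control k
    JOIN category c ON k.category_id = c.id
    JOIN dimension d ON c.dimension_id = d.id
""").fetchall())
```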