957 results for database systems
Abstract:
This paper proposes that, rather than sitting on silos of data, historians who utilise quantitative methods should endeavour to make their data accessible through databases, and treat this as a new form of bibliographic entry. Of course, in many instances historical data does not lend itself easily to the creation of such data sets. With this in mind, some of the issues involved in normalising raw historical data are examined with reference to current work on nineteenth-century Irish trade. These issues encompass (but are not limited to) measurement systems, geographic locations, and potential problems that may arise in attempting to unify disaggregated sources. The paper discusses the need for a concerted effort by historians to define what is required of digital resources for them to be considered accurate, and the extent to which the normalisation requirements of database systems may conflict with the desire for accuracy. Many of the issues a historian encounters in engaging with databases are common to all historians, and there would be merit in having defined standards for referencing items such as people, places, and measurements.
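The normalisation problem the abstract raises can be made concrete with a small sketch: converting heterogeneous historical units to a common base before loading records into a database. The unit names and conversion factors below are illustrative assumptions (standard imperial equivalents), not values taken from the Irish trade sources the paper discusses.

```python
# Hypothetical sketch: normalising mixed historical units to kilograms
# before database entry. Factors are standard imperial conversions.
UNIT_TO_KG = {
    "cwt": 50.8023,   # imperial hundredweight
    "ton": 1016.047,  # imperial (long) ton
    "lb": 0.4536,
}

def normalise(quantity, unit):
    """Return quantity converted to kilograms; fail loudly on unknown units."""
    try:
        return quantity * UNIT_TO_KG[unit]
    except KeyError:
        raise ValueError(f"no conversion defined for unit: {unit!r}")

# Example ledger entries: (quantity, unit as recorded in the source)
records = [(3, "ton"), (12, "cwt")]
normalised = [normalise(q, u) for q, u in records]
```

Failing loudly on unknown units, rather than guessing, reflects the accuracy concern the paper raises: a normalisation step should surface ambiguous source measurements instead of silently absorbing them.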
Abstract:
The selection of the theme, and the resulting work from which the title of this dissertation emerges, arose from learning of the needs of the members of project FCOMP-01-0124-FEDER-007360 - Inquirir da Honra: Comissários do Santo Ofício e das Ordens Militares em Portugal (1570 - 1773) with regard to certain objectives concerning the Genealogy of the network of Commissioners. The manual working system used until now contained a considerable amount of complex information, describing in detail the characteristics not only of the individuals but also of everything associated with them, including whom they related to and how. The main objective was thus to answer the question: "How can all the genealogical information gathered on paper be managed and analysed on a computer, using technologies that are, on the one hand, efficient and, on the other, easy for users to learn?" To answer this question, it was first necessary to understand the universe of Genealogy and how it operates, so that an entire application could subsequently be designed and shaped to the user's needs. The application, however, does not merely allow the user to manage the data through a MySQL database management system and to carry out "traditional" genealogical analysis in programs such as Personal Ancestral File. Above all, the aim is for the user to engage with it and answer the questions "of the present", in the hope that the application itself will motivate new questions, through the integration of XML technology and the Geographic Information System Google Earth, thus allowing genealogical information to be analysed on the world map.
ABSTRACT: The choice of this dissertation's subject stems from the need to accomplish certain goals related to the Genealogy of the network of Inquisition Commissioners, on behalf of the members of project FCOMP-01-0124-FEDER-007360 - Inquirir da Honra: Comissários do Santo Ofício e das Ordens Militares em Portugal (1570 - 1773) - To Inquire Honor: Inquisition Commissioners and the Military Orders in Portugal. The manual work system used until now contained a considerable amount of complex information, describing in detail the characteristics not only of individuals but also of what is associated with them, including who they related to and how. The main goal was thus to answer: "How could it be possible to manage and examine all the genealogical data registered on paper and allow it to be analysed on a computer, by means of technologies that are, on the one hand, efficient and, on the other, easy for users to learn?" In order to answer that question, it was first necessary to understand the universe of Genealogy, so that an entire application could afterwards be outlined and shaped to user needs. Nevertheless, the application does not focus only on allowing the user to carry out the system's management, using the MySQL database management system, and on allowing "traditional" genealogical analysis in programs such as Personal Ancestral File. Above all, the user should engage with it and answer the key questions of "the present", in the hope that the application itself serves as motivation to raise new questions, through the integration of XML technology and the Geographic Information System Google Earth, thus allowing the analysis of genealogical information on the world map.
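The Google Earth integration described above typically works by exporting records as KML, the XML dialect Google Earth reads. The sketch below shows the general idea under stated assumptions: the field names and the helper `record_to_kml` are hypothetical, and the dissertation's actual MySQL schema and export path are not shown in the abstract.

```python
# Illustrative sketch: serialising one genealogical record as a KML
# Placemark so Google Earth can plot it. Field names are assumptions.
import xml.etree.ElementTree as ET

def record_to_kml(name, lat, lon):
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = name
    point = ET.SubElement(pm, "Point")
    # KML coordinates are written "longitude,latitude"
    ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

# Hypothetical record: a Commissioner located in Lisbon
kml_text = record_to_kml("Comissário (example)", 38.7223, -9.1393)
```

Saving `kml_text` to a `.kml` file and opening it in Google Earth would place the record on the world map, which is the kind of analysis the dissertation aims to enable.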
Abstract:
This paper describes the design and implementation of ADAMIS (‘A database for medical information systems’). ADAMIS is a relational database management system for a general hospital environment. Apart from the usual database (DB) facilities of data definition and data manipulation, ADAMIS supports a query language called the ‘simplified medical query language’ (SMQL) which is completely end-user oriented and highly non-procedural. Other features of ADAMIS include provision of facilities for statistics collection and report generation. ADAMIS also provides adequate security and integrity features and has been designed mainly for use on interactive terminals.
Abstract:
ReefBase, a global database of coral reef systems and their resources, was initiated at the International Center for Living Aquatic Resources Management (ICLARM), Philippines, in November 1993. The CEC has provided funding for the first two years, and the database was developed in collaboration with the World Conservation Monitoring Centre in Cambridge, UK, as well as other national, regional, and international institutions. The ReefBase project activities, and what ICLARM will do to accomplish the project objectives, are briefly discussed.
Abstract:
This paper describes work performed as part of the U.K. Alvey sponsored Voice Operated Database Inquiry System (VODIS) project in the area of intelligent dialogue control. The principal aims of the work were to develop a habitable interface for the untrained user; to investigate the degree to which dialogue control can be used to compensate for deficiencies in recognition performance; and to examine the requirements on dialogue control for generating natural speech output. A data-driven methodology is described based on the use of frames in which dialogue topics are organized hierarchically. The concept of a dynamically adjustable scope is introduced to permit adaptation to recognizer performance, and the use of historical and hierarchical contexts is described to facilitate the construction of contextually relevant output messages. © 1989.
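The idea of a dynamically adjustable scope over a frame hierarchy can be illustrated with a toy sketch: when recogniser confidence drops, the dialogue controller narrows the active vocabulary to the current topic frame alone. All frame names, words, and the threshold below are invented for illustration; the actual VODIS design is given in the paper itself.

```python
# Toy sketch of frame-based dialogue control with adjustable scope.
# Topics are organised hierarchically; scope widens only when the
# recogniser is performing well. All names/values are illustrative.
FRAMES = {
    "journey": {"words": {"train", "depart", "arrive"},
                "subtopics": ["time", "fare"]},
    "time":    {"words": {"morning", "evening"}, "subtopics": []},
    "fare":    {"words": {"single", "return"}, "subtopics": []},
}

def active_vocabulary(topic, confidence, threshold=0.6):
    """Return the word set the recogniser should currently accept."""
    words = set(FRAMES[topic]["words"])
    if confidence >= threshold:
        # Reliable recognition: widen scope to subtopic frames too.
        for sub in FRAMES[topic]["subtopics"]:
            words |= FRAMES[sub]["words"]
    return words

wide = active_vocabulary("journey", confidence=0.9)
narrow = active_vocabulary("journey", confidence=0.3)
```

Narrowing the scope shrinks the recogniser's search space, which is one way dialogue control can compensate for poor recognition performance, as the abstract describes.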
Abstract:
Our ability to identify, acquire, store, enquire on and analyse data is increasing as never before, especially in the GIS field. Technologies are becoming available to manage a wider variety of data and to make intelligent inferences on that data. The mainstream arrival of large-scale database engines is not far away. The experience of using the first such products tells us that they will radically change data management in the GIS field.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
One of the most demanding needs in cloud computing is that of having scalable and highly available databases. One way to address these needs is to leverage the scalable replication techniques developed in the last decade, which increase both the availability and scalability of databases. Many replication protocols have been proposed during that period; the main research challenge was how to scale under the eager replication model, the one that provides consistency across replicas. In this paper, we examine three eager database replication systems available today: Middle-R, C-JDBC and MySQL Cluster, using the TPC-W benchmark. We analyse their architectures and replication protocols, and compare their performance both in the absence of failures and when failures occur.
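The eager replication model the paper examines can be reduced to one principle: a write commits only after every replica has applied it, so all replicas always hold the same committed state. The sketch below shows only this principle; the real systems compared (Middle-R, C-JDBC, MySQL Cluster) use far more elaborate protocols with group communication, certification, and failure handling.

```python
# Minimal sketch of eager (synchronous) replication: the commit is
# acknowledged only after all replicas have applied the write.
class Replica:
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

class EagerReplicatedDB:
    def __init__(self, n_replicas):
        self.replicas = [Replica() for _ in range(n_replicas)]

    def write(self, key, value):
        # Eager model: update every replica before acknowledging.
        for r in self.replicas:
            r.apply(key, value)
        return "committed"

    def read(self, key):
        # Any replica can serve reads: all hold identical committed state.
        return self.replicas[0].data[key]

db = EagerReplicatedDB(n_replicas=3)
db.write("x", 42)
```

The cost of this consistency is visible even in the sketch: every write touches every replica, which is exactly why scaling under the eager model was the open research challenge the abstract mentions.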
Abstract:
One of the most demanding needs in cloud computing and big data is that of having scalable and highly available databases. One way to address these needs is to leverage the scalable replication techniques developed in the last decade, which increase both the availability and scalability of databases. Many replication protocols have been proposed during that period; the main research challenge was how to scale under the eager replication model, the one that provides consistency across replicas. This thesis provides an in-depth study of three eager database replication systems based on relational systems: Middle-R, C-JDBC and MySQL Cluster, and three systems based on In-Memory Data Grids: JBoss Data Grid, Oracle Coherence and Terracotta Ehcache. It explores these systems in terms of their architecture, replication protocols, fault tolerance and various other functionalities, and provides an experimental analysis using state-of-the-art benchmarks: TPC-C and TPC-W (for the relational systems) and the Yahoo! Cloud Serving Benchmark (for the In-Memory Data Grids). The thesis also discusses three graph databases, Neo4j, Titan and Sparksee, in terms of their architecture and transactional capabilities, and highlights the weaker transactional consistency these systems provide. Finally, it discusses an implementation of snapshot isolation in the Neo4j graph database to provide stronger isolation guarantees for transactions.
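The snapshot isolation the thesis implements rests on multi-versioning: each transaction reads the newest version committed at or before its start timestamp, so later concurrent commits stay invisible to it. The thesis does this inside Neo4j; the sketch below shows only the underlying idea on a plain key-value store, with all names chosen for illustration.

```python
# Hedged sketch of snapshot isolation via multi-versioned values:
# a transaction sees only versions committed before its snapshot.
class MVStore:
    def __init__(self):
        self.versions = {}  # key -> list of (commit_ts, value), ts ascending
        self.clock = 0      # logical commit timestamp

    def commit_write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))

    def begin(self):
        # A new transaction's snapshot is the current commit timestamp.
        return self.clock

    def read(self, key, snapshot_ts):
        # Newest version whose commit timestamp is <= the snapshot.
        visible = [v for ts, v in self.versions.get(key, []) if ts <= snapshot_ts]
        return visible[-1] if visible else None

store = MVStore()
store.commit_write("n", "v1")
snap = store.begin()           # transaction T starts here
store.commit_write("n", "v2")  # a concurrent later commit
```

After the concurrent commit, a read by T (`store.read("n", snap)`) still returns "v1", while a transaction started afterwards sees "v2": each transaction observes a consistent snapshot, which is the stronger guarantee the thesis brings to Neo4j.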