950 results for Spatial Database Systems
Abstract:
Information systems built on databases are critical to the functioning of many parts of the information society. Continuity of data processing and high availability of information systems must be safeguarded as comprehensively as possible at all times, and it must be possible to recover from failures so that work and business can continue. The purpose of this thesis was to examine various methods for the continuous data protection of such databases, both with local server systems and with standby systems maintained over a network. With well-planned local backup, a failed database and its contents can be restored to any point in time before the failure. Standby systems, in turn, can be brought into service immediately if an entire data center fails or becomes unavailable. In addition, depending on the solution, several data centers can serve users simultaneously, balancing the system load, providing additional data processing capacity, and bringing the same data closer to its users. The thesis focuses mainly on the backup options available to information systems that use Oracle databases; these database systems are widely used both in the corporate world and in the public sector.
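The recovery guarantee described above (restoring a database to any instant before a failure) rests on replaying archived transaction logs over a backup copy, and a standby site stays current by continuously applying the same shipped logs. Below is a toy Python sketch of that principle only; it is not Oracle's actual RMAN or Data Guard tooling, and every name in it is invented.

```python
from dataclasses import dataclass, field
from itertools import count

_lsn = count(1)  # global log sequence number, standing in for a timestamp

@dataclass
class LogRecord:
    lsn: int
    key: str
    value: str

@dataclass
class ToyDatabase:
    """A toy key-value store that journals every write to an append-only redo log."""
    data: dict = field(default_factory=dict)
    redo_log: list = field(default_factory=list)

    def put(self, key, value):
        self.redo_log.append(LogRecord(next(_lsn), key, value))
        self.data[key] = value

def recover_until(redo_log, until_lsn):
    """Rebuild state by replaying the (shipped) redo log up to a chosen point,
    which is the essence of point-in-time recovery and of standby apply."""
    state = {}
    for rec in redo_log:
        if rec.lsn > until_lsn:
            break
        state[rec.key] = rec.value
    return state

db = ToyDatabase()
db.put("account", "100")
good_point = db.redo_log[-1].lsn
db.put("account", "0")                         # the "faulty" update to roll past
print(recover_until(db.redo_log, good_point))  # {'account': '100'}
```

A real standby system would stream these log records over the network and apply them continuously, so that switching over requires only replaying whatever remains in flight.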
On Implementing Joins, Aggregates and Universal Quantifier in Temporal Databases using SQL Standards
Abstract:
A feasible way of implementing a temporal database is by mapping temporal data model onto a conventional data model followed by a commercial database management system. Even though extensions were proposed to standard SQL for supporting temporal databases, such proposals have not yet come across standardization processes. This paper attempts to implement database operators such as aggregates and universal quantifier for temporal databases, implemented on top of relational database systems, using currently available SQL standards.
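To make the paper's theme concrete (temporal operators realized with nothing but standard SQL on a conventional relational engine), here is a minimal sketch using Python's sqlite3. The schema, the half-open valid-time intervals, and the data are invented for the example; the universal quantifier is encoded as the classic double NOT EXISTS (relational division), applied here to a snapshot of the valid-time relation at instant t.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE works_on (           -- valid-time relation: who works on what, and when
    emp  TEXT, proj TEXT,
    valid_from INT, valid_to INT  -- half-open interval [valid_from, valid_to)
);
CREATE TABLE project (proj TEXT PRIMARY KEY);
INSERT INTO project VALUES ('p1'), ('p2');
INSERT INTO works_on VALUES
    ('alice', 'p1', 0, 10), ('alice', 'p2', 0, 10),
    ('bob',   'p1', 0, 5);
""")

t = 3  # snapshot instant
# Universal quantifier ("works on ALL projects at time t") via double NOT EXISTS,
# the standard-SQL encoding of relational division, restricted to rows valid at t.
rows = conn.execute("""
SELECT DISTINCT w.emp
FROM works_on w
WHERE NOT EXISTS (
    SELECT 1 FROM project p
    WHERE NOT EXISTS (
        SELECT 1 FROM works_on w2
        WHERE w2.emp = w.emp AND w2.proj = p.proj
          AND w2.valid_from <= :t AND :t < w2.valid_to))
""", {"t": t}).fetchall()
print(rows)  # [('alice',)] -- bob is not on p2 at t=3
```

Temporal aggregates can be layered on the same snapshot idea, e.g. a COUNT(*) restricted to the rows whose interval contains t.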
Abstract:
The markup language XML is used to annotate documents and has established itself as the standard data interchange format. This creates the need not only to store and transfer XML documents as plain text files, but also to persist them in a better-structured form. This can be done, among other options, in dedicated XML databases or in relational databases. To date, relational databases have relied on two fundamentally different approaches: XML documents are either stored unchanged as binary or character-string objects, or they are split up so that they can be stored in normalized form in conventional relational tables (so-called "flattening" or "shredding" of the hierarchical structure). This dissertation pursues a new approach that represents a middle course between the existing solutions and takes advantage of the evolved SQL standard. SQL:2003 defines complex structured and collection types (tuples, arrays, lists, sets, multisets) that make it possible to map XML documents onto relational structures in such a way that the hierarchical structure is preserved. This offers two advantages: on the one hand, proven technologies from the relational database world remain fully available; on the other hand, the SQL:2003 types preserve the inherent tree structure of the XML documents, so that it is not necessary to reassemble them on demand through expensive joins over tuples that are usually normalized and spread across several tables. The work first clarifies fundamental questions about suitable, efficient ways of mapping XML documents onto SQL:2003-conformant data types. Building on this, a suitable, reversible transformation procedure is developed, implemented, and analyzed within a prototype application. In designing the mapping procedure, particular emphasis is placed on its usability in combination with an existing, mature relational database management system (DBMS). Since commercial DBMSs so far support SQL:2003 only incompletely, it must be examined to what extent the individual systems are suitable for the mapping procedure to be implemented. It turns out that, among the products considered, the DBMS IBM Informix offers the best support for complex structured and collection types. To better assess the performance of the procedure, the work measures the implementation's runtime as well as its main-memory and database-storage requirements, and evaluates the results.
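The central idea, mapping an XML document onto nested structured and collection types so that the hierarchy survives and the mapping stays reversible, can be sketched in a few lines of Python, with tuples and lists standing in for SQL:2003 ROW and LIST types. This is a simplified stand-in (it ignores mixed content and element tails), not the dissertation's actual mapping.

```python
import xml.etree.ElementTree as ET

def to_row(elem):
    """Map an XML element onto a nested (tag, attrs, text, children) row,
    mirroring how SQL:2003 ROW and LIST types can preserve the hierarchy."""
    return (elem.tag, dict(elem.attrib), (elem.text or "").strip(),
            [to_row(child) for child in elem])  # LIST keeps document order

def from_row(row):
    """Inverse mapping, demonstrating that the transformation is reversible."""
    tag, attrs, text, children = row
    elem = ET.Element(tag, attrs)
    elem.text = text or None
    elem.extend(from_row(c) for c in children)
    return elem

doc = ET.fromstring("<book id='1'><title>XML und SQL:2003</title></book>")
row = to_row(doc)
assert ET.tostring(from_row(row)) == ET.tostring(doc)  # round-trips intact
```

Because the child list preserves document order inside the parent row, no joins are needed to reassemble the tree, which is precisely the advantage the dissertation claims for the SQL:2003 mapping.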
Abstract:
Building software for Web 2.0 and the Social Media world is non-trivial. It requires understanding how to create infrastructure that will survive at Web scale, meaning that it may have to deal with tens of millions of individual items of data and cope with hits from hundreds of thousands of users every minute. It also requires you to build tools that will be part of a much larger ecosystem of software and application families. In this lecture we will look at how traditional relational database systems have tried to cope with the scale of Web 2.0, and explore the NoSQL movement that seeks to simplify data storage and create ultra-swift data systems at the expense of immediate consistency. We will also look at the range of APIs, libraries and interoperability standards that are trying to make sense of the Social Media world, and ask what trends we might be seeing emerge.
Abstract:
Some examples from the book: Connolly, T. M. and Begg, C. E. (2005). Database Systems: A Practical Approach to Design, Implementation, and Management. Harlow, England; New York: Addison-Wesley.
Abstract:
Background: Sexual risk behaviors associated with poor information on sexuality have contributed to major public health problems in the area of sexual and reproductive health in teenagers and young adults in Colombia. Objective: To measure the perception of changes in sexual and reproductive risk behavior after the use of a teleconsultation service via mobile devices in a sample of young adults. Methods: A before-and-after observational study was designed, in which a mobile application for inquiring about sexual and reproductive health was developed. The perception of changes in sexual and reproductive health risk behaviors in a sample of young adults after the use of the application was measured using the validated survey “Family Health International (FHI) – Behavioral Surveillance Survey (BSS) – Survey for Adults between 15 to 40 Years”. Non-probabilistic convenience recruitment was undertaken through the study's web page. Participants answered the survey online before and after the use of the mobile application over a six-month period (intervention). For the inferential analysis, data were divided into three groups (dichotomous data, discrete quantitative data, and ordinal data) to compare the answers between the first and the second survey. For all tests, a 95% confidence level was established. For dichotomous data, the chi-squared test was used; for quantitative data, Student's t-test; and for ordinal data, the Mann-Whitney-Wilcoxon test. Results: A total of 257 subjects were registered in the study and met the selection criteria. The pre-intervention survey was answered by 232 subjects, and 127 completely answered the post-intervention survey, of whom 54.3% did not use the application, leaving an effective population of 58 subjects for analysis. Of these, 53% (n=31) were female and 47% (n=27) were male. The mean age was 21 years, with a range of 18 to 40 years. The differences between the answers on the first and the second survey were not statistically significant. The main risk behaviors identified in the population were homosexual relations, non-use of condoms, sexual relations with non-regular and commercial partners, the use of psychoactive substances, and ignorance about the symptoms of sexually transmitted diseases and HIV transmission. Conclusions: Although there were no differences between the pre- and post-intervention results, the study revealed different risk behaviors among the participating subjects. These findings highlight the importance of promoting educational strategies on this matter and of providing patients with easily accessible tools offering reliable health information.
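The analysis plan pairs each data type with a test: chi-squared for dichotomous variables, Student's t-test for quantitative ones, and Mann-Whitney-Wilcoxon for ordinal ones. Below is a sketch of that mapping with scipy, on invented stand-in responses; the study's actual data is not reproduced here.

```python
from scipy import stats

# Hypothetical before/after responses; the real survey data is not public.
condom_use_before = [1, 0, 1, 1, 0, 1, 0, 0]   # dichotomous (yes/no)
condom_use_after  = [1, 1, 1, 1, 0, 1, 1, 0]
partners_before   = [2, 1, 3, 1, 4, 2, 1, 2]   # discrete quantitative
partners_after    = [1, 1, 2, 1, 3, 2, 1, 2]
attitude_before   = [3, 2, 4, 3, 2, 5, 3, 2]   # ordinal (Likert-style)
attitude_after    = [4, 3, 4, 3, 3, 5, 3, 3]

# Dichotomous: chi-squared test on the 2x2 table of the two survey waves.
table = [[condom_use_before.count(1), condom_use_before.count(0)],
         [condom_use_after.count(1),  condom_use_after.count(0)]]
chi2, p_chi, _, _ = stats.chi2_contingency(table)

# Quantitative: Student's t-test; ordinal: Mann-Whitney-Wilcoxon.
t_stat, p_t = stats.ttest_ind(partners_before, partners_after)
u_stat, p_u = stats.mannwhitneyu(attitude_before, attitude_after)

print(f"chi2 p={p_chi:.3f}, t-test p={p_t:.3f}, Mann-Whitney p={p_u:.3f}")
```

A p-value above 0.05 in each test corresponds to the paper's finding that the pre/post differences were not statistically significant at the 95% confidence level.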
Abstract:
Changes in mature forest cover amount, composition, and configuration can be of significant consequence to wildlife populations. The response of wildlife to forest patterns is of concern to forest managers because it lies at the heart of such competing approaches to forest planning as aggregated vs. dispersed harvest block layouts. In this study, we developed a species assessment framework to evaluate the outcomes of forest management scenarios on biodiversity conservation objectives. Scenarios were assessed in the context of a broad range of forest structures and patterns that would be expected to occur under natural disturbance and succession processes. Spatial habitat models were used to predict the effects of varying degrees of mature forest cover amount, composition, and configuration on habitat occupancy for a set of 13 focal songbird species. We used a spatially explicit harvest scheduling program to model forest management options and simulate future forest conditions resulting from alternative forest management scenarios, and used a process-based fire-simulation model to simulate future forest conditions resulting from natural wildfire disturbance. Spatial pattern signatures were derived for both habitat occupancy and forest conditions, and these were placed in the context of the simulated range of natural variation. Strategic policy analyses were set in the context of current Ontario forest management policies. This included use of sequential time-restricted harvest blocks (created for Woodland caribou (Rangifer tarandus) conservation) and delayed harvest areas (created for American marten (Martes americana atrata) conservation). This approach increased the realism of the analysis, but reduced the generality of interpretations. We found that forest management options that create linear strips of old forest deviate the most from simulated natural patterns, and had the greatest negative effects on habitat occupancy, whereas policy options that specify deferment and timing of harvest for large blocks helped ensure the stable presence of an intact mature forest matrix over time. The management scenario that focused on maintaining compositional targets best supported biodiversity objectives by providing the composition patterns required by the 13 focal species, but this scenario may be improved by adding some broad-scale spatial objectives to better maintain large blocks of interior forest habitat through time.
Abstract:
The acquisition and updating of Geographic Information System (GIS) data are typically carried out using aerial or satellite imagery. Since new roads are usually connected to the georeferenced, pre-existing road network, the extraction of pre-existing road segments may provide good hypotheses for the updating process. This paper addresses the problem of extracting georeferenced roads from images and formulating hypotheses for the presence of new road segments. Our approach proceeds in three steps. First, salient points are identified and measured along roads from a map or GIS database, either by an operator or by an automatic tool. These salient points are then projected onto image space, and the errors inherent in this process are calculated. In the second step, the georeferenced roads are extracted from the image using a dynamic programming (DP) algorithm, with the projected salient points and the corresponding error estimates as input. Finally, the road center axes extracted in the previous step are analyzed to identify potential new segments attached to the extracted, pre-existing ones. This analysis uses a combination of edge-based and correlation-based algorithms. In this paper we present our approach and early implementation results.
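The abstract does not spell out its DP formulation, so the following is only a generic sketch of how dynamic programming is commonly used to trace a road-like curve through a cost image: a Viterbi-style pass that trades off local image response against path smoothness. The cost function, parameters, and toy image are all hypothetical.

```python
import numpy as np

def extract_road(cost, max_step=2, smooth=0.5):
    """Trace a road across an image column by column with dynamic programming.

    cost[r, c] should be low where the image response looks road-like; the
    smooth term penalizes sharp vertical jumps between adjacent columns.
    """
    rows, cols = cost.shape
    acc = cost.copy()                          # accumulated cost table
    back = np.zeros((rows, cols), dtype=int)   # backpointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_step), min(rows, r + max_step + 1)
            prev = np.arange(lo, hi)
            cand = acc[lo:hi, c - 1] + smooth * np.abs(prev - r)
            best = int(np.argmin(cand))
            acc[r, c] += cand[best]
            back[r, c] = lo + best
    path = [int(np.argmin(acc[:, -1]))]        # cheapest endpoint, last column
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]                          # road row index per column

# Toy cost image: a cheap diagonal band standing in for a road response.
img = np.ones((20, 30))
for c in range(30):
    img[min(19, 5 + c // 3), c] = 0.0
print(extract_road(img))
```

In the paper's setting, the projected salient points and their error estimates would constrain where such a path is allowed to run, rather than searching the whole image.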
Abstract:
The present work begins with a review of the literature on bit selection methods for oil well drilling. A proposal for the structure and organization of a drilling database and a knowledge base is then described. Previous studies supplied the principal elements of the process of selecting bits for a proposed well. The procedure was implemented as a computer system for the selection of tricone bits. A drilling-bit database was assembled from several wells drilled in three different Brazilian sedimentary basins, and knowledge was collected from drilling engineers in different fields, both electronically and through interviews. The tests that were carried out indicate that the selection process gives good results.
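The abstract does not detail how the collected engineering knowledge is represented, so the following Python sketch is only a guess at the general shape of such a system: a naive forward-chaining rule base for bit recommendation, with invented formation attributes and recommendations.

```python
# Hypothetical rule base; the work's actual rules and attributes are not given.
RULES = [
    {"if": lambda f: f["rock_strength"] == "soft",
     "then": "tricone, long soft-formation teeth"},
    {"if": lambda f: f["rock_strength"] == "medium",
     "then": "tricone, medium-formation teeth"},
    {"if": lambda f: f["rock_strength"] == "hard" and f.get("abrasive"),
     "then": "tricone, tungsten-carbide inserts, gauge protection"},
    {"if": lambda f: f["rock_strength"] == "hard",
     "then": "tricone, tungsten-carbide inserts"},
]

def select_bit(formation):
    """Return the first recommendation whose condition matches the formation
    description (naive forward chaining over an ordered rule list)."""
    for rule in RULES:
        if rule["if"](formation):
            return rule["then"]
    return "no recommendation"

print(select_bit({"rock_strength": "hard", "abrasive": True}))
```

In the described system, the formation attributes would come from the drilling database for the basin in question rather than being typed in by hand.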
Abstract:
This paper describes a data mining environment for knowledge discovery in bioinformatics applications. The system has a generic kernel that implements the mining functions to be applied to input primary databases of biomedical information, organized in a warehouse architecture. Both supervised and unsupervised classification can be implemented within the kernel and applied to data extracted from the primary database, with the results stored in a complex-object database for knowledge discovery. The kernel also includes a specific high-performance library for designing and applying the mining functions on parallel machines. Experimental results obtained by applying the kernel functions are reported.
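As an illustration of the kernel idea (supervised and unsupervised mining functions applied to records drawn from a primary store, with results kept for later knowledge discovery), here is a minimal scikit-learn sketch on synthetic data. The actual kernel, its schemas, and its parallel library are not described in enough detail to reproduce, so everything below is illustrative.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for feature vectors extracted from the primary biomedical database.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised classification, one of the two kernel function families.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised classification (clustering) applied to the same records.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# The "knowledge" to persist in the result store: fitted models plus labels.
results = {"classifier": clf, "cluster_labels": labels}
```

The warehouse/result-store split in the paper corresponds to keeping the extracted feature vectors separate from objects like `results`, which are written back for later inspection.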
Abstract:
This paper presents the overall methodology used to encode both the standard language-independent conceptual-semantic relations of the Brazilian Portuguese WordNet (WordNet.Br) (hyponymy, co-hyponymy, meronymy, cause, and entailment) and the so-called cross-lingual conceptual-semantic relations between different wordnets. After contextualizing the project and outlining the current lexical database structure and statistics, it describes the WordNet.Br editing GUI, which was designed to aid the linguist in building synsets, selecting sample sentences from corpora, writing synset concept glosses, and encoding both the language-independent conceptual-semantic relations and the cross-lingual conceptual-semantic relations between WordNet.Br and Princeton WordNet.
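A toy sketch of the data structures involved: synsets linked by a language-independent relation (hyponymy) inside one wordnet, and by a cross-lingual link to another. All identifiers, words, and glosses below are invented, not taken from the WordNet.Br database.

```python
from dataclasses import dataclass, field

@dataclass
class Synset:
    words: list                                          # the synonym set
    gloss: str
    hyponyms: list = field(default_factory=list)         # conceptual-semantic relation
    cross_lingual: list = field(default_factory=list)    # links into another wordnet

animal = Synset(["animal"], "a living organism that feeds on organic matter")
cachorro = Synset(["cachorro", "cão"], "(Portuguese gloss would go here)")
dog = Synset(["dog"], "a domesticated carnivorous mammal")

animal.hyponyms.append(cachorro)     # hyponymy inside WordNet.Br
cachorro.cross_lingual.append(dog)   # cross-lingual link to Princeton WordNet

for hypo in animal.hyponyms:
    print(hypo.words, "->", [s.words for s in hypo.cross_lingual])
```

The editing GUI described in the paper essentially lets a linguist create such synset objects and draw both kinds of links between them.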
Abstract:
ArcTech is software being developed, applied, and improved with the aim of becoming an efficient sensitization tool to support the teaching-learning process in Architecture courses. The application initially deals with the thermal comfort of buildings. The output generated by the software shows whether a student is able to produce a pleasant environment, in terms of thermal sensation, over a 24-hour period. Although the very same features can be found in fully developed commercial software, ArcTech was created because of the need for simple tools to evaluate specific topics along the courses and for a system flexible enough to be adapted by the instructor. The first part of ArcTech, dedicated to data management, was developed using the visual programming language Delphi 7 and the Firebird database management system. The second part contains the parameters that can be changed by the system administrator and those related to project visualization. The system interface, in which the student learns how to implement and evaluate project alternatives, was built using Macromedia Flash. The software was used by undergraduate students, and its interface proved easy to learn and easy to teach with.
Abstract:
The need to represent both semantics and common sense, and to organize them in a lexical database or knowledge base, has motivated the development of large projects such as the wordnets, CYC, and Mikrokosmos. Besides these generic bases, another approach is the construction of ontologies for specific domains. Among the advantages of this approach is the possibility of broader and more detailed coverage of a specific domain and its terminology. Domain ontologies are important resources in several tasks related to language processing, especially those related to information retrieval and extraction over textual bases. Information retrieval and even question-answering systems can benefit from the domain knowledge represented in an ontology. Besides embracing the terminology of the field, the ontology makes the relationships among the terms explicit.
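A toy illustration of the closing point, that an ontology's explicit term relationships can directly serve information retrieval, for instance through query expansion. The terms and relations below are invented.

```python
# A tiny domain ontology as an explicit term graph; "narrower" plays the role
# of a domain-specific specialization relation.
ONTOLOGY = {
    "database": {"is_a": ["software"],
                 "narrower": ["relational database", "spatial database"]},
    "relational database": {"is_a": ["database"], "narrower": []},
    "spatial database": {"is_a": ["database"], "narrower": []},
}

def expand_query(term):
    """Expand a query term with its narrower terms; this is possible only
    because the ontology represents relationships among terms explicitly."""
    entry = ONTOLOGY.get(term, {})
    return [term] + entry.get("narrower", [])

print(expand_query("database"))
# ['database', 'relational database', 'spatial database']
```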