946 results for Databases as Topic
Abstract:
Training material for the Integrum database training held on 28.9.–29.9.2011.
Abstract:
The purpose of this study is to present the design process of "Off Topic", a card game that simulates online discussion forums, from the game developers' perspective. Playtesting is at the centre of the study: how the game was tested, what effect the playtests had, and how testing changed the game. The research material consists of the documents, notes, player feedback, prototypes and production materials created during the design process. Off Topic is the first product in the game publication series of the Digital Culture programme at the University of Turku. The study is applied in nature, since in addition to the written part it also includes a 504-copy print run of the card game. The main research result is a description of the stages of game design, as precise as possible, through the interaction of the elements involved. The game was designed as a cycle of playtests, prototypes and the changes made to them, further shaped by our own interests as game designers and our previous experience in the field of game design. The study also deals with the emergence of ideas and with problem solving, particularly through testing and creative thinking. It shows the development arc of the design work that led to the finished product, including failed and unfinished experiments as well as the insights that produced the final game mechanics. Describing the process is important also because future game projects can learn from it, and the study offers one model of how a game project can proceed and what stages it involves.
Abstract:
The goal of this Master's thesis is to design and implement an efficient intralogistics solution. The thesis examines warehouse management and the control of material flows, together with the processes they involve, and also seeks to make operations more efficient. The work uses a constructive research approach, in which an existing problem is first addressed with the help of theory and a solution model is then developed. The material for the thesis consists of literature and scientific articles on the subject, the target company's databases, and discussions with employees. As a result of the thesis, the company has a modern and efficient intralogistics system in use and, in line with the company's goals, its operations are more transparent. The capital tied up in inventory decreases and material handling becomes considerably more efficient; warehouse operations are expected to become over 40% more efficient compared to the current state.
Abstract:
This paper analyzes the profile of the Brazilian output in the field of multiple sclerosis from 1981 to 2004. The search was conducted through the MEDLINE and LILACS databases, selecting papers in which the term "multiple sclerosis" was defined as the main topic and "Brazil" or "Brasil" as other topics. The data were analyzed regarding the themes, the Brazilian state and institution where the papers were produced, the journals where the papers were published, the journals' impact factor, and language. The search disclosed 141 documents (91 from MEDLINE and LILACS, and 50 from LILACS only) published in 44 different journals (23 of them MEDLINE-indexed). A total of 111 documents were produced by 17 public universities, 29 by 3 private medical schools and 1 by a non-governmental organization. There were 65 original contributions, 37 case reports, 20 reviews, 6 PhD dissertations, 5 guidelines, 2 validation studies, 2 clinical trials, 2 chapters in textbooks, 1 Master of Science thesis, and 1 patient education handout. The journal impact factor ranged from 0.0217 to 6.039 (median 3.03). Of the 91 papers from MEDLINE, 65 were published in Arquivos de Neuro-Psiquiatria. More than 90% of the papers were written in Portuguese. São Paulo was the most productive state in the country, followed by Rio de Janeiro, Minas Gerais and Paraná. Eighty-two percent of the Brazilian output came from the Southeastern region.
Abstract:
The goal of this study was to explore and understand the definition of technical debt. Technical debt refers to a situation in software development where shortcuts or workarounds are taken in technical decisions. However, the original definition has since been applied to other parts of software development, and it is currently difficult to define technical debt precisely. We used a mapping study process as the research methodology for collecting literature related to the research topic. The search of scientific literature databases yielded 159 papers that referred to the original definition of technical debt, from which we retrieved 107 definitions that were split into keywords. The resulting keyword map is one of the main results of this work. In addition, the synonyms and the different types of technical debt found in the definitions were analyzed and added to the map as branches. Overall, 33 keywords or phrases, 6 synonyms and 17 types of technical debt were distinguished.
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, there exists an almighty lie detection method that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified.
The main research question of this study was: what is the applicability of veracity assessment methods that are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. the probability of detecting deception successfully
o Ease of Use, i.e. how easily the method can be applied correctly
o Time Required to apply the method reliably
o No Need for Special Equipment
o Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: what kinds of interviewing and interrogation techniques exist and how could they be used in the intelligence interview context; what kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof; and what kinds of uncertainty and other limitations are included in these methods?
Two major databases, Google Scholar and Science Direct, were used to search for and collect existing studies and other papers related to the topic. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A Multi-Criteria Analysis utilizing the Analytic Hierarchy Process was conducted to compare the scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to gain first-hand experience of the applicability of the different lie detection and veracity assessment methods.
The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that the most applicable methods are not entirely trouble-free either. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach; there are no quick gains if high accuracy and reliability are desired.
Since most of the current lie detection studies concentrate on a scenario where roughly half of the assessed people are completely truthful and the other half are liars presenting a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources. Such a test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the more modern ones that are still under development.
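The abstract above applies a Multi-Criteria Analysis built on the Analytic Hierarchy Process (AHP). As a rough illustration of how AHP turns pairwise judgements into criterion weights, here is a minimal Haskell sketch using the common geometric-mean approximation of the principal eigenvector; the function names and the comparison values are illustrative assumptions, not the study's actual data.

```haskell
-- Illustrative AHP weighting step (geometric-mean approximation).
-- The comparison matrix below is a made-up example, not the study's data.
ahpWeights :: [[Double]] -> [Double]
ahpWeights m = map (/ total) rowMeans
  where
    n        = fromIntegral (length m)
    rowMeans = [ product row ** (1 / n) | row <- m ]  -- geometric mean of each row
    total    = sum rowMeans                           -- normalize so weights sum to 1

-- Example with three criteria (say Accuracy, Ease of Use, Time Required):
-- Accuracy is judged 3x as important as Ease of Use and 5x as Time Required.
exampleWeights :: [Double]
exampleWeights = ahpWeights
  [ [1,   3,   5  ]
  , [1/3, 1,   2  ]
  , [1/5, 1/2, 1  ] ]

main :: IO ()
main = print exampleWeights
```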
Abstract:
Classical relational databases lack proper ways to manage certain real-world situations involving imprecise or uncertain data. Fuzzy databases overcome this limitation by allowing each entry in a table to be a fuzzy set, where each element of the corresponding domain is assigned a membership degree from the real interval [0, 1]. But this fuzzy mechanism becomes inappropriate for modelling scenarios where data might be incomparable. We are therefore interested in a further generalization of fuzzy databases into L-fuzzy databases, in which the characteristic function of a fuzzy set maps into an arbitrary complete Brouwerian lattice L. From the query language perspective, the fuzzy database language FSQL extends the regular Structured Query Language (SQL) by adding fuzzy-specific constructions. In addition, the L-fuzzy query language LFSQL introduces appropriate linguistic operations to define and manipulate inexact data in an L-fuzzy database. This research focuses mainly on defining the semantics of LFSQL. This requires an abstract algebraic theory that can be used to prove the properties of, and operations on, L-fuzzy relations. In our study, we show that the theory of arrow categories forms a suitable framework for this purpose, and we therefore define the semantics of LFSQL in the abstract notion of an arrow category. In addition, we implement the operations of L-fuzzy relations in Haskell and develop a parser that translates algebraic expressions into our implementation.
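To make the lattice-valued setting concrete, here is a minimal Haskell sketch that represents an L-fuzzy relation as a function into a lattice; the names (Lattice, LRel, unionR, composeR) are assumptions made for this illustration and are not the thesis code, which is built on the arrow-category framework.

```haskell
-- Sketch: L-fuzzy relations over an arbitrary lattice L.
class Lattice l where
  bot  :: l
  join :: l -> l -> l
  meet :: l -> l -> l

-- The two-element lattice recovers ordinary (crisp) relations.
instance Lattice Bool where
  bot  = False
  join = (||)
  meet = (&&)

-- An L-fuzzy relation between carriers a and b assigns each pair a degree in L.
type LRel l a b = a -> b -> l

-- Union: pointwise join of membership degrees.
unionR :: Lattice l => LRel l a b -> LRel l a b -> LRel l a b
unionR r s x y = r x y `join` s x y

-- Composition: join, over a finite middle carrier, of the meets.
composeR :: (Lattice l, Enum b, Bounded b)
         => LRel l a b -> LRel l b c -> LRel l a c
composeR r s x z =
  foldr join bot [ r x y `meet` s y z | y <- [minBound .. maxBound] ]

-- Crisp demo: the union of "less than" and "greater than" over Bool is "not equal".
main :: IO ()
main = print [ unionR lt gt x y | x <- [False, True], y <- [False, True] ]
  where
    lt, gt :: LRel Bool Bool Bool
    lt x y = not x && y
    gt x y = x && not y
```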
Abstract:
Recent scientific advances and new technological developments, most notably the advent of bio-informatics, have led to the emergence of genetic databases with particular characteristics and structures. Paralleling these developments, there has been a proliferation of ethical and legal texts aimed at the regulation of this new form of genetic database.
Abstract:
Optimized bibliographic search filters aim to make it easier to locate information in bibliographic databases, which are almost always the most abundant source of scientific evidence, and thus help support evidence-based decision making. Most of the filters available in the literature are methodological filters; to reach their full potential, however, they must be combined with filters that retrieve studies covering a particular subject. In the field of patient safety, it has been shown that deficient information retrieval can have tragic consequences, so optimized search filters covering this field could prove very useful. The present study aims to propose optimized bibliographic search filters for the field of patient safety, to evaluate their validity, and to propose a guide for developing search filters. We propose optimized filters for retrieving articles on patient safety in healthcare organizations from the Medline, Embase and CINAHL databases. These filters perform very well and are specifically built for articles whose content is explicitly linked to the field of patient safety by their authors. The extent to which their use can be generalized to other contexts depends on how the boundaries of the patient safety field are defined.
On Implementing Joins, Aggregates and Universal Quantifier in Temporal Databases using SQL Standards
Abstract:
A feasible way of implementing a temporal database is to map the temporal data model onto a conventional data model supported by a commercial database management system. Even though extensions to standard SQL have been proposed to support temporal databases, such proposals have not yet gone through the standardization process. This paper attempts to implement database operators such as aggregates and the universal quantifier for temporal databases built on top of relational database systems, using currently available SQL standards.
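As a rough illustration of the splitting idea behind temporal aggregates (not the paper's SQL formulation), the following Haskell sketch computes a temporal COUNT by cutting the timeline at every period boundary and counting the rows that cover each elementary sub-interval; the Row type and the function names are assumptions made for this illustration. In standard SQL the same splitting is typically expressed with self-joins or subqueries over the period endpoints.

```haskell
import Data.List (nub, sort)

-- A row with a half-open valid-time period [validFrom, validTo).
data Row a = Row { payload :: a, validFrom :: Int, validTo :: Int }

-- Temporal COUNT: cut the timeline at every period boundary and count the
-- rows whose period covers each elementary sub-interval.
temporalCount :: [Row a] -> [(Int, Int, Int)]   -- (from, to, count)
temporalCount rows =
  [ (s, e, length [ r | r <- rows, validFrom r <= s, validTo r >= e ])
  | (s, e) <- zip points (drop 1 points) ]
  where
    points = nub (sort (concatMap (\r -> [validFrom r, validTo r]) rows))

main :: IO ()
main = mapM_ print (temporalCount [Row "a" 1 5, Row "b" 3 8, Row "c" 6 9])
```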
Abstract:
In this paper, we develop a novel index structure to support efficient approximate k-nearest neighbor (KNN) queries in high-dimensional databases. In high-dimensional spaces, the computation of the distance (e.g., the Euclidean distance) between two points contributes a dominant portion of the overall query response time for in-memory processing. To reduce the distance computation, we first propose a structure (BID) using BIt-Difference to answer approximate KNN queries. BID employs one bit to represent each dimension of a point's feature vector, and the number of bit differences is used to prune distant points. To handle real datasets, which are typically skewed, we enhance the BID mechanism with clustering, a cluster-adapted bitcoder and dimensional weights, yielding BID⁺. Extensive experiments show that our proposed method yields significant performance advantages over existing index structures on both real-life and synthetic high-dimensional datasets.
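As a rough illustration of the bit-difference idea (not the paper's actual BID or BID⁺ structure), the following Haskell sketch builds a one-bit-per-dimension signature relative to the per-dimension mean, prunes points whose signature differs from the query signature in more than a given number of bits, and ranks the surviving candidates by Euclidean distance; all names, the choice of reference values and the pruning threshold are illustrative assumptions.

```haskell
import Data.Bits (popCount, setBit, xor)
import Data.List (sortOn)
import Data.Word (Word64)

-- One-bit-per-dimension signature: bit i is set when the point lies above
-- the reference value of dimension i (here the per-dimension mean).
-- Assumes at most 64 dimensions.
signature :: [Double] -> [Double] -> Word64
signature refs p = foldl set 0 (zip [0 ..] (zipWith (>) p refs))
  where
    set w (i, True)  = setBit w i
    set w (_, False) = w

-- Approximate KNN: prune by bit difference (Hamming distance between
-- signatures), then rank the surviving candidates by squared Euclidean
-- distance, which preserves the ordering without taking a square root.
approxKnn :: Int -> Int -> [Double] -> [[Double]] -> [[Double]]
approxKnn k maxDiff q points = take k (sortOn (sqDist q) candidates)
  where
    dims       = length q
    refs       = [ sum (map (!! i) points) / fromIntegral (length points)
                 | i <- [0 .. dims - 1] ]
    qSig       = signature refs q
    candidates = [ p | p <- points
                     , popCount (signature refs p `xor` qSig) <= maxDiff ]
    sqDist a b = sum (zipWith (\x y -> (x - y) ^ (2 :: Int)) a b)

main :: IO ()
main = print (approxKnn 2 1 [0.5, 0.5]
                [[0.1, 0.2], [0.6, 0.7], [0.9, 0.1], [0.4, 0.6]])
```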