14 results for Databases as Topic
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Summary: Web-based bibliographic databases and subject directories in the agricultural and food sciences - a Finnish information seeker's perspective
Abstract:
This thesis is about the detection of local image features. The research topic belongs to the wider area of object detection, a machine vision and pattern recognition problem in which an object must be detected (located) in an image. State-of-the-art object detection methods often divide the problem into separate interest point detection and local image description steps, but in this thesis a different technique is used, leading to higher-quality image features which enable more precise localization. Instead of using interest point detection, the landmark positions are marked manually. The quality of the image features is therefore not limited by the interest point detection phase, and the learning of image features is simplified. The approach combines interest point detection and local description into a single detection phase. Computational efficiency of the descriptor is therefore important, ruling out many of the commonly used descriptors as too heavy. Multiresolution Gabor features are the main descriptor in this thesis, and improving their efficiency is a significant part of the work. Actual image features are formed from descriptors by using a classifier, which can then recognize similar-looking patches in new images. The main classifier is based on Gaussian mixture models. The classifiers are used in a one-class configuration, where there are only positive training samples and no explicit background class. The local image feature detection method has been tested with two freely available face detection databases and a proprietary license plate database, and its localization performance in these experiments was very good. Other applications of the same underlying techniques are also presented, including object categorization and fault detection.
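As a rough illustration of the one-class classification idea described in this abstract, the sketch below fits a Gaussian mixture model to positive descriptor samples only and accepts new patches whose likelihood exceeds a threshold. The descriptor data, the number of mixture components and the threshold quantile are hypothetical placeholders, not the thesis's actual configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical training data: Gabor descriptors extracted from manually
# marked landmark patches (positive samples only, no background class).
rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(500, 40))  # 500 patches, 40-dim descriptors

# Fit a GMM to the positive class only.
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(train_descriptors)

# Choose an acceptance threshold from the training log-likelihoods,
# e.g. keep the densest 95% of the positive samples.
threshold = np.quantile(gmm.score_samples(train_descriptors), 0.05)

def is_landmark(descriptor: np.ndarray) -> bool:
    """Accept a patch if its likelihood under the positive-class GMM
    exceeds the threshold; everything else is treated as background."""
    return gmm.score_samples(descriptor[None, :])[0] >= threshold

# Scan candidate patches from a new image and keep the likely landmarks.
candidates = rng.normal(size=(100, 40))
matches = [i for i, d in enumerate(candidates) if is_landmark(d)]
```

Because there is no background class, the threshold controls the trade-off between missed landmarks and false detections directly on the positive-class density.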
Abstract:
In the present dissertation, multilingual thesauri were approached as cultural products and the focus was twofold: on the empirical level, the focus was placed on the translatability of certain British-English social science indexing terms into the Finnish language and culture at the concept, term and indexing term levels; on the theoretical level, the focus was placed on the aim of translation and on the concept of equivalence. In accordance with modern communicative and dynamic translation theories, the interest was in the human dimension. The study is qualitative. In this study, equivalence was understood in a similar way to how dynamic, functional equivalence is commonly understood in translation studies. Translating was seen as a decision-making process, where a translator often has different kinds of possibilities to choose from in order to fulfil the function of the translation. Accordingly, and as a starting point for the construction of the empirical part, the function of the source text was considered to be the same as or similar to the function of the target text, that is, a functional thesaurus in both the source and target contexts. Further, the study approached the challenges of multilingual thesaurus construction from the perspectives of semantics and pragmatics. In semantic analysis the focus was on what the words conventionally mean, and in pragmatics on the 'invisible' meaning - how we recognise what is meant even when it is not actually said (or written). Languages, and the ideas expressed by languages, are created mainly in accordance with the expressional needs of the surrounding culture, and thesauri were considered to reflect several subcultures and consequently the discourses which represent them. The research material consisted of different kinds of potential discourses: dictionaries, database records, thesauri, Finnish versus British social science researchers, Finnish versus British indexers, simulated indexing tasks with five articles, and Finnish versus British thesaurus constructors. In practice, the professional background of the last two groups was rather similar. It became clear that each material type had its own characteristics, although naturally they were not entirely separate from each other. It is further noteworthy that the different types and origins of research material were not used as true comparison pairs; the aim of the triangulation of methods and material was to gain a holistic view. The general research questions were: 1. Can differences be found between Finnish and British discourses regarding family roles as thesaurus terms, and if so, what kinds of differences, and what are the implications for multilingual thesaurus construction? 2. What is pragmatic indexing term equivalence? The first question studied how the same topic (family roles) was represented in different contexts and by different users, and further focused on how the possible differences were handled in multilingual thesaurus construction. The second question built on the findings of the first, and answered the final question of what kinds of factors should be considered when defining translation equivalence in multilingual thesaurus construction. The study used multiple cases and several data collection and analysis methods, aiming at theoretical replication and complementarity.
The empirical material and analysis consisted of focused interviews (with Finnish and British social scientists, thesaurus constructors and indexers), simulated indexing tasks with Finnish and British indexers, semantic component analysis of dictionary definitions and translations, co-word analysis of datasets retrieved from databases, and discourse analysis of thesauri. As a terminological starting point, family roles was selected as the topic and case. The results were clear: 1) It was possible to identify different discourses, and subdiscourses also existed. For example, within the group of social scientists the orientation to qualitative versus quantitative research had an impact on the way they reacted to the studied words and discourses, and indexers placed more emphasis on information seekers whereas thesaurus constructors approached the construction problems from a more material-based standpoint. The differences between the specialist groups (the social scientists, the indexers and the thesaurus constructors) were often greater than those between the geo-cultural groups (Finnish versus British). The differences arose from different translation aims, diverging expectations for multilingual thesauri, and a variety of practices. For multilingual thesaurus construction this poses severe challenges. The clearly ambiguous concept of a multilingual thesaurus, as well as the different construction and translation strategies, should be considered more precisely in order to shed light on focus and equivalence types, which are clearly not self-evident. The research also revealed the close connection between the aims of multilingual thesauri and pragmatic indexing term equivalence. 2) Pragmatic indexing term equivalence is very much context-dependent. Although thesaurus term equivalence is defined and standardised in the field of library and information science (LIS), it is not understood in one established way, and current LIS tools do not provide sufficient analytical means for constructing and studying different kinds of multilingual thesauri and their indexing term equivalence. The tools provided by translation science proved more useful both practically and theoretically; in particular, the division of the different meanings of a word provided a useful tool for analysing pragmatic equivalence, which often differs from the ideal model represented in the thesaurus construction literature. The study thus showed that the variety of different discourses should be acknowledged, that there is a need for the operationalisation of new types of multilingual thesauri, and that the factors influencing pragmatic indexing term equivalence should be discussed more precisely than is traditionally done.
Abstract:
The aim of this study is to explore how a new concept appears in scientific discussion and research, how it diffuses to other fields and out of the scientific communities, and how networks are formed around the concept. Text and terminology catch the interest of a reader in the digital environment. Texts create networks where the terminology used depends on the ideas, views and paradigms of the field. This study is based mainly on bibliographic data. Materials for the bibliometric studies have been collected from different databases. The databases are also evaluated, and their quality and coverage are discussed. The thesauri of the databases selected for more in-depth study have also been evaluated. The selected material has been used to study how long, and in which ways, an innovative publication, which can be seen as a milestone in a specific field, influences the research. The concept chosen as the topic for this research is Social Capital, because it has been a popular concept in different scientific fields as well as in everyday speech and the media. It seemed to be a 'fashion concept' that appeared in different situations around the turn of the millennium. The growth and diffusion of social capital publications have been studied. The terms connected with social capital in different fields and at different stages of its development have also been analyzed. The methods used in this study are growth and diffusion analysis, content analysis, citation analysis, co-word analysis and co-citation analysis. One way to understand and interpret the results of these bibliometric studies is to interview key persons who are known to have had a gatekeeper position in the diffusion of the concept. Thematic interviews with some Finnish researchers and specialists who have influenced the diffusion of social capital into Finnish scientific and social discussions provide background information. The Milestone Publications on social capital have been chosen and studied. They give answers to the question "What is Social Capital?" By comparing citations to the Milestone Publications with the growth of all social capital publications in a database, we can draw conclusions about the point at which social capital became generally approved 'tacit knowledge'. The contribution of the present study lies foremost in understanding the development of network structures around a new concept that has diffused within scientific communities and also outside them. Network here means networks of researchers, networks of publications and networks of concepts that describe the research field. The emphasis has been on the digital environment and on the so-called information society we now live in, but in this transitional stage printed publications are still important and widely used in the social sciences and humanities. Network formation is affected by social relations and informal contacts that push new ideas forward. This study also gives new information about combining different research methods, such as bibliometric methods supported by interviews and content analyses. It is evident that the interpretation of bibliometric maps presupposes qualitative information and an understanding of the phenomena under study.
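To make the bibliometric machinery concrete: co-citation analysis counts how often two publications appear together in reference lists, and the resulting counts can be mapped to reveal a field's structure around milestone works. The sketch below is a minimal, generic illustration of that counting step; the reference lists are invented placeholders, not data from this study.

```python
from itertools import combinations
from collections import Counter

# Hypothetical reference lists: each citing paper lists the works it cites.
reference_lists = [
    ["Bourdieu1986", "Coleman1988", "Putnam1993"],
    ["Coleman1988", "Putnam1993", "Granovetter1973"],
    ["Putnam1993", "Bourdieu1986"],
]

# Two works are co-cited whenever they appear in the same reference list.
cocitations = Counter()
for refs in reference_lists:
    for a, b in combinations(sorted(set(refs)), 2):
        cocitations[(a, b)] += 1

# The most frequently co-cited pairs suggest works that the field
# treats as belonging together, i.e. candidate milestone publications.
for pair, count in cocitations.most_common(3):
    print(pair, count)
```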
Abstract:
This thesis studies articles published in scientific journals about working capital management, using bibliometric methods. The study was restricted to articles published in 1990-2010 that deal with the working capital management topic as a whole, not a single sub-area of it. Working capital is defined as current assets minus current liabilities; sometimes the definition inventory plus accounts receivable minus accounts payable is also used. The data was retrieved from the databases ISI Web of Science and SciVerse Scopus; 23 articles about working capital management were found. Content analysis, statistical analysis and citation analysis were performed on the articles. The most cited articles found in the citation analysis were also analyzed with nearly the same methods. The study found that scientific research on working capital management does not seem to be concentrated in specific persons, organizations or journals. The originality and novelty of many articles is low. Many articles studied the relation between working capital management and profitability in firms, or the working capital management practices of firms, using statistical analyses. The data in the articles covered firms of all sizes, except that in developing economies only large firms were used. Interesting areas for future research could include surveys of working capital management practices in firms, identification of best practices, tools for working capital management, inventing or improving alternative views of working capital management such as a process-oriented view, and firm- or industry-specific studies.
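For reference, the two definitions of working capital mentioned in the abstract can be written out explicitly; calling the second one "operating working capital" is a common convention, not necessarily the thesis's own term:

```latex
\[
\text{Working capital} = \text{Current assets} - \text{Current liabilities}
\]
\[
\text{Operating working capital} = \text{Inventory} + \text{Accounts receivable} - \text{Accounts payable}
\]
```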
Abstract:
Material for the Integrum database training course, 28.9. - 29.9.2011
Abstract:
Training material for the Integrum database training course, 28.9. - 29.9.2011
Abstract:
Training materials for the Integrum database training course, 28.9. - 29.9.2011
Abstract:
Training materials for the Integrum database training course, 28.9. - 29.9.2011
Abstract:
The purpose of this study is to present the design process of "Off Topic", a card game simulating online discussion forums, from the game developers' perspective. Playtesting is at the centre of the study: how the game was tested, what impact the playtests had, and how the tests changed the game. The research material consists of the documents, notes, player feedback, prototypes and production materials produced during the game design process. Off Topic is the first product in the game publication series of the Digital Culture programme at the University of Turku. The study is applied in nature, since in addition to the written part it also includes a 504-copy print run of the card game. The central research result is a description of the stages of game design, as precisely as possible, through the interaction of the elements involved. The game was designed as a cycle of playtests, prototypes and the changes made to them, a cycle also influenced by our own interests as game designers and our previous experience in the field of game design. The study also deals with the emergence of ideas and with problem solving, especially through testing and creative thinking. The study makes visible the design arc that led to the finished product. It covers failed and unfinished experiments as well as the aha moments that led to the final game mechanics. Describing the process is important also because future game projects can learn from it. The study also offers one model of how a game project can proceed and what stages it involves.
Abstract:
The aim of this master's thesis is to design and implement an efficient intralogistics solution. The thesis examines warehouse management and the control of material flows, as well as the processes belonging to them; a further aim is to make operations more efficient. The thesis uses a constructive research approach, in which an existing problem is first tackled with the help of theory, after which a solution model is developed. The material used in the thesis consists of literature on the topic, scientific articles, the target company's databases and discussions with employees. As a result of the thesis, the company has a modern and efficient intralogistics system in place, and its operations are more transparent, in line with the company's goals. The capital tied up in inventory decreases, and material handling becomes considerably more efficient. Warehouse operations are expected to become over 40% more efficient compared to the current state.
Abstract:
The goal of this study was to explore and understand the definition of technical debt. Technical debt refers to a situation in software development where shortcuts or workarounds are taken in technical decisions. However, the original definition has since been applied to other parts of software development, and it is currently difficult to define technical debt precisely. We used a mapping study process as the research methodology to collect literature related to the research topic. The search process, conducted in scientific literature databases, yielded 159 papers that referred to the original definition of technical debt. From these papers we retrieved 107 definitions, which were split into keywords. The resulting keyword map is one of the main results of this work. In addition, the synonyms and the different types of technical debt found in the definitions were analyzed and added to the map as branches. Overall, 33 keywords or phrases, 6 synonyms and 17 types of technical debt were distinguished.
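As a toy illustration of the keyword-splitting step described above, the sketch below tokenizes definition texts and counts candidate keywords. The definition snippets, stop-word list and counting scheme are invented for illustration; the study's actual coding of its 107 definitions may well have been done differently.

```python
import re
from collections import Counter

# Hypothetical definition snippets standing in for the collected ones.
definitions = [
    "Technical debt is the cost of shortcuts taken in code.",
    "Debt incurred through workarounds that must be repaid by refactoring.",
    "A shortcut that trades long-term quality for short-term gain.",
]

# A tiny, illustrative stop-word list.
STOP_WORDS = {"is", "the", "of", "in", "that", "a", "by", "for", "must", "be", "through"}

# Split each definition into lowercase word tokens and count the keywords.
keyword_counts = Counter()
for definition in definitions:
    words = re.findall(r"[a-z\-]+", definition.lower())
    keyword_counts.update(w for w in words if w not in STOP_WORDS)

# Frequent terms become candidate branches of the keyword map.
print(keyword_counts.most_common(5))
```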
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, there exists an almighty lie detection method that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: What is the applicability of veracity assessment methods that are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. the probability of detecting deception successfully
o Ease of use, i.e. how easy it is to apply the method correctly
o Time required to apply the method reliably
o No need for special equipment
o Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: What kinds of interviewing and interrogation techniques exist, and how could they be used in the intelligence interview context? What kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof? And what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search for and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A Multi Criteria Analysis utilizing the Analytic Hierarchy Process was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to gain firsthand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest rankings in overall applicability: they were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that even the most applicable methods are not entirely trouble-free. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach; there are no quick gains if high accuracy and reliability are desired.
Since most current lie detection studies are concentrated on a scenario where roughly half of the assessed people are totally truthful and the other half are liars presenting a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources. Such a test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones still under development.
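As background to the Multi Criteria Analysis mentioned in this abstract: the Analytic Hierarchy Process derives criterion weights from a pairwise comparison matrix, commonly via its principal eigenvector, and checks the coherence of the judgments with a consistency ratio. Below is a minimal, generic sketch of that step; the comparison values and the choice of three criteria are invented and do not reproduce the thesis's actual judgments.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria
# (e.g. accuracy, ease of use, time required) on Saaty's 1-9 scale.
# A[i, j] expresses how much more important criterion i is than j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector of A gives the priority weights.
eigenvalues, eigenvectors = np.linalg.eig(A)
principal = np.argmax(eigenvalues.real)
weights = np.abs(eigenvectors[:, principal].real)
weights /= weights.sum()

# Consistency ratio checks whether the judgments are coherent;
# CR below roughly 0.1 is conventionally acceptable (RI = 0.58 for n = 3).
n = A.shape[0]
ci = (eigenvalues.real[principal] - n) / (n - 1)
cr = ci / 0.58
print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))
```

In a multi-criteria comparison like the one described above, each candidate method would then be scored against every criterion, and the criterion weights computed this way determine the overall applicability ranking.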