40 results for Open Information Extraction
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Presentation at the "Tutkimus vapaaksi verkkoon!" seminar in Helsinki, January 25, 2011
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and to gather all information that would be relevant for their research. As a response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged and strives to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale, to scan the whole publicly available biomedical literature and to extract and aggregate the information found within, while automatically normalizing the variability of natural language statements. Among different tasks, biomedical event extraction has recently received much attention within the BioNLP community. Biomedical event extraction is the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009–2013 series of BioNLP Shared Tasks on Event Extraction has given rise to a number of event extraction systems, several of which have been applied at a large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on machine-learning approaches and are trained on narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events, which are spotted by end-users. This thesis proposes a novel post-processing approach, utilizing a combination of supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing the general credibility of those databases. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes/gene products. We cast the hypothesis generation problem as supervised network topology prediction, i.e., predicting new edges in the network, as well as the types and directions of these edges, utilizing a set of features that can be extracted from large biomedical event networks. Routine machine learning evaluation results, as well as manual evaluation results, suggest that the problem is indeed learnable. This work won the Best Paper Award at the 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
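As an illustration of casting hypothesis generation as supervised edge prediction over an event network (a hypothetical sketch, not the system described in the thesis), simple topological features can be computed for candidate gene pairs and a classifier trained on known interactions; the gene names, features and libraries (networkx, scikit-learn) below are assumptions made for the example.

```python
# Hypothetical sketch: supervised edge (link) prediction over a toy event network.
import networkx as nx
from sklearn.linear_model import LogisticRegression

# Toy event network: nodes are genes/gene products, edges are extracted interactions.
G = nx.Graph()
G.add_edges_from([("TP53", "MDM2"), ("MDM2", "AKT1"), ("AKT1", "MTOR"),
                  ("TP53", "ATM"), ("ATM", "CHEK2")])

def pair_features(g, u, v):
    """Simple topological features for a candidate gene pair."""
    common = len(list(nx.common_neighbors(g, u, v)))
    return [common, g.degree(u), g.degree(v)]

# Labelled training pairs: 1 = known interaction, 0 = no known interaction.
train_pairs = [("TP53", "MDM2", 1), ("ATM", "CHEK2", 1),
               ("TP53", "MTOR", 0), ("CHEK2", "MTOR", 0)]
X = [pair_features(G, u, v) for u, v, _ in train_pairs]
y = [label for _, _, label in train_pairs]

clf = LogisticRegression().fit(X, y)
# Score an unseen pair: a high probability suggests a candidate novel interaction.
print(clf.predict_proba([pair_features(G, "MDM2", "ATM")])[0][1])
```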
Abstract:
The purpose of this thesis is to examine the strategic renewal capability of the Eastern Customs District, which serves as the case organization. What kind of starting points does the organization have for facing future challenges in its own operating environment, and what kinds of obstacles to renewal can be found? To succeed in tightening competition, identifying and exploiting the factors that affect renewal, such as competence, information flow, leadership and relationships, is of primary importance for public administration organizations as well. In this thesis, renewal capability is examined in the light of a three-dimensional organizational model (mechanical, organic and dynamic), and the organization's own strategic focus is taken as the starting point for development measures. The research and data collection methods are the KM-factor survey, classified as quantitative and conducted electronically, and qualitative thematic interviews. The results provide strategically important information about the current state of the case organization: its weaknesses and strengths. Based on the results, the organization's way of operating is fairly uniform and in line with its strategic focus, i.e. with the requirements of an organic operating environment. However, development measures should be targeted especially at increasing the personnel's strategy-aligned competence and the supervisors' information flow, and at raising the general level of work motivation throughout the target organization. The organization must create practices that support an atmosphere of open information flow and dialogue-like communication, so that the organization's renewal capability as a unified system can improve further.
Abstract:
The present dissertation examined reading development during the elementary school years by means of eye movement tracking. Three different but related issues in this field were assessed. First, the development of parafoveal processing skills in reading was investigated. Second, it was assessed whether and to what extent sublexical units such as syllables and morphemes are used in processing Finnish words and whether the use of these sublexical units changes as a function of reading proficiency. Finally, the developmental trend in the speed of visual information extraction during reading was examined. With regard to parafoveal processing skills, it was shown that 2nd graders extract letter identity information approx. 5 characters to the right of fixation, 4th graders approx. 7 characters to the right of fixation, and 6th graders and adults approx. 9 characters to the right of fixation. Furthermore, it was shown that all age groups extract more parafoveal information within compound words than across adjective-noun pairs of similar length. In compounds, parafoveal word information can be extracted in parallel with foveal word information if the compound in question is of high frequency. With regard to the use of sublexical units in Finnish word processing, it was shown that less proficient 2nd graders use both syllables and morphemes in the course of lexical access. More proficient 2nd graders as well as older readers seem to process words more holistically. Finally, it was shown that 60 ms is enough for 4th graders and adults to extract visual information from both 4-letter and 8-letter words, whereas 2nd graders clearly needed more than 60 ms to extract all information from 8-letter words for processing to proceed smoothly. The present dissertation demonstrates that Finnish 2nd graders develop their reading skills rapidly and are already at an adult level in some aspects of reading. This is not to say that there are no differences between less proficient readers (e.g., 2nd graders) and more proficient readers (e.g., adults), but in some respects it seems that the visual system used in extracting information from text has matured by the 2nd grade. Furthermore, the present dissertation demonstrates that the allocation of attention in reading depends greatly on textual properties such as word frequency and whether words are spatially unified (as in compounds) or not. This flexibility of the attentional system naturally needs to be captured in word processing models. Finally, individual differences within age groups are quite substantial, but it seems that by the end of the 2nd grade practically all Finnish children have reached a reasonable level of reading proficiency.
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
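To make the leave-pair-out idea concrete, here is a minimal illustrative sketch (not code from the thesis) of leave-pair-out cross-validation for AUC estimation in a bipartite ranking setting; it assumes numpy and scikit-learn and uses a ridge-regularized linear classifier as a stand-in learner. For every positive-negative pair, the model is trained with both examples held out, and the AUC estimate is the fraction of pairs ranked correctly.

```python
# Illustrative sketch: leave-pair-out cross-validation for AUC estimation.
import numpy as np
from sklearn.linear_model import RidgeClassifier  # stand-in learner

def leave_pair_out_auc(X, y, make_model):
    """For every (positive, negative) pair, train on the remaining examples
    and check whether the positive example is ranked above the negative one."""
    pos = np.where(y == 1)[0]
    neg = np.where(y == 0)[0]
    wins, pairs = 0.0, 0
    for i in pos:
        for j in neg:
            mask = np.ones(len(y), dtype=bool)
            mask[[i, j]] = False
            model = make_model().fit(X[mask], y[mask])
            si, sj = model.decision_function(X[[i, j]])
            wins += 1.0 if si > sj else (0.5 if si == sj else 0.0)
            pairs += 1
    return wins / pairs

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=40) > 0).astype(int)
print(leave_pair_out_auc(X, y, RidgeClassifier))
```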
Abstract:
A linear prediction procedure is one of the approved numerical methods of signal processing. In the field of optical spectroscopy it is used mainly for extrapolating known parts of an optical signal in order to obtain a longer signal or to deduce missing signal samples. The former is needed particularly when narrowing spectral lines for the purpose of spectral information extraction. In the present paper, coherent anti-Stokes Raman scattering (CARS) spectra were under investigation. The spectra were significantly distorted by the presence of a nonlinear nonresonant background. In addition, the line shapes were far from Gaussian/Lorentzian profiles. To overcome these disadvantages, the maximum entropy method (MEM) for phase spectrum retrieval was used. The broad MEM spectra obtained then underwent linear prediction analysis in order to be narrowed.
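To make the idea of linear prediction extrapolation concrete, the following is a minimal illustrative sketch (not the code used in the paper), assuming only numpy: autoregressive coefficients are fitted to a known signal segment by least squares and then used to predict samples beyond it.

```python
# Illustrative sketch: forward linear prediction (AR extrapolation) of a signal.
import numpy as np

def lp_extrapolate(x, order, n_extra):
    """Least-squares fit of an AR(order) model, then predict n_extra new samples."""
    # Each row of A holds `order` consecutive past samples; b holds the next sample.
    A = np.array([x[i:i + order] for i in range(len(x) - order)])
    b = x[order:]
    coeffs = np.linalg.lstsq(A, b, rcond=None)[0]
    out = list(x)
    for _ in range(n_extra):
        out.append(np.dot(out[-order:], coeffs))  # predict the next sample
    return np.array(out)

t = np.linspace(0, 1, 200)
signal = np.cos(2 * np.pi * 25 * t) + 0.5 * np.cos(2 * np.pi * 60 * t)
extended = lp_extrapolate(signal[:150], order=20, n_extra=50)
```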
Abstract:
Today, organizations live in a time of almost constant change. In order to get through changes successfully, employees' trust in the organization should be kept strong even in the midst of change. Carrying changes through is easier when trust is strong; on the other hand, changes challenge trust. The aim of this thesis is to contribute to the scholarly discussion on the significance of trust in the work community by highlighting the role of the immediate supervisor as a builder of trust and by emphasizing the special characteristics that change situations bring to building and maintaining trust. The aim was pursued by analysing qualitative data obtained from thematic interviews with 11 employees and 5 supervisors working in Kela's Kymenlaakso insurance district. Based on the results, trust in the work community is important, and it is emphasized further in change situations. The immediate supervisor plays a significant role in maintaining and building trust in the work community and can build the employee's trust not only in the supervisor but also in the whole organization. In change situations, the supervisor can build trust especially through timely informing and communication, discussion, taking the individual's personal needs into account, openness, and taking care of the employee's training and competence development. Open communication and a dialogue between the parties thus appear to be the starting point for everything.
Abstract:
In this study we used market settlement prices of European call options on stock index futures to extract the implied probability distribution function (PDF). The method used produces a PDF of the returns of an underlying asset at the expiration date from the implied volatility smile. With this method, the assumption of a lognormal distribution (Black-Scholes model) is tested. The market view of the asset price dynamics can then be used for various purposes (hedging, speculation). We used the so-called smoothing approach for implied PDF extraction presented by Shimko (1993). In our analysis we obtained implied volatility smiles from index futures markets (S&P 500 and DAX indices) and standardized them. The method introduced by Breeden and Litzenberger (1978) was then used for PDF extraction. The results show significant deviations from the assumption of lognormal returns for S&P 500 options, while DAX options mostly fit the lognormal distribution. A deviant subjective view of the PDF can be used to form a strategy, as discussed in the last section.
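For reference, the Breeden and Litzenberger (1978) relation used for PDF extraction states that the risk-neutral density of the underlying at expiration is the discounted second derivative of the call price with respect to the strike; with C(K, T) the call price as a function of strike K and time to expiration T, and r the risk-free rate:

```latex
f_T(K) \;=\; e^{rT}\,\frac{\partial^{2} C(K,T)}{\partial K^{2}}
```

In practice the derivative is approximated numerically, for example by finite differences over a dense strike grid obtained from the smoothed and standardized volatility smile.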
Abstract:
Taking a realist view that law is one form of politics, this dissertation studies the roles of citizens and organizations in mobilizing the law to request government agencies to disclose environmental information in China, and, during this process, how the socio-legal field interacts with the political-legal sphere and what changes have been brought about by their interactions. This work takes a socio-legal approach and applies methodologies of social science and legal analysis. It aims to understand the paradox of why and how citizens and entities have been invoking the law to access environmental information despite the fact that various obstacles exist and the effectiveness of the new mechanism of environmental information disclosure remains low. The study is largely based on 28 cases and eight surveys of environmental information disclosure requests collected by the author. The cases and surveys analysed in this dissertation all occurred between May 2008, when the OGI Regulations and the OEI Measures came into effect, and August 2012, when the case collection was completed. The findings of this study show that by invoking the rules of law made by the authorities to demand that government agencies disclose environmental information, the public, including citizens, organizations, law firms, and the media, have strategically created repercussive pressure on the authorities to act according to the law. While it was a top-down process that established the mechanism of open government information in China, it is the bottom-up activism of the public that makes it work. Citizens' and organizations' use of legal tactics to push government agencies to disclose environmental information is not only an end in itself, accessing the information, but even more a means of holding government agencies accountable to their legal obligations. Law has thus played a pivotal role in enabling citizen participation in the political process. Given that in China political campaigns, or politicization, from general elections to collective actions, and especially contentious actions, are still restrained or even repressed by the government, legal mobilization, or judicialization, whereby citizens and organizations use legal tactics to demand their rights and push government agencies to enforce the law, has become a de facto alternative form of political participation. During this process, legal actions have helped to strengthen civil society, make government agencies act according to law, push back the political boundaries, and induce changes in the relationship between the state and the public. In the field of environmental information disclosure, citizens and organizations have formed a bottom-up social activism, though limited in scope, using the language of law and creating progressive social, legal and political changes. This study emphasizes that it is partial and incomplete to understand China's transition only in terms of top-down policy-making and government administration; it is also important to observe it from the bottom-up perspective that, in a realist view, law can be part of politics, and legal mobilization, even when utterly apolitical, can help to achieve political aims as well.
This study of legal mobilization in the field of environmental information disclosure also helps us to better understand the function of law: law is not only a tool for the authorities to regulate and control, but inevitably also a weapon for the public to demand that government agencies fulfil the obligations stipulated by the laws the authorities themselves have issued.
Abstract:
Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations such as protein-protein interactions (PPI) from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative to simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments, and the ability to nest other events. For example, the sentence "Protein A causes protein B to bind protein C" can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information in natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations, and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. We show that this event extraction system performs well, reaching first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, as well as showing competitive performance in the binary relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail as well as making the method available for practical applications. In particular, in this thesis we describe the application of the event extraction method to PubMed-scale text mining, showing that the developed approach not only performs well but is also generalizable and applicable to large-scale, real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that led to the development of TEES. The entries in the three BioNLP Shared Tasks, as well as in the DDIExtraction 2011 task, are covered in four publications, and the sixth demonstrates the application of the system to PubMed-scale text mining.
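As a rough illustration of the kind of unified graph format described above (not the actual TEES data model), the nested event CAUSE(A, BIND(B, C)) can be encoded with entity and trigger nodes plus typed, directed argument edges; the node, type and role names below are assumptions made for the example, and networkx is used for convenience.

```python
# Hypothetical sketch of an event graph for
# "Protein A causes protein B to bind protein C".
import networkx as nx

g = nx.DiGraph()
# Entity nodes.
for protein in ("A", "B", "C"):
    g.add_node(protein, kind="entity", type="Protein")
# Trigger nodes for the nested events CAUSE(A, BIND(B, C)); type names are assumed.
g.add_node("causes", kind="trigger", type="Cause")
g.add_node("bind", kind="trigger", type="Binding")
# Typed, directed argument edges; nesting is simply an edge into another trigger.
g.add_edge("causes", "A", role="Cause")
g.add_edge("causes", "bind", role="Theme")
g.add_edge("bind", "B", role="Theme")
g.add_edge("bind", "C", role="Theme2")

# Extraction can then be decomposed into classifying candidate nodes
# (trigger detection) and candidate edges (argument detection) independently.
print(list(g.edges(data=True)))
```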
Abstract:
Perceiving the world visually is a basic act for humans, but for computers it is still an unsolved problem. The variability present in natural environments is an obstacle for effective computer vision. The goal of invariant object recognition is to recognise objects in a digital image despite variations in, for example, pose, lighting or occlusion. In this study, invariant object recognition is considered from the viewpoint of feature extraction. The differences between local and global features are studied with emphasis on Hough transform and Gabor filtering based feature extraction. The methods are examined with respect to four capabilities: generality, invariance, stability, and efficiency. Invariant features are presented using both Hough transform and Gabor filtering. A modified Hough transform technique is also presented where the distortion tolerance is increased by incorporating local information. In addition, methods for decreasing the computational costs of the Hough transform employing parallel processing and local information are introduced.
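As a simple illustration of Gabor-filtering-based feature extraction (an assumed sketch, not the methods developed in the study), a small filter bank over several orientations can be convolved with an image and summary statistics of the responses used as features; this assumes OpenCV and numpy.

```python
# Illustrative sketch: Gabor filter bank responses as orientation-sampled features.
import cv2
import numpy as np

def gabor_features(gray, n_orientations=4):
    """Mean/std of Gabor responses over several orientations."""
    feats = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        # ksize, sigma, theta, wavelength, aspect ratio, phase offset
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
        response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.std()])
    return np.array(feats)

image = np.random.rand(64, 64).astype(np.float32)  # placeholder for a real image
print(gabor_features(image))
```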
Abstract:
The patent system was created for the purpose of promoting innovation by granting inventors a legally defined right to exclude others in return for public disclosure. Today, patents are being applied for and granted in greater numbers than ever, particularly in new areas such as biotechnology and information and communications technology (ICT), in which research and development (R&D) investments are also high. At the same time, the patent system has been heavily criticized. It has been claimed that it discourages rather than encourages the introduction of new products and processes, particularly in areas that develop quickly, lack one-product-one-patent correlation, and in which the emergence of patent thickets is characteristic. A further concern, which is particularly acute in the U.S., is the granting of so-called 'bad patents', i.e. patents that do not factually fulfil the patentability criteria. From the perspective of technology-intensive companies, patents could, irrespective of the above, be described as the most significant intellectual property right (IPR), having the potential of being used to protect products and processes from imitation, to limit competitors' freedom-to-operate, to provide such freedom to the company in question, and to exchange ideas with others. In fact, patents define the boundaries of ownership in relation to certain technologies. They may be sold or licensed on their own, or they may be components of all sorts of technology acquisition and licensing arrangements. Moreover, with the possibility of patenting business-method inventions in the U.S., patents are becoming increasingly important for companies basing their businesses on services. The value of a patent depends on the value of the invention it claims and on how it is commercialized. Thus, most patents are worth very little, and most inventions are not worth patenting: it may be possible to protect them in other ways, and the costs of protection may exceed the benefits. Moreover, instead of making all inventions proprietary and seeking to appropriate as high returns on investments as possible through patent enforcement, it is sometimes better to allow some of them to be disseminated freely in order to maximize market penetration. In fact, the ideology of openness is well established in the software sector, which has been the breeding ground for the open-source movement, for instance. Furthermore, industries, such as ICT, that benefit from network effects do not shun the idea of setting open standards or opening up their proprietary interfaces to allow everyone to design products and services that are interoperable with theirs. The problem is that even though patents do not, strictly speaking, prevent access to protected technologies, they have the potential of doing so, and conflicts of interest are not rare. The primary aim of this dissertation is to increase understanding of the dynamics and controversies of the U.S. and European patent systems, with the focus on the ICT sector. The study consists of three parts. The first part introduces the research topic and the overall results of the dissertation. The second part comprises a publication in which academic, political, legal and business developments that concern software and business-method patents are investigated, and contentious areas are identified. The third part examines the problems with patents and open standards, both of which carry significant economic weight in the ICT sector. Here, the focus is on so-called submarine patents, i.e. patents that remain unnoticed during the standardization process and then emerge after the standard has been set. The factors that contribute to the problems are documented, and the practical and juridical options for alleviating them are assessed. In total, the dissertation provides a good overview of the challenges and pressures for change that the patent system is facing, and of how these challenges are reflected in standard setting.
Abstract:
1. Introduction. "The one that has compiled ... a database, the collection, securing the validity or presentation of which has required an essential investment, has the sole right to control the content over the whole work or over either a qualitatively or quantitatively substantial part of the work both by means of reproduction and by making them available to the public", Finnish Copyright Act, section 49.1. These are the laconic words that implemented the much-awaited and hotly debated European Community Directive on the legal protection of databases, the EDD, into Finnish copyright legislation in 1998. Now, in the year 2005, after more than half a decade of domestic implementation, the proper meaning and construction of the convoluted qualitative criteria that the current legislation employs as a prerequisite for database protection remain uncertain, both in Finland and within the European Union. Further, this opaque pan-European instrument has the potential of bringing about a number of far-reaching economic and cultural ramifications, which have remained largely uncharted or unobserved. Thus the task of understanding this particular and currently peculiarly European new intellectual property regime is twofold: first, to understand the mechanics and functioning of the EDD, and second, to realise the potential and risks inherent in the new legislation in its economic, cultural and societal dimensions.

2. Subject-matter of the study: basic issues. The first part of the task mentioned above is straightforward: questions such as what is meant by the key concepts triggering the functioning of the EDD, such as the presentation of independent information, what constitutes an essential investment in acquiring data, and when the reproduction of a given database reaches, either qualitatively or quantitatively, the threshold of substantiality before the right-holder of a database can avail himself of the remedies provided by the statutory framework, remain unclear and call for careful analysis. As for the second task, it is already obvious that the practical importance of the legal protection provided by the database right is increasing rapidly. The accelerating transformation of information into digital form is an existing fact, not merely a reflection of the shape of things to come. To take a simple example, the digitisation of a map, traditionally in paper format and protected by copyright, can provide the consumer markedly easier and faster access to the wanted material, and the price can be, depending on the current state of the marketplace, cheaper than that of the traditional form or even free, by means of public lending libraries providing access to the information online. This also renders it possible for authors and publishers to make available and sell their products to markedly larger, international markets, while production and distribution costs can be kept to a minimum thanks to the new electronic production, marketing and distribution mechanisms, to mention a few. The troublesome side for authors and publishers is the vastly enhanced potential for illegal copying by electronic means, producing numerous virtually identical copies at speed. The fear of illegal copying can lead to stark technical protection that in turn can dampen the demand for information goods and services and, furthermore, efficiently hamper the right of access to materials lawfully available in electronic form, and thus weaken the possibility of access to information, education and the cultural heritage of a nation or nations, a condition precedent for a functioning democracy.

3. Particular issues in the digital economy and information networks. All that is said above applies a fortiori to databases. As a result of the ubiquity of the Internet and the pending breakthrough of the mobile Internet, peer-to-peer networks, and local and wide local area networks, a rapidly increasing amount of information not protected by traditional copyright, such as various lists, catalogues and tables, previously protected partially by the old section 49 of the Finnish Copyright Act, is available free or for consideration on the Internet; by the same token, importantly, numerous databases are collected in order to enable the marketing, tendering and selling of products and services in the above-mentioned networks. Databases and the information embedded therein constitute a pivotal element in virtually any commercial operation, including product and service development, scientific research and education. A poignant but not instantly obvious example of this is a database consisting of the physical coordinates of a certain selected group of customers for marketing purposes through cellular phones, laptops and several handheld or vehicle-based devices connected online. These practical needs call for answers to a plethora of questions already outlined above: Has the collection and securing of the validity of this information required an essential input? What qualifies as a quantitatively or qualitatively significant investment? According to the Directive, a database comprises works, information and other independent materials which are arranged in a systematic or methodical way and are individually accessible by electronic or other means. Under what circumstances, then, are the materials regarded as arranged in a systematic or methodical way? Only when the protected elements of a database are established does the question concerning the scope of protection become acute. In the digital context, the traditional notions of reproduction and making available to the public of digital materials seem to fit ill or lead to interpretations that are at variance with the analogous domain as regards the lawful and illegal uses of information. This may well interfere with or rework the way in which commercial and other operators have to establish themselves and function in the existing value networks of information products and services.

4. International sphere. After the expiry of the implementation period for the European Community Directive on the legal protection of databases, the goals of the Directive must have been consolidated into the domestic legislation of the current twenty-five Member States of the European Union. On the one hand, these fundamental questions readily imply that the problems related to the correct construction of the Directive underlying the domestic legislation transcend national boundaries. On the other hand, disputes arising on account of the implementation and interpretation of the Directive at the European level attract significance domestically. Consequently, guidelines on the correct interpretation of the Directive, importing practical, business-oriented solutions, may well have application at the European level. This underlines the exigency of a thorough analysis of the implications of the meaning and potential scope of database protection in Finland and the European Union. This position has to be contrasted with the larger, international sphere, which in early 2005 differs markedly from the European Union stance, directly having a negative effect on international trade, particularly in digital content. A particular case in point is the USA, a database producer primus inter pares, which does not, at least yet, have a Sui Generis database regime or its kin, while both the political and academic discourse on the matter abounds.

5. The objectives of the study. The above-mentioned background, with its several open issues, calls for a detailed study of the following questions: What is a database-at-law, and when is a database protected by intellectual property rights, particularly by the European database regime? What is the international situation? How is a database protected, and what is its relation to other intellectual property regimes, particularly in the digital context? What opportunities and threats does the current protection present to creators, users and society as a whole, including the commercial and cultural implications? And, finally, the difficult question of the relation between database protection and the protection of factual information as such.

6. Disposition. The study, in purporting to analyse and cast light on the questions above, is divided into three main parts. The first part has the purpose of introducing the political and rational background and the subsequent legislative evolution path of European database protection, reflected against the international backdrop on the issue. An introduction to databases, originally a vehicle of modern computing and information and communication technology, is also incorporated. The second part sets out the chosen and existing two-tier model of database protection, reviewing both its copyright and Sui Generis right facets in detail, together with the emergent application of the machinery in real-life societal and particularly commercial contexts. Furthermore, a general outline of copyright, relevant in the context of copyright databases, is provided. For purposes of further comparison, a chapter on the precursor of the Sui Generis database right, the Nordic catalogue rule, also ensues. The third and final part analyses the positive and negative impact of the database protection system and attempts to scrutinize the implications further into the future, with some caveats and tentative recommendations, in particular as regards the convoluted issue concerning the IPR protection of information per se, a new tenet in the domain of copyright and related rights.
Abstract:
The purpose of this thesis is to investigate projects funded under the European 7th Framework Programme's Information and Communication Technology work programme. The research is limited to the theme "Pervasive and trusted network and service infrastructure", and the aim is to find out the most important topics on which research will concentrate in the future. The thesis will provide important information for the Department of Information Technology at Lappeenranta University of Technology. First, the thesis investigates the requirements for the projects funded under the "Pervasive and trusted network and service infrastructure" programme in 2007. Second, the projects funded under that programme are listed in tables and the most important keywords are gathered. Finally, based on keyword appearances, a vision of the most important future topics is defined. According to the keyword analysis, wireless networks will play an important role in the future, and core networks will be implemented with fiber technology to ensure fast data transfer. Software development favors Service Oriented Architecture (SOA) and open source solutions. Interoperability and ensuring privacy are also key concerns for the future, as are 3D in all its forms and content delivery. When all the projects were compared, the most important issue was found to be SOA, which leads the way to cloud computing.
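As a sketch of the kind of keyword-appearance analysis described (hypothetical, not the thesis procedure), keyword frequencies across project descriptions can simply be tallied and ranked; the project names and keywords below are placeholders.

```python
# Hypothetical sketch: rank topics by how often keywords appear across projects.
from collections import Counter

projects = {
    "Project A": ["SOA", "cloud computing", "wireless networks"],
    "Project B": ["SOA", "privacy", "interoperability"],
    "Project C": ["wireless networks", "fiber", "3D", "content delivery"],
}

counts = Counter(kw for keywords in projects.values() for kw in keywords)
for keyword, n in counts.most_common():
    print(f"{keyword}: {n}")
```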