91 results for Online databases
in CentAUR: Central Archive University of Reading - UK
Abstract:
The publication rate of patents can be a useful measure of innovation and productivity in fields of science and technology. To assess the growth in industrially important research, I conducted an appraisal of patents published between 1985 and 2005 in online databases, using keywords chosen to select technologies arising as a result of biological inspiration. Whilst the total number of patents increased over the period examined, those with biomimetic content increased faster as a proportion of total patent publications. Logistic regression analysis suggests that we may be a little over halfway through an initial innovation cycle inspired by biological systems.
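The analysis above fits a growth model to patent counts; the sketch below illustrates that idea only, fitting a logistic curve to hypothetical cumulative counts of biomimetic patents with SciPy. The counts, parameter guesses and printed quantities are illustrative assumptions, not the paper's data or results.

```python
# A minimal sketch (not the paper's actual analysis): fit a logistic growth
# curve to hypothetical cumulative counts of biomimetic patents and read off
# the estimated midpoint of the innovation cycle.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, rate r, midpoint year t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical cumulative patent counts, 1985-2005 (illustrative only).
years = np.arange(1985, 2006)
cumulative = np.array([  5,   8,  13,  20,  30,  44,  62,  85, 115, 150,
                       192, 240, 295, 355, 420, 488, 557, 625, 690, 750, 805], dtype=float)

# Fit the three logistic parameters; rough initial guesses keep the optimiser stable.
(K, r, t0), _ = curve_fit(logistic, years, cumulative, p0=[1500, 0.3, 2000])

print(f"Estimated saturation level K = {K:.0f} patents")
print(f"Inflection (midpoint) year   = {t0:.1f}")
print(f"Fraction of the fitted cycle reached by 2005: {logistic(2005, K, r, t0) / K:.0%}")
```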
Abstract:
This article is concerned with the risks associated with the monopolisation of information that is available from a single source only. Although there is a longstanding consensus that sole-source databases should not receive protection under the EU Database Directive, and there are legislative provisions to ensure that lawful users have access to a database’s contents, Ryanair v PR Aviation challenges this assumption by affirming that the use of non-protected databases can be restricted by contract. Owners of non-protected databases can contractually exclude lawful users from taking the benefit of statutorily permitted uses, because such databases are not covered by the legislation that declares this kind of contract null and void. We argue that this judgment is not consistent with the legislative history and can have a profound impact on the functioning of the digital single market, where new information services, such as meta-search engines or price-comparison websites, base their operation on the systematic extraction and re-utilisation of materials available from online sources. This is an issue that the Commission should address in its forthcoming evaluation of the Database Directive.
Abstract:
Since the internet entered everyday life in the 1990s, the barriers to producing, distributing and consuming multimedia data such as videos, music and ebooks have steadily been lowered for most computer users, so that almost everyone with internet access can join the online communities that produce, consume and share media artefacts. Along with this trend, violations of personal data privacy and copyright have increased, with illegal file sharing rampant across many online communities, particularly for certain music genres and amongst younger age groups. This has had a devastating effect on the traditional media distribution market, in most cases leaving the distribution companies and the content owners with huge financial losses. To prove that a copyright violation has occurred, one can deploy fingerprinting mechanisms to uniquely identify the property; however, current approaches are uni-modal only. In this paper we describe some of the design challenges and architectural approaches to multi-modal fingerprinting currently being examined in evaluation studies within a PhD research programme on the optimisation of multi-modal fingerprinting architectures. We outline the modalities being integrated through this research programme, which aims to establish the optimal architecture for multi-modal media security protection over the internet as the online distribution environment for both legal and illegal distribution of media products.
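As a rough illustration of what combining modalities can look like, the sketch below fuses per-modality fingerprint similarities (toy audio and video feature vectors) into a single match score by weighted late fusion. This is an assumed, simplified scheme for illustration, not the architecture evaluated in the paper.

```python
# A toy late-fusion sketch: each modality yields its own fingerprint vector,
# per-modality similarities are computed independently, and a weighted sum
# gives the final match score.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_match_score(query: dict, reference: dict, weights: dict) -> float:
    """Combine per-modality fingerprint similarities into one score."""
    total, weight_sum = 0.0, 0.0
    for modality, w in weights.items():
        if modality in query and modality in reference:
            total += w * cosine_similarity(query[modality], reference[modality])
            weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# Hypothetical fingerprints: short feature vectors per modality.
rng = np.random.default_rng(0)
reference = {"audio": rng.normal(size=32), "video": rng.normal(size=64)}
query = {"audio": reference["audio"] + rng.normal(scale=0.1, size=32),
         "video": reference["video"] + rng.normal(scale=0.1, size=64)}

score = fused_match_score(query, reference, weights={"audio": 0.5, "video": 0.5})
print(f"Fused similarity: {score:.3f}")  # close to 1.0 for a near-duplicate
```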
Abstract:
Soil data and reliable soil maps are imperative for environmental management, conservation and policy. Data from historical point surveys, e.g. experiment site data and farmers' fields, can serve this purpose. However, legacy soil information is not necessarily collected for spatial analysis and mapping, so the data may not have immediately useful geo-references. Methods are required to utilise these historical soil databases so that we can produce quantitative maps of soil properties, both to assess spatial and temporal trends and to assess where future sampling is required. This paper discusses two such databases: the Representative Soil Sampling Scheme, which has monitored the agricultural soil in England and Wales from 1969 to 2003 (between 400 and 900 bulked soil samples were taken annually from different agricultural fields); and the former State Chemistry Laboratory, Victoria, Australia, where between 1973 and 1994 approximately 80,000 soil samples were submitted for analysis by farmers. Previous statistical analyses have been performed using administrative regions (with sharp boundaries) for both databases, which are largely unrelated to natural features. For a more detailed spatial analysis that can be linked to climate and terrain attributes, gradual variation of these soil properties should be described. Geostatistical techniques such as ordinary kriging are suited to this. This paper describes the format of the databases and initial approaches to how they can be used for digital soil mapping. For this paper we have selected soil pH to illustrate the analyses for both databases.
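As a pointer to how ordinary kriging produces the gradual surfaces mentioned above, here is a minimal sketch that kriges soil pH at an unsampled location from a handful of hypothetical samples. The exponential variogram parameters and sample values are assumptions, not quantities fitted to either database.

```python
# A minimal ordinary-kriging sketch with a fixed exponential variogram
# (illustrative only; variogram parameters are assumed, not fitted).
import numpy as np

def exp_variogram(h, nugget=0.05, sill=0.5, rng_km=50.0):
    """Exponential semivariogram (pH^2 units); rng_km is the range in km."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-np.asarray(h) / rng_km))

def ordinary_kriging(coords, values, target):
    """Predict the value at `target` from the sampled coords/values."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))          # kriging system with unbiasedness row
    A[:n, :n] = exp_variogram(d)
    np.fill_diagonal(A[:n, :n], 0.0)     # semivariogram is zero at lag 0
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(coords - target, axis=1))
    weights = np.linalg.solve(A, b)[:n]  # kriging weights sum to 1
    return float(weights @ values)

# Hypothetical soil-pH samples: (easting, northing) in km, and pH.
coords = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, 15.0], [5.0, 25.0]])
ph = np.array([5.8, 6.1, 6.9, 6.4])
print(f"Kriged pH at (12, 12): {ordinary_kriging(coords, ph, np.array([12.0, 12.0])):.2f}")
```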
Abstract:
Facilitating the visual exploration of scientific data has received increasing attention in the past decade or so. Especially in life-science-related application areas the amount of available data has grown at a breathtaking pace. In this paper we describe an approach that allows for visual inspection of large collections of molecular compounds. In contrast to classical visualizations of such spaces, we incorporate a specific focus of analysis, for example the outcome of a biological experiment such as high-throughput screening results. The presented method uses this experimental data to select molecular fragments of the underlying molecules that have interesting properties, and uses the resulting space to generate a two-dimensional map based on a singular value decomposition algorithm and a self-organizing map. Experiments on real datasets show that the resulting visual landscape groups molecules of similar chemical properties in densely connected regions.
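The sketch below illustrates only the dimensionality-reduction step of such a pipeline: projecting a molecule-by-fragment occurrence matrix onto its first two singular vectors to obtain 2D map coordinates. The matrix is randomly generated for illustration, and the paper's method additionally applies a self-organizing map, which is omitted here.

```python
# A minimal sketch of the SVD projection step: rows = molecules, columns =
# selected fragments (1 if the fragment occurs in the molecule).
import numpy as np

rng = np.random.default_rng(42)
fragments = rng.integers(0, 2, size=(200, 30)).astype(float)  # hypothetical data

# Centre the columns, then take a rank-2 SVD; U[:, :2] * S[:2] gives the
# 2D map coordinates of each molecule.
centred = fragments - fragments.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
coords_2d = U[:, :2] * S[:2]

print(coords_2d.shape)          # (200, 2): one map position per molecule
print(coords_2d[:3].round(3))   # positions of the first three molecules
```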
Abstract:
For those few readers who do not know, CAFS is a system developed by ICL to search through data at speeds of several million characters per second. Its full name is Content Addressable File Store Information Search Processor, CAFS-ISP or CAFS for short. It is an intelligent hardware-based search engine, currently available with both ICL's 2966 family of computers and the recently announced Series 39, operating within the VME environment. It uses content addressing techniques to perform fast searches of data or text stored on discs: almost all fields are equally accessible as search keys. Software in the mainframe generates a search task; the CAFS hardware performs the search and returns the hit records to the mainframe. Because special hardware is used, the searching process is very much more efficient than searching performed by any software method. Various software interfaces are available which allow CAFS to be used in many different situations. CAFS can be used with existing systems without significant change. It can be used to make online enquiries of mainframe files or databases, or directly from user-written high-level language programs. These interfaces are outlined in the body of the report.
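CAFS performs this scanning in special-purpose hardware on the disc data stream; purely as a software analogy of content addressing, the sketch below scans every record, lets any field serve as a search key, and returns the hit records. The record layout and predicate are made up for illustration.

```python
# A software analogy only (CAFS does the equivalent in hardware):
# exhaustively scan records, with every field equally usable as a search key.
from typing import Callable

records = [
    {"name": "Smith", "dept": "Sales", "city": "Reading"},
    {"name": "Jones", "dept": "R&D",   "city": "London"},
    {"name": "Patel", "dept": "Sales", "city": "London"},
]

def content_search(records: list[dict], predicate: Callable[[dict], bool]) -> list[dict]:
    """Return the hit records for an arbitrary predicate over any fields."""
    return [r for r in records if predicate(r)]

hits = content_search(records, lambda r: r["dept"] == "Sales" and r["city"] == "London")
print(hits)  # [{'name': 'Patel', 'dept': 'Sales', 'city': 'London'}]
```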
Research skills audit tool: An online resource to map research skills within undergraduate curricula
Abstract:
The Euro-Mediterranean region is an important centre for the diversity of crop wild relatives. Crops such as oats (Avena sativa), sugar beet (Beta vulgaris), apple (Malus domestica), meadow fescue (Festuca pratensis), white clover (Trifolium repens), arnica (Arnica montana), asparagus (Asparagus officinalis), lettuce (Lactuca sativa) and sage (Salvia officinalis) all have wild relatives in the region. The European Community-funded project PGR Forum (www.pgrforum.org) is building an online information system to provide access to crop wild relative data for a broad user community, including plant breeders, protected area managers, policy-makers, conservationists, taxonomists and the wider public. The system will include data on uses, geographical distribution, biology, population and habitat information, threats (including IUCN Red List assessments) and conservation actions. This information is vital for the continued sustainable utilisation and conservation of crop wild relatives. Two major databases have been utilised as the backbone of a Euro-Mediterranean crop wild relative catalogue, which forms the core of the information system: Euro+Med PlantBase (www.euromed.org.uk) and Mansfeld’s World Database of Agricultural and Horticultural Crops (http://mansfeld.ipk-gatersleben.de). By matching the genera found within the two databases, a preliminary list of crop wild relatives has been produced. Around 20,000 of the 30,000+ species listed in Euro+Med PlantBase can be considered crop wild relatives, i.e. species found within the same genus as a crop. The list is currently being refined by implementing a priority ranking system based on the degree of relatedness of taxa to the associated crop.
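The genus-matching step can be pictured with the toy sketch below: a flora species is flagged as a candidate crop wild relative when its genus also contains a crop. The species lists here are illustrative, not extracts from Euro+Med PlantBase or Mansfeld's database.

```python
# A toy sketch of genus matching between a crop list and a flora checklist.
crop_species = {"Avena sativa", "Beta vulgaris", "Lactuca sativa", "Malus domestica"}
flora_species = {"Avena fatua", "Avena sativa", "Beta macrocarpa",
                 "Lactuca serriola", "Quercus robur"}

# Genus is taken as the first word of the binomial.
crop_genera = {name.split()[0] for name in crop_species}

# Candidate crop wild relatives: same genus as a crop, but not a crop themselves.
candidate_cwr = sorted(
    species for species in flora_species
    if species.split()[0] in crop_genera and species not in crop_species
)
print(candidate_cwr)  # ['Avena fatua', 'Beta macrocarpa', 'Lactuca serriola']
```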
Abstract:
There is a concerted global effort to digitize biodiversity occurrence data from herbarium and museum collections that together offer an unparalleled archive of life on Earth over the past few centuries. The Global Biodiversity Information Facility provides the largest single gateway to these data. Since 2004 it has provided a single point of access to specimen data from databases of biological surveys and collections. Biologists now have rapid access to more than 120 million observations for use in many biological analyses. We investigate the quality and coverage of the data digitally available, from the perspective of a biologist seeking distribution data for spatial analysis on a global scale. We present an example of automatic verification of geographic data, using distributions from the International Legume Database and Information Service to test empirically issues of geographic coverage and accuracy. There are over half a million records covering 31% of all legume species, and 84% of these records pass geographic validation. These data are not yet a global biodiversity resource for all species or all countries. A user will encounter many biases and gaps in these data, which should be understood before the data are used or analyzed. The data are notably deficient in many of the world's biodiversity hotspots. The deficiencies in data coverage can be resolved by an increased application of resources to digitize and publish data throughout these most diverse regions. But in the push to provide ever more data online, we should not forget that consistent data quality is of paramount importance if the data are to be useful in capturing a meaningful picture of life on Earth.
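The kind of automatic geographic verification described can be sketched as a simple membership check: a specimen record passes only if its reported country lies within the species' documented range. The data below are illustrative, not ILDIS or GBIF records, and a real check would also test coordinates against country boundaries.

```python
# A simplified sketch of geographic validation of occurrence records.
# Documented country ranges per species (illustrative, ISO country codes).
documented_range = {
    "Acacia nilotica": {"SD", "KE", "IN"},
    "Trifolium repens": {"GB", "FR", "DE"},
}

records = [
    {"species": "Acacia nilotica",  "country": "KE"},
    {"species": "Acacia nilotica",  "country": "BR"},   # outside documented range
    {"species": "Trifolium repens", "country": "GB"},
]

def passes_geographic_validation(record: dict) -> bool:
    """A record passes if its country is in the species' documented range."""
    return record["country"] in documented_range.get(record["species"], set())

valid = [r for r in records if passes_geographic_validation(r)]
print(f"{len(valid)}/{len(records)} records pass geographic validation")
```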