909 results for Keys to Database Searching
Abstract:
Vegetation maps and bioclimatic zone classifications communicate the vegetation of an area and are used to explain how the environment regulates the occurrence of plants on large scales. Many practices and methods for dividing the world’s vegetation into smaller entities have been presented. Climatic parameters, floristic characteristics, or edaphic features have been relied upon as decisive factors, and plant species have been used as indicators for vegetation types or zones. Systems depicting vegetation patterns that mainly reflect climatic variation are termed ‘bioclimatic’ vegetation maps. Based on these, it has been judged logical to deduce that plants moved between corresponding bioclimatic areas should thrive in the target location, whereas plants moved from a different zone should languish. This principle is routinely applied in forestry and horticulture, but actual tests of the validity of bioclimatic maps in this sense seem scanty. In this study I tested the Finnish bioclimatic vegetation zone system (BZS). Relying on the Helsinki University Botanic Garden’s Kumpula collection, which according to the BZS is situated at the northern limit of the hemiboreal zone, I aimed to test how the plants’ survival depends on their provenance. My expectation was that plants from the hemiboreal or southern boreal zones should do best in Kumpula, whereas plants from more southern and more northern zones should show progressively lower survival probabilities. I estimated the probability of survival using logistic regression models and collection database information on plant accessions of known wild origin grown in Kumpula since the mid-1990s. The total number of accessions I included in the analyses was 494. Because of problems with some accessions I chose to separately analyse a subset of the complete data, which included 379 accessions. I also analysed different growth forms separately in order to identify differences in probability of survival due to different life strategies. In most analyses, accessions of temperate and hemiarctic origin showed lower survival probability than those originating from any of the boreal subzones, which among them exhibited rather evenly high probabilities. Exceptionally mild and wet winters during the study period may have killed off hemiarctic plants. Some winters may have been too harsh for temperate accessions. Trees behaved differently: they showed an almost steadily increasing survival probability from temperate to northern boreal origins. Various factors that could not be controlled for may have affected the results, some of which were difficult to interpret. This was the case in particular with herbs, for which the reliability of the analysis suffered because of difficulties in managing their curatorial data. In all, the results gave some support to the BZS, and especially its hierarchical zonation. However, I question the validity of the formulation of the hypothesis I tested, since it may not be entirely justified by the BZS, which was designed for intercontinental comparison of vegetation zones, but not specifically for transcontinental provenance trials. I conclude that botanic gardens should pay due attention to information management and curatorial practices to ensure the widest possible applicability of their plant collections.
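As an illustration of the kind of analysis described above, survival-versus-provenance comparisons reduce to a logistic regression with a categorical predictor. The following minimal Python sketch (pandas plus statsmodels) shows the general idea only; the file name and the column names 'zone' and 'alive' are hypothetical and do not reflect the thesis's actual curatorial database schema.

    # Minimal sketch of a logistic regression of survival on provenance zone.
    # The CSV file and the column names ('zone', 'alive') are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    accessions = pd.read_csv("kumpula_accessions.csv")   # one row per accession
    # 'zone' is the provenance zone (temperate, hemiboreal, ..., hemiarctic);
    # 'alive' is 1 if the accession survived the study period, 0 otherwise.
    model = smf.logit("alive ~ C(zone)", data=accessions).fit()
    print(model.summary())

    # Predicted survival probability for each provenance zone
    zones = sorted(accessions["zone"].unique())
    print(model.predict(pd.DataFrame({"zone": zones})))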
Abstract:
Use of adverse drug combinations, abuse of medicinal drugs and substance abuse are considerable social problems that are difficult to study. Prescription database studies might fail to incorporate factors like use of over-the-counter drugs and patient compliance, and spontaneous reporting databases suffer from underreporting. Substance abuse and smoking studies might be impeded by poor participation rates and reliability. The Forensic Toxicology Unit at the University of Helsinki is the only laboratory in Finland that performs forensic toxicology related to cause-of-death investigations, comprising the analysis of over 6,000 medico-legal cases yearly. The analysis repertoire covers most commonly used drugs and drugs of abuse, and the ensuing database also contains background information and information extracted from the final death certificate. In this thesis, the data stored in this comprehensive post-mortem toxicology database were combined with additional metabolite and genotype analyses that were performed to complete the profile of selected cases. The incidence of drug combinations possessing serious adverse drug interactions was generally low (0.71%), but it was notable for the two individually studied drugs, the common anticoagulant warfarin (33%) and the new-generation antidepressant venlafaxine (46%). Serotonin toxicity and adverse cardiovascular effects were the most prominent possible adverse outcomes. However, the specific role of the suspected adverse drug combinations was rarely recognized in the death certificates. The frequency of bleeds was observed to be elevated when paracetamol and warfarin were used concomitantly. Pharmacogenetic factors did not play a major role in fatalities related to venlafaxine, but the presence of interacting drugs was more common in cases showing high venlafaxine concentrations. Nicotine findings in deceased young adults were roughly three times more prevalent than the estimated smoking frequency in the living population. Contrary to previous studies, no difference in the proportion of suicides was observed between nicotine users and non-users. However, findings of abused substances, including abused prescription drugs, were more common in the nicotine-user group than in the non-user group. The results of the thesis are important for forensic and clinical medicine, as well as for public health. The possibility of drug interactions and pharmacogenetic issues should be taken into account in cause-of-death investigations, especially in unclear cases, suspected medical malpractice and cases where toxicological findings are scarce. Post-mortem toxicological epidemiology is a new field of research that can help to reveal problems in drug use and prescription practices.
Abstract:
The paper deals with a model-theoretic approach to clustering. The approach can be used to generate cluster descriptions based on knowledge alone. Such a process of generating descriptions would be extremely useful in clustering partially specified objects. A natural byproduct of the proposed approach is that missing values of attributes of an object can be estimated with ease in a meaningful fashion. An important feature of the approach is that noisy objects can be detected effectively, leading to the formation of natural groups. The proposed algorithm is applied to a library database consisting of a collection of books.
Abstract:
Determining the sequence of amino acid residues in a heteropolymer chain of a protein with a given conformation is a discrete combinatorial problem that is not generally amenable to gradient-based continuous optimization algorithms. In this paper we present a new approach to this problem using continuous models. In this modeling, continuous "state functions" are proposed to designate the type of each residue in the chain. Such a continuous model helps define a continuous sequence space in which a chosen criterion is optimized to find the most appropriate sequence. Searching a continuous sequence space using a deterministic optimization algorithm makes it possible to find the optimal sequences with much less computation than many other approaches. The computational efficiency of this method is further improved by combining it with a graph spectral method, which explicitly takes into account the topology of the desired conformation and also helps make the combined method more robust. The continuous modeling used here appears to have additional advantages in mimicking the folding pathways and in creating the energy landscapes that help find sequences with high stability and kinetic accessibility. To illustrate the new approach, a widely used simplifying assumption is made by considering only two types of residues: hydrophobic (H) and polar (P). Self-avoiding compact lattice models are used to validate the method with known results in the literature and data that can be practically obtained by exhaustive enumeration on a desktop computer. We also present examples of sequence design for the HP models of some real proteins, which are solved in less than five minutes on a single-processor desktop computer. Some open issues and future extensions are noted.
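To make the HP simplification above concrete, the toy Python sketch below exhaustively scores every H/P sequence of a short chain on a fixed 2D lattice conformation using the standard HP contact energy (-1 per non-bonded H-H contact). It is a brute-force illustration of the combinatorial sequence space, not the authors' continuous or graph-spectral method, and the example conformation is invented.

    # Toy illustration of the HP-model sequence space: for a fixed self-avoiding
    # 2D lattice conformation, exhaustively score every H/P sequence with the
    # standard HP contact energy (-1 per non-bonded H-H lattice contact).
    from itertools import product

    # Hypothetical 8-residue compact conformation given as lattice coordinates.
    conformation = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1), (0, 2), (1, 2)]

    def hp_energy(sequence, coords):
        """Count -1 for each pair of H residues that are lattice neighbours
        but not consecutive along the chain."""
        energy = 0
        for i in range(len(coords)):
            for j in range(i + 2, len(coords)):  # skip chain-bonded neighbours
                if sequence[i] == "H" and sequence[j] == "H":
                    (xi, yi), (xj, yj) = coords[i], coords[j]
                    if abs(xi - xj) + abs(yi - yj) == 1:
                        energy -= 1
        return energy

    # Exhaustive enumeration over the 2^8 = 256 sequences; feasible only for
    # short chains, which is exactly the scaling problem the paper addresses.
    scores = {"".join(s): hp_energy(s, conformation) for s in product("HP", repeat=8)}
    best = min(scores, key=scores.get)
    print(best, scores[best])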
Abstract:
We present WebGeSTer DB, the largest database of intrinsic transcription terminators (http://pallab.serc.iisc.ernet.in/gester). The database comprises a million terminators identified in 1,060 bacterial genome sequences and 798 plasmids. Users can obtain both graphic and tabular results on putative terminators based on default or user-defined parameters. The results are arranged in different tiers to facilitate retrieval, as per specific requirements. An interactive map has been incorporated to visualize the distribution of terminators across the whole genome. Analysis of the results, both at the whole-genome level and with respect to terminators downstream of specific genes, offers insight into the prevalence of canonical and non-canonical terminators across different phyla. The data in the database reinforce the paradigm that intrinsic termination is a conserved and efficient regulatory mechanism in bacteria. Our database is freely accessible.
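Intrinsic (rho-independent) terminators are typically characterized by a GC-rich stem-loop followed by a U-rich tract. The rough Python sketch below scans a sequence for such a motif; the stem, loop and U-tract thresholds are illustrative only and do not reproduce WebGeSTer's actual algorithm or scoring.

    # Rough sketch of an intrinsic-terminator motif scan: look for an inverted
    # repeat (hairpin stem) followed closely by a T-rich stretch on the coding
    # strand. Thresholds below are illustrative, not WebGeSTer's parameters.
    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def revcomp(s):
        return s.translate(COMPLEMENT)[::-1]

    def find_hairpin_terminators(seq, stem=6, loop_min=3, loop_max=8, u_tract=5):
        hits = []
        for i in range(len(seq) - (2 * stem + loop_min + u_tract)):
            left = seq[i:i + stem]
            for loop in range(loop_min, loop_max + 1):
                j = i + stem + loop
                right = seq[j:j + stem]
                tail = seq[j + stem:j + stem + u_tract]
                if len(right) == stem and left == revcomp(right) and tail.count("T") >= u_tract - 1:
                    hits.append((i, left, seq[i + stem:j], right, tail))
        return hits

    example = "ATGGGGCGCTTTAAGCGCCCTTTTTTACGT"   # invented sequence with one hairpin
    print(find_hairpin_terminators(example))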
Abstract:
Purpose - There are many library automation packages available as open-source software, comprising two modules: a staff-client module and an online public access catalogue (OPAC). Although the OPACs of these library automation packages provide advanced features for searching and retrieval of bibliographic records, none of them facilitates full-text searching. Most of the available open-source digital library software facilitates indexing and searching of full-text documents in different formats. This paper makes an effort to enable full-text search features in the widely used open-source library automation package Koha, by integrating it with two open-source digital library software packages, Greenstone Digital Library Software (GSDL) and Fedora Generic Search Service (FGSS), independently. Design/methodology/approach - The implementation is done by making use of the Search and Retrieval by URL (SRU) feature available in Koha, GSDL and FGSS. The full-text documents are indexed both in Koha and in GSDL and FGSS. Findings - Full-text searching capability in Koha is achieved by integrating either GSDL or FGSS into Koha and by passing an SRU request to GSDL or FGSS from Koha. The full-text documents are indexed both in the library automation package (Koha) and in the digital library software (GSDL, FGSS). Originality/value - This is the first implementation enabling the full-text search feature in library automation software by integrating it with digital library software.
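Since SRU is a plain HTTP protocol, the hand-off described above amounts to forwarding a searchRetrieve request and rendering the XML response. The Python sketch below shows a generic SRU query; the endpoint URL and CQL query are placeholders, not the actual Koha, GSDL or FGSS configuration.

    # Generic SRU searchRetrieve request, as a sketch of the kind of hand-off
    # described above. The base URL is a placeholder; real Koha/GSDL/FGSS
    # endpoints depend on the local installation.
    import urllib.parse
    import urllib.request

    base_url = "http://localhost:8080/sru"              # hypothetical SRU endpoint
    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": '"digital libraries"',                  # CQL query
        "maximumRecords": "10",
        "recordSchema": "dc",                            # Dublin Core records
    }
    url = base_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        print(response.read().decode("utf-8"))           # raw SRU XML response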
Abstract:
CDS/ISIS, an advanced non-numerical information storage and retrieval software package, was developed by UNESCO. With the emergence of WWW technology, most information activities are becoming Web-centric. Libraries and information providers are taking advantage of these Internet developments to provide access to their resources and information on the Web. A number of tools are now available for publishing CDS/ISIS databases on the Internet. One such tool is the WWWISIS Web gateway software, developed by BIREME, Brazil. This paper illustrates porting sample records from a bibliographic database into CDS/ISIS, and then publishing this database on the Internet using WWWISIS.
Abstract:
With the emergence of the Internet, the global connectivity of computers has become a reality. The Internet has progressed to provide many user-friendly tools like Gopher, WAIS and the WWW for information publishing and access. The WWW, which integrates all other access tools, also provides a very convenient means for publishing and accessing multimedia and hypertext-linked documents stored in computers spread across the world. With the emergence of WWW technology, most information activities are becoming Web-centric. Once information is published on the Web, a user can access it from any part of the world. A Web browser like Netscape or Internet Explorer is used as a common user interface for accessing information and databases. This greatly relieves a user from learning the search syntax of individual information systems. Libraries are taking advantage of these developments to provide access to their resources on the Web. CDS/ISIS is a very popular bibliographic information management software package used in India. In this tutorial we present details of integrating CDS/ISIS with the WWW. A number of tools are now available for making CDS/ISIS databases accessible on the Internet/Web. Some of these are (1) the WAIS_ISIS server, (2) the WWWISIS server and (3) the IQUERY server. In this tutorial, we have explained in detail the steps involved in providing Web access to an existing CDS/ISIS database using the freely available software WWWISIS. This software is developed, maintained and distributed by BIREME, the Latin American & Caribbean Centre on Health Sciences Information. WWWISIS acts as a server for CDS/ISIS databases in a WWW client/server environment. It supports functions for searching, formatting and data entry operations over CDS/ISIS databases. WWWISIS is available for various operating systems. We have tested this software on Windows 95, Windows NT and Red Hat Linux release 5.2 (Apollo), kernel 2.0.36, on an i686. The testing was carried out using IISc's main library's OPAC containing more than 80,000 records and Current Contents issues (bibliographic data) containing more than 25,000 records. WWWISIS is fully compatible with the CDS/ISIS 3.07 file structure. However, on a system running Unix or its variants, there is no guarantee of this compatibility. It is therefore safe to recreate the master and inverted files under the Unix environment, using utilities provided by BIREME.
Abstract:
The ability to metabolize aromatic beta-glucosides such as salicin and arbutin varies among members of the Enterobacteriaceae. The ability of Escherichia coli to degrade salicin and arbutin appears to be cryptic, subject to activation of the bgl genes, whereas many members of the Klebsiella genus can metabolize these sugars. We have examined the genetic basis for beta-glucoside utilization in Klebsiella aerogenes. The Klebsiella equivalents of bglG, bglB and bglR have been cloned using the genome sequence database of Klebsiella pneumoniae. Nucleotide sequencing shows that the K. aerogenes bgl genes share substantial similarities with their E. coli counterparts. The K. aerogenes bgl genes in multiple copies can also complement E. coli mutants deficient in bglG, encoding the antiterminator, and bglB, encoding the phospho-beta-glucosidase, suggesting that they are functional homologues. The regulatory region bglR of K. aerogenes shows a high degree of similarity in the sequences involved in BglG-mediated regulation. Interestingly, the regions corresponding to the negative elements present in the E. coli regulatory region show substantial divergence in K. aerogenes. The possible evolutionary implications of the results are discussed. (C) 2003 Federation of European Microbiological Societies. Published by Elsevier Science B.V. All rights reserved.
Abstract:
The paper describes a modular, unit-selection-based TTS framework, which can be used as a research bed for developing TTS in any new language, as well as for studying the effect of changing any parameter during synthesis. Using this framework, TTS has been developed for Tamil. The synthesis database consists of 1,027 phonetically rich pre-recorded sentences. The framework has already been tested for Kannada. Our TTS synthesizes intelligible and acceptably natural speech, as supported by high mean opinion scores. The framework is further optimized to suit embedded applications like mobiles and PDAs. We compressed the synthesis speech database with standard speech compression algorithms used in commercial GSM phones and evaluated the quality of the resultant synthesized sentences. Even with a highly compressed database, the synthesized output is perceptually close to that obtained with the uncompressed database. Through experiments, we explored the ambiguities in human perception when listening to Tamil phones and syllables uttered in isolation, and thus propose to exploit this misperception to substitute for missing phone contexts in the database. Listening experiments have been conducted on sentences synthesized by deliberately replacing phones with their confused counterparts.
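Unit-selection synthesis of the kind described above generally picks, for each target unit, the database unit that minimizes accumulated target and concatenation (join) costs with a Viterbi-style search. The Python sketch below shows that generic selection step only; the cost functions are placeholders, not this framework's actual features.

    # Generic Viterbi-style unit selection: for each target unit, choose the
    # candidate minimizing accumulated target cost + join cost. The cost
    # functions are supplied by the caller and are placeholders here.
    def select_units(targets, candidates, target_cost, join_cost):
        """targets: list of target specifications.
        candidates: list (or dict) mapping target index -> candidate units."""
        # best[i][c] = (cumulative cost, back-pointer) for candidate c of target i
        best = [{c: (target_cost(targets[0], c), None) for c in candidates[0]}]
        for i in range(1, len(targets)):
            layer = {}
            for c in candidates[i]:
                tc = target_cost(targets[i], c)
                prev, cost = min(
                    ((p, best[i - 1][p][0] + join_cost(p, c)) for p in candidates[i - 1]),
                    key=lambda x: x[1],
                )
                layer[c] = (cost + tc, prev)
            best.append(layer)
        # backtrack from the cheapest final candidate
        last = min(best[-1], key=lambda c: best[-1][c][0])
        path = [last]
        for i in range(len(targets) - 1, 0, -1):
            path.append(best[i][path[-1]][1])
        return list(reversed(path))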
Abstract:
An information system with user-friendly GUIs (graphical user interfaces) has been developed to maintain the flora data and generate reports for the Sharavathi River Basin. The database consists of information related to trees, herbs, shrubs and climbers. The data are based on the primary field survey and the information available in the flora of Shimoga, Karnataka, and the Hassan flora. User-friendly query options based on dichotomous keys are provided to help the user retrieve the data, while data entry options aid in updating and editing the database at the family, genus and species levels.
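A dichotomous key amounts to a binary decision tree over diagnostic characters, which is what a key-based query option traverses. The Python sketch below shows the general idea with invented characters and growth forms; it is not the Sharavathi system's actual schema.

    # Toy dichotomous key as a nested binary decision structure. The characters
    # and outcomes are illustrative only, not the Sharavathi River Basin data.
    key = {
        "question": "Is the plant woody?",
        "yes": {
            "question": "Does it have a single main trunk?",
            "yes": "tree",
            "no": "shrub",
        },
        "no": {
            "question": "Does it climb using other plants for support?",
            "yes": "climber",
            "no": "herb",
        },
    }

    def identify(node, answer):
        """Walk the key; `answer` maps each question to True/False."""
        while isinstance(node, dict):
            node = node["yes"] if answer(node["question"]) else node["no"]
        return node

    # Example: answer questions from a dict of observed characters.
    observed = {"Is the plant woody?": False,
                "Does it climb using other plants for support?": True}
    print(identify(key, lambda q: observed[q]))   # -> 'climber'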
Abstract:
This paper addresses the problem of secure path key establishment in wireless sensor networks that use the random key pre-distribution technique. Inspired by the recent proxy-based scheme in the work of Ling and Znati (2005) and Li et al. (2005), we introduce a friend-based scheme for establishing pairwise keys securely. We show that the chances of finding friends in a neighbourhood are considerably greater than those of finding proxies, leading to lower communication overhead. Further, we prove that the friend-based scheme performs better than the proxy-based scheme both in terms of resilience against node capture and in energy consumption for pairwise key establishment, making our scheme more feasible.
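For context, in random key pre-distribution each node stores a random subset (a key ring) of a large key pool, and two neighbours can set up a link key directly only if their rings intersect; otherwise a path key must be established through intermediaries, which is where proxies or friends come in. The Python sketch below shows only this basic Eschenauer-Gligor-style shared-key discovery step, with illustrative pool and ring sizes, not the friend-based protocol itself.

    # Basic key pre-distribution: each node draws a random key ring from a
    # global pool; neighbours sharing at least one key can establish a link
    # key directly. Pool and ring sizes are illustrative.
    import random

    POOL_SIZE = 1000     # size of the global key pool
    RING_SIZE = 50       # keys pre-loaded on each node

    def make_key_ring():
        return set(random.sample(range(POOL_SIZE), RING_SIZE))

    node_a, node_b = make_key_ring(), make_key_ring()
    shared = node_a & node_b
    if shared:
        print("direct link key possible, shared key ids:", sorted(shared))
    else:
        print("no shared key: a path key must be established via intermediaries "
              "(proxies or friends)")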
Abstract:
The standard quantum search algorithm lacks a feature, enjoyed by many classical algorithms, of having a fixed point, i.e. a monotonic convergence towards the solution. Here we present two variations of the quantum search algorithm which get around this limitation. The first replaces the selective inversions in the algorithm by selective phase shifts of $\frac{\pi}{3}$. The second controls the selective inversion operations using two ancilla qubits, and irreversible measurement operations on the ancilla qubits drive the starting state towards the target state. Using $q$ oracle queries, these variations reduce the probability of finding a non-target state from $\epsilon$ to $\epsilon^{2q+1}$, which is asymptotically optimal. Similar ideas can lead to robust quantum algorithms and provide conceptually new schemes for error correction.
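The $\frac{\pi}{3}$ construction mentioned above can be summarized by the standard fixed-point recursion; the display below is a generic statement written from the $\epsilon^{2q+1}$ claim in the abstract rather than from the paper's own notation.
\[
  U_{m+1} = U_{m}\, R_{s}^{\pi/3}\, U_{m}^{\dagger}\, R_{t}^{\pi/3}\, U_{m},
  \qquad
  R_{s}^{\pi/3} = I - \bigl(1 - e^{i\pi/3}\bigr)|s\rangle\langle s|,
  \quad
  R_{t}^{\pi/3} = I - \bigl(1 - e^{i\pi/3}\bigr)|t\rangle\langle t| .
\]
If $U_{m}$ maps the source state $|s\rangle$ to the target $|t\rangle$ with failure probability $\epsilon_{m}$, then $\epsilon_{m+1} = \epsilon_{m}^{3}$. Starting from $U_{0} = U$ with failure probability $\epsilon$ and counting the selective target operations as oracle queries, level $m$ uses $q_{m} = (3^{m}-1)/2$ queries and fails with probability $\epsilon^{3^{m}} = \epsilon^{2q_{m}+1}$, consistent with the $\epsilon^{2q+1}$ scaling quoted above.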
Abstract:
This paper describes the efforts at the MILE lab, IISc, to create a 100,000-word database each in Kannada and Tamil for the design and development of online handwriting recognition. The data have been collected from over 600 users in order to capture the variations in writing style. We describe features of the scripts and how the number of symbols was reduced to be able to effectively train the data for recognition. The list of words includes all the characters, Kannada and Indo-Arabic numerals, punctuation and other symbols. A semi-automated tool for the annotation of data from stroke to word level is used. It segments each word into stroke groups and also acts as a validation mechanism for segmentation. The tool displays the strokes, stroke groups and aksharas of a word and hence can be used to study the various styles of writing and delayed strokes, and for assigning quality tags to the words. The tool is currently being used for annotating Tamil and Kannada data. The output is stored in a standard XML format.
Abstract:
This paper presents a preliminary analysis of the Kannada WordNet and a set of relevant computational tools. Although the design has been inspired by the famous English WordNet, and to a certain extent by the Hindi WordNet, the unique features of the Kannada WordNet are graded antonyms and meronymy relationships, nominal as well as verbal compounding, complex verb constructions and an efficient underlying database design (designed to handle storage and display of Kannada Unicode characters). The Kannada WordNet will not only add to the sparse collection of machine-readable Kannada dictionaries, but will also give new insights into the Kannada vocabulary. It provides a sufficient interface for applications involved in Kannada machine translation, spell checking and semantic analysis.
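A WordNet-style lexical database is, at its core, a set of synsets linked by typed semantic relations, and graded antonymy can be modelled by attaching a strength to the antonym relation. The minimal Python sketch below illustrates such a structure; all field names and entries are invented (with English stand-ins for Kannada lemmas) and are not the actual Kannada WordNet design.

    # Minimal synset/relation structure in the spirit of a WordNet, with a
    # 'grade' attribute to illustrate graded antonymy. Names, fields and
    # entries are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Synset:
        synset_id: str
        lemmas: list            # word forms in the synset (Unicode strings)
        pos: str                # part of speech
        gloss: str
        relations: list = field(default_factory=list)   # (relation, target_id, grade)

        def add_relation(self, relation, target_id, grade=None):
            self.relations.append((relation, target_id, grade))

    hot = Synset("adj.0001", ["hot"], "adj", "of high temperature")
    warm = Synset("adj.0002", ["warm"], "adj", "moderately hot")
    cold = Synset("adj.0003", ["cold"], "adj", "of low temperature")

    # Graded antonymy: 'cold' is a stronger antonym of 'hot' than 'warm' is.
    hot.add_relation("antonym", cold.synset_id, grade=1.0)
    hot.add_relation("antonym", warm.synset_id, grade=0.5)

    print(hot.relations)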