880 results for library databases
Abstract:
Objective: From Census data, to document the distribution of general practitioners in Australia and to estimate the number of general practitioners needed to achieve an equitable distribution accounting for community health need. Methods: Data on the location of general practitioners, population size and crude mortality by statistical division (SD) were obtained from the Australian Bureau of Statistics. The number of patients per general practitioner by SD was calculated and plotted. Using crude mortality to estimate community health need, the ratio of general practitioners per person to mortality was calculated for all of Australia and for each SD (the Robin Hood Index). From this, the number of general practitioners needed to achieve equity was calculated. Results: In all, 26,290 general practitioners were identified in 57 SDs. The mean number of people per general practitioner is 707, ranging from 551 to 1887. Capital city SDs have the most favourable ratios. The Robin Hood Index for Australia is 1, and ranges from 0.32 (relatively under-served) to 2.46 (relatively over-served). Twelve SDs (21%), including all capital cities and 65% of all Australians, have a Robin Hood Index > 1. To achieve equity per capita, 2489 more general practitioners (10% of the current workforce) are needed. To achieve equity by the Robin Hood Index, 3351 (13% of the current workforce) are needed. Conclusions: The distribution of general practitioners in Australia is skewed. Non-metropolitan areas are relatively under-served. Census data and the Robin Hood Index could provide a simple means of identifying areas of need in Australia.
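The Robin Hood Index calculation this abstract describes can be sketched roughly as follows. Only the national mean of 707 people per general practitioner comes from the text; the mortality figure and the example statistical division are invented for illustration.

```python
# Rough sketch of the Robin Hood Index described in the abstract above.
# Only the national mean of 707 people per GP comes from the text; the
# mortality figure and the example SD are invented for illustration.

national_gps_per_person = 1 / 707   # Australia-wide GPs per capita (from text)
national_mortality = 0.007          # assumed crude deaths per person per year

# National GPs-per-person:mortality ratio; by construction its index is 1.
national_ratio = national_gps_per_person / national_mortality

def robin_hood_index(gps, population, deaths):
    """Index > 1: relatively over-served; < 1: relatively under-served."""
    sd_ratio = (gps / population) / (deaths / population)  # simplifies to gps / deaths
    return sd_ratio / national_ratio

# A hypothetical statistical division:
idx = robin_hood_index(gps=120, population=150_000, deaths=1_200)

# GPs this SD would need for its index to reach 1, and the shortfall:
gps_needed = 1_200 * national_ratio
shortfall = gps_needed - 120
```

Summing such shortfalls over all under-served SDs is, in effect, how a workforce-redistribution figure like the abstract's 3351 extra general practitioners would be obtained.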
Abstract:
We have isolated a family of insect-selective neurotoxins from the venom of the Australian funnel-web spider that appear to be good candidates for biopesticide engineering. These peptides, which we have named the Janus-faced atracotoxins (J-ACTXs), each contain 36 or 37 residues, with four disulfide bridges, and they show no homology to any sequences in the protein/DNA databases. The three-dimensional structure of one of these toxins reveals an extremely rare vicinal disulfide bridge that we demonstrate to be critical for insecticidal activity. We propose that J-ACTX comprises an ancestral protein fold that we refer to as the disulfide-directed beta-hairpin.
Abstract:
An extensive research program focused on the characterization of various metallurgical complex smelting and coal combustion slags is being undertaken. The research combines both experimental and thermodynamic modeling studies. The approach is illustrated by work on the PbO-ZnO-Al2O3-FeO-Fe2O3-CaO-SiO2 system. Experimental measurements of the liquidus and solidus have been undertaken under oxidizing and reducing conditions using equilibration, quenching, and electron probe X-ray microanalysis. The experimental program has been planned so as to obtain data for thermodynamic model development as well as for pseudo-ternary liquidus diagrams that can be used directly by process operators. Thermodynamic modeling has been carried out using the computer system FACT, which contains thermodynamic databases with over 5000 compounds and evaluated solution models. The FACT package is used for the calculation of multiphase equilibria in multicomponent systems of industrial interest. A modified quasi-chemical solution model is used for the liquid slag phase. New optimizations have been carried out, which significantly improve the accuracy of the thermodynamic models for lead/zinc smelting and coal combustion processes. Examples of experimentally determined and calculated liquidus diagrams are presented. These examples provide information of direct relevance to various metallurgical smelting and coal combustion processes.
Abstract:
Some diverse indicators used to measure the innovation process are considered. They include those with an aggregate, and often national, focus, which rely on data from scientific publications, patents, R&D expenditures, etc. Others have a firm-level perspective, relying primarily on surveys or case studies. Also included are indicators derived from specialized databases, or from consensual agreements reached through foresight exercises. There is an obvious need for greater integration of the various approaches to capture more effectively the richness of available data and better reflect the reality of innovation. The focus for such integration could be the area of technology strategy, which integrates the diverse scientific, technological, and innovation activities of firms within their operating environments; an improved capacity to measure it has implications for policy-makers, managers and researchers.
Abstract:
The World Wide Web (WWW) is useful for distributing scientific data. Most existing web data resources organize their information either in structured flat files or relational databases with basic retrieval capabilities. For databases with one or a few simple relations, these approaches are successful, but they can be cumbersome when there is a data model involving multiple relations between complex data. We believe that knowledge-based resources offer a solution in these cases. Knowledge bases have explicit declarations of the concepts in the domain, along with the relations between them. They are usually organized hierarchically, and provide a global data model with a controlled vocabulary. We have created the OWEB architecture for building online scientific data resources using knowledge bases. OWEB provides a shell for structuring data, providing secure and shared access, and creating computational modules for processing and displaying data. In this paper, we describe the translation of the online immunological database MHCPEP into an OWEB system called MHCWeb. This effort involved building a conceptual model for the data, creating a controlled terminology for the legal values for different types of data, and then translating the original data into the new structure. The OWEB environment allows for flexible access to the data by both users and computer programs.
Abstract:
The explosive growth in biotechnology combined with major advances in information technology has the potential to radically transform immunology in the postgenomics era. Not only do we now have ready access to vast quantities of existing data, but new data with relevance to immunology are being accumulated at an exponential rate. Resources for computational immunology include biological databases and methods for data extraction, comparison, analysis and interpretation. Publicly accessible biological databases of relevance to immunologists number in the hundreds and are growing daily. The ability to efficiently extract and analyse information from these databases is vital for efficient immunology research. Most importantly, a new generation of computational immunology tools enables modelling of peptide transport by the transporter associated with antigen processing (TAP), modelling of antibody binding sites, identification of allergenic motifs and modelling of T-cell receptor serial triggering.
Abstract:
There has been a debate on whether or not the incidence of schizophrenia varies across time and place. In order to optimise the evidence upon which this debate is based, we have undertaken a systematic review of the literature. In this paper we provide an overview of the methods of the review and a preliminary analysis of the studies identified to date. Electronic databases (Medline, PsycINFO, Embase, LILACS) were systematically searched for articles published between January 1965 and December 2001. The search terms were: (schizo* OR psycho*) AND (incidence OR prevalence). References were also identified from review articles, reference lists and by writing to authors. To date we have identified 137 papers drawn from 33 nations; 37 papers in languages other than English await translation. The currently included papers have generated 1413 different items of rate information. In order to analyse these data we have applied several sequential filters in order to identify (a) non-overlapping data, (b) birth-cohort versus non-cohort studies, (c) overall and sex-specific rates, (d) diagnostic criteria, (e) age ranges, (f) epoch of study, and (g) data on migrant or other special-interest groups. In addition, we will examine the impact of urbanicity of site, age and/or sex standardization, and quality score on the incidence rates. The various discrete incidence rates will be presented graphically and the impact of the various filters on these rates will be inspected using meta-analytic techniques. The use of meta-analysis may help elucidate the epidemiological landscape with respect to the incidence of schizophrenia and aid in the generation of new hypotheses. Acknowledgements: The Stanley Medical Research Institute supported this project.
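The sequential-filter approach this abstract describes can be sketched in miniature. The record fields and predicates below are assumptions for illustration, not the review's actual schema.

```python
# Miniature sketch of applying sequential filters to rate records, in the
# spirit of filters (a)-(g) above. Field names and values are invented.

records = [
    {"site": "A", "design": "birth-cohort", "sex": "all",  "rate": 0.20},
    {"site": "A", "design": "non-cohort",   "sex": "all",  "rate": 0.22},
    {"site": "B", "design": "non-cohort",   "sex": "male", "rate": 0.31},
    {"site": "B", "design": "non-cohort",   "sex": "all",  "rate": 0.25},
]

def apply_filters(items, *predicates):
    """Apply each predicate in sequence, keeping only matching records."""
    for keep in predicates:
        items = [r for r in items if keep(r)]
    return items

# e.g. restrict to non-cohort studies (filter b) reporting overall rates (filter c):
non_cohort_overall = apply_filters(
    records,
    lambda r: r["design"] == "non-cohort",
    lambda r: r["sex"] == "all",
)
```

Each additional filter narrows the pool of discrete rates that feed the meta-analysis, which is why the abstract inspects the impact of each filter separately.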
Abstract:
Allergies are a major cause of chronic ill health in industrialised countries, with the incidence of reported cases steadily increasing. This Research Focus details how bioinformatics is transforming the field of allergy by providing databases for the management of allergen data, algorithms for the characterisation of allergic cross-reactivity, structural motifs and B- and T-cell epitopes, tools for the prediction of allergenicity, and techniques for genomic and proteomic analysis of allergens.
Abstract:
Human N-acetyltransferase Type I (NAT1) catalyses the acetylation of many aromatic amine and hydrazine compounds and it has been implicated in the catabolism of folic acid. The enzyme is widely expressed in the body, although there are considerable differences in the level of activity between tissues. A search of the mRNA databases revealed the presence of several NAT1 transcripts in human tissue that appear to be derived from different promoters. Because little is known about NAT1 gene regulation, the present study was undertaken to characterize one of the putative promoter sequences of the NAT1 gene located just upstream of the coding region. We show with reverse-transcriptase PCR that mRNA transcribed from this promoter (Promoter 1) is present in a variety of human cell-lines, but not in quiescent peripheral blood mononuclear cells. Using deletion mutant constructs, we identified a 20 bp sequence located 245 bases upstream of the translation start site which was sufficient for basal NAT1 expression. It comprised an AP-1 (activator protein 1)-binding site, flanked on either side by a TCATT motif. Mutational analysis showed that the AP-1 site and the 3' TCATT sequence were necessary for gene expression, whereas the 5' TCATT appeared to attenuate promoter activity. Electromobility shift assays revealed two specific bands made up of complexes of c-Fos/Fra, c-Jun, YY-1 (Yin and Yang 1) and possibly Oct-1. PMA treatment enhanced expression from the NAT1 promoter via the AP-1-binding site. Furthermore, in peripheral blood mononuclear cells, PMA increased endogenous NAT1 activity and induced mRNA expression from Promoter 1, suggesting that it is functional in vivo.
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes.
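As a concrete illustration of one method mentioned above, a quantitative matrix (position-specific scoring matrix) predictor can be sketched as follows. The matrix values, positions and peptides are invented and do not correspond to any real MHC allele.

```python
# Toy quantitative-matrix predictor for MHC-binding peptides. Real class I
# matrices typically score 9-mers; three positions are used here for brevity,
# and all numbers are invented for illustration.

pssm = [
    {"A": 0.5,  "L": 1.2, "K": -0.3},  # position 1 score contributions
    {"A": 0.1,  "L": 0.8, "K": 0.4},   # position 2
    {"A": -0.2, "L": 1.5, "K": -1.0},  # position 3
]

def score_peptide(peptide, matrix, default=0.0):
    """Sum per-position scores; residues absent from a column score `default`."""
    return sum(col.get(res, default) for res, col in zip(peptide, matrix))

# Rank candidate peptides by predicted binding score, highest first:
candidates = ["LLL", "KAK", "ALK"]
ranked = sorted(candidates, key=lambda p: score_peptide(p, pssm), reverse=True)
```

In practice such a matrix would be trained on measured binding data and validated on held-out peptides, in line with the testing standards the abstract calls for.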
Abstract:
Antibody phage display libraries are a useful tool in proteomic analyses. This study evaluated a recombinant antibody library for the identification of sex-specific proteins on the sperm cell surface. The Griffin.1 library was used to produce phage antibodies capable of recognizing membrane proteins from Nelore sperm cells. After production of soluble monoclonal scFv, clones were screened on Simmental sperm cells by flow cytometry and those that bound to 40-60% of cells were selected. These clones were re-analyzed using Nelore sperm cells, and all clones bound to 40-60% of cells. Positive clones were submitted to a binding assay against male and female bovine leukocytes by flow cytometry, and one clone preferentially bound to male cells. The results indicate that phage display antibodies are an alternative method for the identification of molecular markers on sperm cells.
Abstract:
There is a considerable body of new information on Gynecology and Obstetrics. To aid in keeping gynecologists updated, renowned periodicals publish review articles. Review articles enable the reader to obtain the best evidence for clinical or research issues from several individual articles. This enables the professional to make clinical decisions in the light of current knowledge. The different types of reviews, and the databases that may be used in their elaboration, are discussed in the present article. It is suggested that future reviews on Gynecology and Obstetrics include articles published in languages other than English and that a larger number of databases be searched. Thus, reviews will be not only more inclusive but also more representative of the international literature.
Abstract:
With the proliferation of relational database programs for PCs and other platforms, many business end-users are creating, maintaining, and querying their own databases. More importantly, business end-users use the output of these queries as the basis for operational, tactical, and strategic decisions. Inaccurate data reduce the expected quality of these decisions. Implementing various input validation controls, including higher levels of normalisation, can reduce the number of data anomalies entering the databases. Even in well-maintained databases, however, data anomalies will still accumulate. To improve the quality of data, databases can be queried periodically to locate and correct anomalies. This paper reports the results of two experiments that investigated the effects of different data structures on business end-users' ability to detect data anomalies in a relational database. The results demonstrate that both unnormalised structures and higher levels of normalisation lower the effectiveness and efficiency of queries relative to first normal form. First normal form databases appear to provide the most effective and efficient data structure for business end-users formulating queries to detect data anomalies.
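The kind of anomaly-locating query the paper studies can be sketched against a first-normal-form table. The schema and data below are invented for illustration, not taken from the experiments.

```python
# Sketch of a periodic anomaly-detection query against a first-normal-form
# table. Schema and rows are invented; the point is that in 1NF each fact
# sits in its own row, so inconsistencies surface with a single GROUP BY.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE orders (
    order_id INTEGER, customer TEXT, city TEXT)""")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, "Acme",  "Sydney"),
    (2, "Acme",  "Sydney"),
    (3, "Acme",  "Brisbane"),   # anomaly: same customer, different city
    (4, "Birch", "Melbourne"),
])

# Find customers recorded with more than one city:
anomalies = conn.execute("""
    SELECT customer, COUNT(DISTINCT city) AS cities
    FROM orders
    GROUP BY customer
    HAVING COUNT(DISTINCT city) > 1
""").fetchall()
```

Against an unnormalised table (e.g. cities packed into one repeating-group column) or a highly decomposed schema (the same check spread over several joins), the equivalent query is harder to formulate, which is consistent with the paper's finding that 1NF best supports end-user anomaly detection.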