946 results for Databases as Topic
Abstract:
In recent years, domestic business-to-business barter has become institutionalized as an alternative marketing exchange system in Australia and elsewhere. This article reports the findings of a survey of 164 members of Australia's largest trade exchange, Bartercard. There are few, if any, published empirical studies on this topic, so this study is exploratory. Most firms surveyed are small firms in the services sectors. Although Bartercard has an extensive membership, trading within the system is limited, with most members trading less than once per week and with barter transactions contributing less than 5% of their annual gross sales. The main benefits of membership include new customers, increased sales and networking opportunities. The main limitations include the limited functionality of the trade dollar, limited trading opportunities, and practical trading difficulties. In selling, there appears to be no differential between cash and trade prices, whereas trade dollars are discounted in purchasing. Participants acknowledge that business-to-business barter will remain and grow regardless of cyclical macroeconomic changes. (C) 1998 Elsevier Science Inc.
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases with the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcoming this problem is to preserve spatial locality in task decomposition. We show in this paper that near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
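The filter-and-refine framework and the duplication problem described above can be illustrated with a small sketch. The following Python is a toy under assumed names (a Rect type, a grid cell size, a pass-through refine predicate), not the paper's algorithms; it shows how multi-assignment spreads an object across several grid partitions, each of which could be processed in parallel, and why candidate pairs must be deduplicated.

```python
# Toy grid partitioning plus filter-and-refine spatial join (illustrative only).
from collections import defaultdict
from typing import NamedTuple

class Rect(NamedTuple):
    id: int
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def cells_for(r: Rect, cell: float):
    """Yield every grid cell the rectangle's MBR touches (multi-assignment)."""
    for cx in range(int(r.xmin // cell), int(r.xmax // cell) + 1):
        for cy in range(int(r.ymin // cell), int(r.ymax // cell) + 1):
            yield (cx, cy)

def partition(objs, cell):
    parts = defaultdict(list)
    for r in objs:
        for c in cells_for(r, cell):
            parts[c].append(r)  # the same object may land in several partitions
    return parts

def mbr_overlap(a: Rect, b: Rect) -> bool:
    return a.xmin <= b.xmax and b.xmin <= a.xmax and a.ymin <= b.ymax and b.ymin <= a.ymax

def spatial_join(left, right, cell=10.0, refine=lambda a, b: True):
    """Filter step: pair up objects whose MBRs overlap within a shared partition.
    Refine step: 'refine' stands in for the exact geometric predicate.
    The seen-set removes duplicate pairs produced by multi-assignment."""
    lp, rp = partition(left, cell), partition(right, cell)
    seen, result = set(), []
    for c, ls in lp.items():          # each cell's work is independent -> parallelisable
        for a in ls:
            for b in rp.get(c, []):
                if (a.id, b.id) in seen:
                    continue          # duplicate candidate caused by multi-assignment
                seen.add((a.id, b.id))
                if mbr_overlap(a, b) and refine(a, b):
                    result.append((a.id, b.id))
    return result
```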
Abstract:
Objective: To determine the incidence of interval cancers occurring in the first 12 months after mammographic screening at a mammographic screening service. Design: Retrospective analysis of data obtained by cross-matching the screening service and New South Wales Central Cancer Registry databases. Setting: The Central & Eastern Sydney Service of BreastScreen NSW. Participants: Women aged 40-69 years at first screen who attended for their first or second screen between 1 March 1988 and 31 December 1992. Main outcome measures: Interval-cancer rates per 10 000 screens and as a proportion of the underlying incidence of breast cancer (as estimated by the underlying rate in the total NSW population). Results: The 12-month interval-cancer incidence per 10 000 screens was 4.17 for the 40-49 years age group (95% confidence interval [CI], 1.35-9.73) and 4.64 for the 50-69 years age group (95% CI, 2.47-7.94). Proportional incidence rates were 30.1% for the 40-49 years age group (95% CI, 9.8-70.3) and 22% for the 50-69 years age group (95% CI, 11.7-37.7). There was no significant difference between the proportional incidence rate for the 50-69 years age group for the Central & Eastern Sydney Service and those of major successful overseas screening trials. Conclusion: Screening quality was acceptable and should result in a significant mortality reduction in the screened population. Given the small number of cancers involved, comparison of interval-cancer statistics of mammographic screening programs with trials requires age-specific or age-adjusted data, and consideration of the confidence intervals of both program and trial data.
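As a purely illustrative aid, the two summary measures used above reduce to simple arithmetic; the counts in this sketch are placeholders, not the study's data, and the paper's exact confidence-interval method is not reproduced.

```python
# Illustrative arithmetic only: placeholder counts, no confidence intervals.
def rate_per_10000_screens(interval_cancers: int, screens: int) -> float:
    """Interval-cancer incidence expressed per 10 000 screens."""
    return 10000.0 * interval_cancers / screens

def proportional_incidence_pct(interval_cancers: int, expected_cancers: float) -> float:
    """Interval cancers as a percentage of the cancers expected from the
    underlying (unscreened) incidence over the same follow-up."""
    return 100.0 * interval_cancers / expected_cancers

print(rate_per_10000_screens(5, 10000))               # 5 cancers in 10 000 screens -> 5.0
print(round(proportional_incidence_pct(5, 20.0), 1))  # -> 25.0% of expected cancers
```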
Abstract:
In response to methodological concerns associated with previous research into the educational characteristics of students with high or low self-concept, the topic was re-examined using a significantly more representative sample and a contemporary self-concept measure. From an initial screening of 515 preadolescent, coeducational students in 18 schools, students significantly high or low in self-concept were compared using standardized tests in reading, spelling, and mathematics, and teacher interviews to determine students' academic and nonacademic characteristics. The teachers were not informed of the self-concept status of the students. Compared to students with low self-concept, students with high self-concept were rated by teachers as being more popular, cooperative, and persistent in class, showed greater leadership, were lower in anxiety, had more supportive families, and had higher teacher expectations for their future success. Teachers observed that students with low self-concept were quiet and withdrawn, while students with high self-concept were talkative and more dominating with peers. Students with lower self-concepts were also lower than their peers in reading, spelling, and mathematical abilities. The findings support the notion that there is an interactive relationship between self-concept and achievement. (C) 1998 John Wiley & Sons, Inc.
Abstract:
The task of segmenting cell nuclei from cytoplasm in conventional Papanicolaou (Pap) stained cervical cell images is a classical image analysis problem which may prove to be crucial to the development of successful systems which automate the analysis of Pap smears for detection of cancer of the cervix. Although simple thresholding techniques will extract the nucleus in some cases, accurate unsupervised segmentation of very large image databases is elusive. Conventional active contour models as introduced by Kass, Witkin and Terzopoulos (1988) offer a number of advantages in this application, but suffer from the well-known drawbacks of initialisation and minimisation. Here we show that a Viterbi search-based dual active contour algorithm is able to overcome many of these problems and achieve over 99% accurate segmentation on a database of 20 130 Pap stained cell images. (C) 1998 Elsevier Science B.V. All rights reserved.
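For readers unfamiliar with Viterbi-style contour extraction, the following toy dynamic-programming search over polar-sampled radii around an assumed nucleus centre conveys the idea; it is not the dual active contour algorithm of the paper, and the function and parameter names are invented for illustration.

```python
# Toy Viterbi-style (dynamic programming) radial contour search on an edge map.
import numpy as np

def dp_radial_contour(edge_map, cy, cx, n_angles=36, max_r=40, smooth=2.0):
    """Pick one radius per angle that maximises edge strength while penalising
    radius jumps between neighbouring angles (closure constraint ignored here)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.arange(3, max_r)
    # Emission cost: negative edge strength sampled along each ray.
    cost = np.zeros((n_angles, len(radii)))
    for i, a in enumerate(angles):
        ys = np.clip((cy + radii * np.sin(a)).astype(int), 0, edge_map.shape[0] - 1)
        xs = np.clip((cx + radii * np.cos(a)).astype(int), 0, edge_map.shape[1] - 1)
        cost[i] = -edge_map[ys, xs]
    # Viterbi recursion over angles; transition cost penalises |r - r_prev|.
    score = cost[0].copy()
    back = np.zeros((n_angles, len(radii)), dtype=int)
    for i in range(1, n_angles):
        trans = smooth * np.abs(radii[:, None] - radii[None, :])  # prev x curr
        total = score[:, None] + trans
        back[i] = np.argmin(total, axis=0)
        score = total[back[i], np.arange(len(radii))] + cost[i]
    # Trace back the best radius sequence.
    best = [int(np.argmin(score))]
    for i in range(n_angles - 1, 0, -1):
        best.append(int(back[i][best[-1]]))
    best.reverse()
    return [(float(radii[r]), float(a)) for r, a in zip(best, angles)]
```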
Abstract:
Motor vehicle crashes are the leading cause of injury death for international tourists, which makes road safety an important issue for tourism authorities. Unfortunately, as in other areas of tourist health, the common response from the travel and tourism industry is to remain silent about the problem and to leave any mishaps in the hands of insurers. At the same time, but for different reasons, international tourists are not usually targeted for road safety initiatives by transport authorities. Given the considerable 'hidden' costs associated with international tourists and motor vehicle crashes, the topic should be of concern to both tourism and transport groups. This paper examines issues concerned with driving in unfamiliar surroundings for international visitors in Australia, and proposes a national research and management programme to guide policy and planning in the area. (C) 1999 Elsevier Science Ltd. All rights reserved.
Abstract:
Objective: To use Census data to document the distribution of general practitioners in Australia and to estimate the number of general practitioners needed to achieve an equitable distribution that accounts for community health need. Methods: Data on the location of general practitioners, population size and crude mortality by statistical division (SD) were obtained from the Australian Bureau of Statistics. The number of patients per general practitioner in each SD was calculated and plotted. Using crude mortality to estimate community health need, the ratio of general practitioners per person to mortality was calculated for all of Australia and for each SD (the Robin Hood Index). From this, the number of general practitioners needed to achieve equity was calculated. Results: In all, 26,290 general practitioners were identified in 57 SDs. The mean number of people per general practitioner is 707, ranging from 551 to 1887. Capital city SDs have the most favourable ratios. The Robin Hood Index for Australia is 1, and ranges from 0.32 (relatively under-served) to 2.46 (relatively over-served). Twelve SDs (21%), including all capital cities and 65% of all Australians, have a Robin Hood Index > 1. To achieve equity per capita, 2489 more general practitioners (10% of the current workforce) are needed; to achieve equity by the Robin Hood Index, 3351 (13% of the current workforce) are needed. Conclusions: The distribution of general practitioners in Australia is skewed. Nonmetropolitan areas are relatively under-served. Census data and the Robin Hood Index could provide a simple means of identifying areas of need in Australia.
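A minimal sketch of the ratio described above, assuming the index is the GPs-per-person:mortality ratio normalised so that the national figure equals 1; the function names and the equity calculation are inferred from the abstract and may differ from the authors' exact construction.

```python
# Illustrative Robin Hood Index calculation (names and normalisation assumed).
def robin_hood_index(gps, pop, deaths, nat_gps, nat_pop, nat_deaths):
    """GPs per person relative to crude mortality, normalised so Australia = 1."""
    local = (gps / pop) / (deaths / pop)          # simplifies to gps / deaths
    national = (nat_gps / nat_pop) / (nat_deaths / nat_pop)
    return local / national

def extra_gps_for_equity(gps, deaths, nat_gps, nat_deaths):
    """Additional GPs an under-served division would need to raise its index to 1."""
    national = nat_gps / nat_deaths
    return max(0.0, national * deaths - gps)

# e.g. a division with 100 GPs and 400 deaths, in a nation with 26290 GPs and 70000 deaths
print(round(robin_hood_index(100, 80000, 400, 26290, 18600000, 70000), 2))
print(round(extra_gps_for_equity(100, 400, 26290, 70000), 1))
```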
Abstract:
We have isolated a family of insect-selective neurotoxins from the venom of the Australian funnel-web spider that appear to be good candidates for biopesticide engineering. These peptides, which we have named the Janus-faced atracotoxins (J-ACTXs), each contain 36 or 37 residues, with four disulfide bridges, and they show no homology to any sequences in the protein/DNA databases. The three-dimensional structure of one of these toxins reveals an extremely rare vicinal disulfide bridge that we demonstrate to be critical for insecticidal activity. We propose that J-ACTX comprises an ancestral protein fold that we refer to as the disulfide-directed beta-hairpin.
Abstract:
An extensive research program focused on the characterization of various complex metallurgical smelting and coal combustion slags is being undertaken. The research combines both experimental and thermodynamic modeling studies. The approach is illustrated by work on the PbO-ZnO-Al2O3-FeO-Fe2O3-CaO-SiO2 system. Experimental measurements of the liquidus and solidus have been undertaken under oxidizing and reducing conditions using equilibration, quenching, and electron probe X-ray microanalysis. The experimental program has been planned so as to obtain data for thermodynamic model development as well as for pseudo-ternary liquidus diagrams that can be used directly by process operators. Thermodynamic modeling has been carried out using the computer system FACT, which contains thermodynamic databases with over 5000 compounds and evaluated solution models. The FACT package is used for the calculation of multiphase equilibria in multicomponent systems of industrial interest. A modified quasi-chemical solution model is used for the liquid slag phase. New optimizations have been carried out, which significantly improve the accuracy of the thermodynamic models for lead/zinc smelting and coal combustion processes. Examples of experimentally determined and calculated liquidus diagrams are presented. These examples provide information of direct relevance to various metallurgical smelting and coal combustion processes.
Abstract:
Some of the diverse indicators used to measure the innovation process are considered. They include those with an aggregate, and often national, focus, which rely on data from scientific publications, patents, R&D expenditures, etc. Others have a firm-level perspective, relying primarily on surveys or case studies. Also included are indicators derived from specialized databases, or from consensual agreements reached through foresight exercises. There is an obvious need for greater integration of the various approaches to capture more effectively the richness of available data and better reflect the reality of innovation. The focus for such integration could be in the area of technology strategy, which integrates the diverse scientific, technological, and innovation activities of firms within their operating environments; improved capacity to measure it has implications for policy-makers, managers and researchers.
Abstract:
Background: The Perceived Need for Care Questionnaire (PNCQ) was designed for the Australian National Survey of Mental Health and Wellbeing. The PNCQ complemented the collection of data on diagnosis and disability with the survey participants' perceptions of their needs for mental health care and the meeting of those needs. The four-stage design of the PNCQ mimics a conversational exploration of the topic of perceived needs. Five categories of perceived need are each assigned to one of four levels of perceived need (no need, unmet need, partially met need and met need). For unmet need and partially met need, information on barriers to care is collected. Methods: Inter-rater reliabilities of perceived needs assessed by the PNCQ were examined in a study of 145 anxiety clinic attenders. Construct validity of these items was tested, using a multi-trait multi-method approach and hypotheses regarding extreme groups, in a study with a sample of 51 general practice and community psychiatric service patients. Results: The instrument is brief to administer and has proved feasible for use in various settings. Inter-rater reliabilities for major categories, measured by the kappa statistic, exceeded 0.60 in most cases; for the summary category of all perceived needs, inter-rater reliability was 0.62. The multi-trait multi-method approach lent support to the construct validity of the instrument, as did findings in extreme groups. Conclusions: The PNCQ shows acceptable feasibility, reliability and validity, adding to the range of assessment tools available for epidemiological and health services research.
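As a small aside, the kappa statistic used above for inter-rater reliability can be computed in a few lines; the example ratings below are invented, not PNCQ data.

```python
# Cohen's kappa for two raters over the same cases (illustrative data only).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum((pa[c] / n) * (pb[c] / n) for c in set(pa) | set(pb))
    return (observed - expected) / (1.0 - expected)

# e.g. two raters assigning each of 6 cases to one of the four PNCQ levels
a = ["met", "unmet", "met", "no need", "partially met", "met"]
b = ["met", "unmet", "no need", "no need", "partially met", "met"]
print(round(cohens_kappa(a, b), 2))
```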
Abstract:
The World Wide Web (WWW) is useful for distributing scientific data. Most existing web data resources organize their information either in structured flat files or in relational databases with basic retrieval capabilities. For databases with one or a few simple relations, these approaches are successful, but they can be cumbersome when there is a data model involving multiple relations between complex data. We believe that knowledge-based resources offer a solution in these cases. Knowledge bases have explicit declarations of the concepts in the domain, along with the relations between them. They are usually organized hierarchically, and provide a global data model with a controlled vocabulary. We have created the OWEB architecture for building online scientific data resources using knowledge bases. OWEB provides a shell for structuring data, providing secure and shared access, and creating computational modules for processing and displaying data. In this paper, we describe the translation of the online immunological database MHCPEP into an OWEB system called MHCWeb. This effort involved building a conceptual model for the data, creating a controlled terminology for the legal values of different types of data, and then translating the original data into the new structure. The OWEB environment allows for flexible access to the data by both users and computer programs.
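A hypothetical miniature of the kind of knowledge-base structure described above (explicit concepts, hierarchical organisation, attribute values restricted to a controlled vocabulary); the class and field names are illustrative and are not the OWEB or MHCWeb schema.

```python
# Toy knowledge-base concept with a controlled vocabulary (illustrative names only).
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    parent: "Concept | None" = None
    allowed_values: dict[str, set[str]] = field(default_factory=dict)  # controlled vocabulary
    instances: list[dict] = field(default_factory=list)

    def add_instance(self, **attrs):
        """Reject attribute values outside the declared controlled vocabulary."""
        for key, value in attrs.items():
            vocab = self.allowed_values.get(key)
            if vocab is not None and value not in vocab:
                raise ValueError(f"{value!r} is not in the controlled vocabulary for {key!r}")
        self.instances.append(attrs)

# e.g. a peptide-binding record whose MHC allele must come from a fixed vocabulary
binding = Concept("PeptideBinding",
                  allowed_values={"mhc_allele": {"HLA-A*02:01", "H-2Kb"}})
binding.add_instance(sequence="SIINFEKL", mhc_allele="H-2Kb")
```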
Abstract:
The explosive growth in biotechnology combined with major advances in information technology has the potential to radically transform immunology in the postgenomics era. Not only do we now have ready access to vast quantities of existing data, but new data with relevance to immunology are being accumulated at an exponential rate. Resources for computational immunology include biological databases and methods for data extraction, comparison, analysis and interpretation. Publicly accessible biological databases of relevance to immunologists number in the hundreds and are growing daily. The ability to efficiently extract and analyse information from these databases is vital for efficient immunology research. Most importantly, a new generation of computational immunology tools enables modelling of peptide transport by the transporter associated with antigen processing (TAP), modelling of antibody binding sites, identification of allergenic motifs and modelling of T-cell receptor serial triggering.
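As one concrete, if simplified, illustration of the tasks listed above, a motif scan over a protein sequence can be written in a few lines; the pattern below is a made-up placeholder rather than a published allergenic motif, and no real database is queried.

```python
# Toy motif scan over a protein sequence (placeholder pattern, illustration only).
import re

HYPOTHETICAL_MOTIF = re.compile(r"C.{2,4}C")   # invented pattern, not a real allergen motif

def find_motifs(sequence: str, pattern=HYPOTHETICAL_MOTIF):
    """Return (start, matched_substring) pairs for every motif occurrence."""
    return [(m.start(), m.group()) for m in pattern.finditer(sequence)]

print(find_motifs("MKTCAALCGGSTCQQC"))
```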
Abstract:
There has been debate on whether or not the incidence of schizophrenia varies across time and place. In order to optimise the evidence upon which this debate is based, we have undertaken a systematic review of the literature. In this paper we provide an overview of the methods of the review and a preliminary analysis of the studies identified to date. Electronic databases (Medline, PsycINFO, Embase, LILACS) were systematically searched for articles published between January 1965 and December 2001. The search terms were: (schizo* OR psycho*) AND (incidence OR prevalence). References were also identified from review articles and reference lists, and by writing to authors. To date we have identified 137 papers drawn from 33 nations; 37 papers in languages other than English await translation. The currently included papers have generated 1413 different items of rate information. In order to analyse these data we have applied several sequential filters to identify (a) non-overlapping data, (b) birth cohort versus non-cohort studies, (c) overall and sex-specific rates, (d) diagnostic criteria, (e) age ranges, (f) epoch of study, and (g) data on migrant or other special interest groups. In addition, we will examine the impact of urbanicity of site, age and/or sex standardization, and quality score on the incidence rates. The various discrete incidence rates will be presented graphically and the impact of the various filters on these rates will be inspected using meta-analytic techniques. The use of meta-analysis may help elucidate the epidemiological landscape with respect to the incidence of schizophrenia and aid in the generation of new hypotheses. Acknowledgements: The Stanley Medical Research Institute supported this project.
Abstract:
Allergies are a major cause of chronic ill health in industrialised countries, with the incidence of reported cases steadily increasing. This Research Focus details how bioinformatics is transforming the field of allergy by providing databases for management of allergen data, algorithms for characterisation of allergic cross-reactivity, structural motifs and B- and T-cell epitopes, tools for prediction of allergenicity, and techniques for genomic and proteomic analysis of allergens.