895 results for Integrated Database
Abstract:
Medication data retrieved from Australian Repatriation Pharmaceutical Benefits Scheme (RPBS) claims for 44 veterans residing in nursing homes and Pharmaceutical Benefits Scheme (PBS) claims for 898 nursing home residents were compared with medication data from nursing home records to determine the optimal time interval for retrieving claims data and its validity. Optimal matching was achieved using 12 weeks of RPBS claims data, with 60% of medications in the RPBS claims located in nursing home administration records, and 78% of medications administered to nursing home residents identified in RPBS claims. In comparison, 48% of medications administered to nursing home residents could be found in 12 weeks of PBS data, and 56% of medications present in PBS claims could be matched with nursing home administration records. RPBS claims data were superior to PBS data, due to the larger number of scheduled items available to veterans and the veteran's file number, which acts as a unique identifier. These findings should be taken into account when using prescription claims data for medication histories, prescriber feedback, drug utilisation, intervention, or epidemiological studies. (C) 2001 Elsevier Science Inc. All rights reserved.
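The two percentages reported for each scheme are directional agreement rates: one takes the claims data as the test and the administration records as the reference, and the other reverses the roles. A minimal sketch of how such rates can be computed from matched medication lists follows; the drug codes and the match-on-code rule are illustrative assumptions, not the study's actual linkage method.

    # Illustrative only: directional agreement between claims data and
    # nursing home administration records, matching on a drug code.
    def agreement_rates(claims, administered):
        claims_set, admin_set = set(claims), set(administered)
        matched = claims_set & admin_set
        # share of claimed medications found in administration records
        claims_in_records = len(matched) / len(claims_set)
        # share of administered medications captured by the claims data
        administered_in_claims = len(matched) / len(admin_set)
        return claims_in_records, administered_in_claims

    # hypothetical 12-week extracts for one resident (ATC-style codes)
    rpbs_claims = ["C09AA02", "N02BE01", "A02BC01", "C07AB02", "B01AC06"]
    administered = ["C09AA02", "N02BE01", "A02BC01", "N05BA04"]
    print(agreement_rates(rpbs_claims, administered))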
Abstract:
The cost and risk associated with mineral exploration in Australia increase significantly as companies move into deeper regolith-covered terrain. The ability to map the bedrock and the depth of weathering within an area has the potential to decrease this risk and increase the effectiveness of exploration programs. This paper is the second in a trilogy concerning the Grant's Patch area of the Eastern Goldfields. The recent development of the VPmg potential field inversion program, in conjunction with the acquisition of high-resolution gravity data over an area with extensive drilling, provided an opportunity to evaluate three-dimensional gravity inversion as a bedrock and regolith mapping tool. An apparent density model of the study area was constructed, with the ground represented as adjoining 200 m by 200 m vertical rectangular prisms. During inversion, VPmg incrementally adjusted the density of each prism until the free-air gravity response of the model replicated the observed data. For the Grant's Patch study area, the resulting image of apparent density values proved easier to interpret than the Bouguer gravity image. A regolith layer was then introduced into the model and realistic fresh-rock densities were assigned to each basement prism according to its interpreted lithology. With the basement and regolith densities fixed, the VPmg inversion algorithm adjusted the depth to fresh basement until the misfit between the calculated and observed gravity responses was minimised. The resulting geometry of the bedrock/regolith contact largely replicated the base of weathering indicated by drilling, with depths of weathering predicted by gravity inversion typically within 15% of those logged during RAB and RC drilling.
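The staged workflow described here (first invert for apparent density, then fix densities and invert for the depth of the regolith/basement contact) can be illustrated with a toy version of the first stage. The sketch below nudges each prism's density in proportion to the gravity misfit until the predicted response matches a synthetic observation; the kernel, step size and one-dimensional geometry are assumptions for illustration, not the VPmg implementation.

    import numpy as np

    # Toy apparent-density inversion along a single profile of prisms.
    rng = np.random.default_rng(0)
    n = 20
    true_density = rng.normal(2.7, 0.1, n)           # g/cm^3
    # Assumed near-diagonal sensitivity kernel: each station mostly sees
    # the prism beneath it, with small contributions from its neighbours.
    kernel = np.eye(n) + 0.1 * np.eye(n, k=1) + 0.1 * np.eye(n, k=-1)
    observed = kernel @ true_density                 # synthetic "observed" gravity

    density = np.full(n, 2.7)                        # background starting model
    for _ in range(500):
        misfit = observed - kernel @ density
        density += 0.3 * (kernel.T @ misfit)         # incremental adjustment
        if np.abs(misfit).max() < 1e-9:
            break
    print("max density error:", np.abs(density - true_density).max())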
Abstract:
The Brisbane River and Moreton Bay Study, an interdisciplinary study of Moreton Bay and its major tributaries, was initiated to address water quality issues that link sewage and diffuse loading with environmental degradation. Runoff and deposition of fine-grained sediments into Moreton Bay, followed by resuspension, have been linked with increased turbidity and significant loss of seagrass habitat. Sewage-derived nutrient enrichment, particularly nitrogen (N), has been linked to algal blooms by sewage plume maps. Blooms of a marine cyanobacterium, Lyngbya majuscula, in Moreton Bay have had significant impacts on human health (e.g., contact dermatitis) and ecological health (e.g., seagrass loss), and the availability of dissolved iron from acid sulfate soil runoff has been hypothesised as a contributing factor. The impacts of catchment activities resulting in runoff of sediments, nutrients and dissolved iron on the health of the Moreton Bay waterways are addressed. The Study, established by six local councils in association with two state departments in 1994, forms a regional component of a national and state program to achieve ecologically sustainable use of the waterways by protecting and enhancing their health while maintaining economic and social development. The Study framework illustrates a unique integrated approach to water quality management in which scientific research, community participation and strategy development proceeded in parallel. This collaborative effort resulted in a water quality management strategy that focuses on integrating the socioeconomic and ecological values of the waterways, and it has led to significant cost savings in infrastructure by providing a clear focus for initiatives towards achieving healthy waterways. The Study's Stage 2 initiatives form the basis of this paper.
Abstract:
Effluent water from shrimp ponds typically contains elevated concentrations of dissolved nutrients and suspended particulates compared with influent water. Attempts to improve effluent water quality using filter-feeding bivalves and macroalgae to reduce nutrients have previously been hampered by the high concentration of clay particles typically found in untreated pond effluent. These particles inhibit feeding in bivalves and reduce photosynthesis in macroalgae by increasing effluent turbidity. In a small-scale laboratory study, the effectiveness of a three-stage effluent treatment system was investigated. In the first stage, particle concentration was reduced through natural sedimentation. In the second stage, filtration by the Sydney rock oyster, Saccostrea commercialis (Iredale and Roughley), further reduced the concentration of suspended particulates, including inorganic particles, phytoplankton, bacteria, and their associated nutrients. In the final stage, the macroalga Gracilaria edulis (Gmelin) Silva absorbed dissolved nutrients. Pond effluent was collected from a commercial shrimp farm, taken to an indoor culture facility, and left to settle for 24 h. Subsamples of water were then transferred into laboratory tanks stocked with oysters and maintained for 24 h, and then transferred to tanks containing macroalgae for another 24 h. Total suspended solids (TSS), chlorophyll a, total nitrogen (N), total phosphorus (P), NH4+, NO3-, PO43-, and bacterial numbers were compared before and after each treatment: at 0 h (initial), 24 h (after sedimentation), 48 h (after oyster filtration) and 72 h (after macroalgal absorption). The combined effect of the sequential treatments was a significant reduction in the concentrations of all parameters measured. High rates of nutrient regeneration were observed in the control tanks, which did not contain oysters or macroalgae; conversely, significant reductions in nutrients and suspended particulates were observed after sedimentation and biological treatment. Overall improvements in water quality (final percentage of the initial concentration) were as follows: TSS (12%); total N (28%); total P (14%); NH4+ (76%); NO3- (30%); PO43- (35%); bacteria (30%); and chlorophyll a (0.7%). Despite the probability of considerable differences in sedimentation, filtration and nutrient uptake rates when scaled to farm size, these results demonstrate that integrated treatment has the potential to significantly improve the water quality of shrimp farm effluent. (C) 2001 Elsevier Science B.V. All rights reserved.
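The headline figures at the end of this abstract are final concentrations expressed as a percentage of the initial concentration after all three stages. A small sketch of that bookkeeping, with hypothetical stage-by-stage values rather than the study's measurements:

    # Illustrative only: final concentration as % of initial across the
    # sedimentation -> oyster -> macroalgae sequence (values hypothetical).
    stages = ["0 h initial", "24 h sedimentation",
              "48 h oyster filtration", "72 h macroalgal absorption"]
    samples = {
        "TSS (mg/L)":           [120.0, 40.0, 18.0, 14.4],
        "total N (mg/L)":       [2.5, 1.9, 1.1, 0.7],
        "chlorophyll a (ug/L)": [30.0, 22.0, 1.5, 0.2],
    }
    for param, values in samples.items():
        print(f"{param}: {100.0 * values[-1] / values[0]:.1f}% of initial")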
Abstract:
The detection of viable myocardium has important implications for management, but the use of stress echocardiography for this purpose is subjective and requires exposure to dobutamine. We investigated whether cyclic variation (CV) of integrated backscatter (IB) from the apical views could provide a resting study for detection of contractile reserve (CR) and prediction of myocardial viability in 27 patients with chronic ischemic left ventricular (LV) dysfunction. Repeat echocardiography was performed after 6.7 ± 3.8 months of follow-up; 14 patients underwent revascularization and 13 were treated medically. Using a standardized dobutamine echocardiography (DbE) protocol, images from three apical views were acquired at 80-120 frames/sec at rest and during stress. CR was identified if improvement of wall motion was observed at low-dose (5 or 10 μg/kg/min) DbE. Myocardial viability was defined by improvement at follow-up echocardiography in patients who underwent revascularization. CVIB at rest and at low-dose dobutamine was assessed in 194 segments with resting asynergy (severe hypokinesis or akinesis), of which 88 (45%) were in patients who underwent revascularization. Of these, CVIB could be measured in 190 (98%) segments at rest and 185 (95%) at low-dose dobutamine. Sixty-two (33%) segments had CR during low-dose DbE and 50 (57%) segments showed wall-motion recovery (myocardial viability) at follow-up echocardiography. Segments with CR had significantly higher CVIB at rest (P < 0.001) and at low-dose dobutamine (P = 0.005) than segments without CR. Using an optimal threshold of resting CVIB (> 8.2 dB), the accuracy of CVIB for detecting CR was 70%. Compared with nonviable segments, viable segments had significantly higher CVIB at rest (P < 0.001) and at low-dose dobutamine (P < 0.001). Using an optimal threshold of resting CVIB (> 5.3 dB), the accuracy of CVIB for detecting myocardial viability was 85%, higher than that of conventional DbE (62%, P < 0.01). Thus, assessment of CVIB from the apical views is a feasible and accurate tool for detecting CR and predicting myocardial viability in chronic LV dysfunction.
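The viability criterion here is a simple threshold rule on resting CVIB, and the reported accuracy is the fraction of segments classified correctly against the follow-up outcome. A minimal sketch of that rule with hypothetical segment values (not study data):

    # Illustrative only: classify segments as viable when resting CVIB
    # exceeds a dB threshold; accuracy is agreement with follow-up outcome.
    def threshold_accuracy(cvib_db, viable, threshold_db):
        correct = sum((c > threshold_db) == v for c, v in zip(cvib_db, viable))
        return correct / len(viable)

    cvib_db = [7.9, 4.1, 6.2, 3.0, 8.5, 5.0, 6.8, 2.2]    # hypothetical
    viable  = [True, False, True, False, True, False, True, False]
    print(threshold_accuracy(cvib_db, viable, 5.3))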
Abstract:
Models of plant architecture allow us to explore how genotype-environment interactions affect the development of plant phenotypes. Such models generate masses of data organised in complex hierarchies. This paper presents a generic system for creating and automatically populating a relational database from data generated by the widely used L-system approach to modelling plant morphogenesis. Techniques from compiler technology are applied to generate attributes (new fields) in the database, simplifying query development for the recursively structured branching relationship. Use of biological terminology in an interactive query builder contributes towards making the system biologist-friendly. (C) 2002 Elsevier Science Ireland Ltd. All rights reserved.
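The central data-management step is turning a bracketed L-system string into rows whose parent-child links capture the branching relationship. A minimal sketch of that conversion, assuming a simple single-character module format rather than the paper's actual parser:

    # Minimal sketch: bracketed L-system string -> (id, parent_id, symbol)
    # rows suitable for a relational "module" table.
    def lstring_to_rows(lstring):
        rows, stack, parent, next_id = [], [], None, 0
        for ch in lstring:
            if ch == "[":          # open a branch: remember current parent
                stack.append(parent)
            elif ch == "]":        # close the branch: restore the parent
                parent = stack.pop()
            else:                  # a module: link it to the current parent
                rows.append((next_id, parent, ch))
                parent = next_id
                next_id += 1
        return rows

    for row in lstring_to_rows("F[+F]F[-F]F"):
        print(row)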
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known, long-standing idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, the last a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, in which the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. The paper is motivated by a medical problem in which interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, they can be applied across any field, irrespective of the type of response.
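One way to read the two-step modelling process: a flexible non-parametric screen first identifies the variables that matter, and a parsimonious parametric model is then fitted to that reduced set. The sketch below uses synthetic data and scikit-learn as stand-ins; the library, variables and screening rule are assumptions, not the authors' pipeline.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 6))        # stand-ins for clinical variables
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, 500) > 0).astype(int)

    # Step 1: non-parametric screen for important variables
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    keep = np.argsort(tree.feature_importances_)[-2:]    # top two variables

    # Step 2: parsimonious parametric predictive model on the screened set
    model = LogisticRegression().fit(X[:, keep], y)
    print("kept:", keep, "training accuracy:", model.score(X[:, keep], y))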
Abstract:
As end-user computing becomes more pervasive, an organization's success increasingly depends on the ability of end-users, usually in managerial positions, to extract appropriate data from both internal and external sources. Many of these data sources include or are derived from the organization's accounting information systems. Managerial end-users with different personal characteristics and approaches are likely to compose queries of differing levels of accuracy when searching the data contained within these accounting information systems. This research investigates how cognitive-style elements of personality influence managerial end-user performance in database querying tasks. A laboratory experiment was conducted in which participants generated queries to retrieve information from an accounting information system to satisfy typical information requirements. The experiment investigated the influence of personality on the accuracy of queries of varying degrees of complexity. Results based on the Myers–Briggs personality instrument show that perceiving individuals (as opposed to judging individuals) who rely on intuition (as opposed to sensing) composed queries more accurately. As expected, query complexity and academic performance also explain the success of data extraction tasks.
Abstract:
OTseeker (Occupational Therapy Systematic Evaluation of Evidence) is a new resource for occupational therapists, designed with the principal aim of increasing access to research that supports clinical decisions. It contains abstracts of systematic reviews and quality ratings of randomized controlled trials (RCTs) relevant to occupational therapy. It is available, free of charge, at www.otseeker.com. This paper describes the OTseeker database and provides an example of how it may support occupational therapy practice.
Abstract:
One of the most important advantages of database systems is that the underlying mathematics is rich enough to specify very complex operations with a small number of statements in the database language. This research covers an aspect of biological informatics, the marriage of information technology and biology, involving the study of real-world phenomena using virtual plants derived from L-system simulation. L-systems were introduced by Aristid Lindenmayer as a mathematical model of multicellular organisms. Little consideration has been given to the problem of persistent storage for these simulations, and current procedures for querying data generated by L-systems for scientific experiments, simulations and measurements are also inadequate. To address these problems, this paper presents a generic data-modelling process (L-DBM) linking L-systems and database systems. The paper shows how L-system productions can be generically and automatically represented in database schemas and how a database can be populated from the L-system strings. It further describes the idea of pre-computing recursive structures in the data into derived attributes using compiler generation, and supplies a method for establishing a correspondence between biologists' terms and compiler-generated terms in a biologist's computing environment. Given a specific set of L-system productions and their declarations, L-DBM can generate the corresponding schema, covering both simple terminology correspondences and complex recursive-structure data attributes and relationships.
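The pre-computation idea can be made concrete with a derived attribute: if each module row stores its branching depth at load time, the recursive ancestor-chasing query collapses to a flat filter. The sqlite3 sketch below is one possible rendering under assumed table and column names, not the L-DBM schema itself.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE module (id INTEGER PRIMARY KEY,"
                 " parent_id INTEGER, symbol TEXT,"
                 " depth INTEGER)")      # depth = precomputed derived attribute

    def load(rows):
        # rows arrive parent-first; depth is derived while populating
        depth = {}
        for mid, parent, symbol in rows:
            depth[mid] = 0 if parent is None else depth[parent] + 1
            conn.execute("INSERT INTO module VALUES (?, ?, ?, ?)",
                         (mid, parent, symbol, depth[mid]))

    load([(0, None, "F"), (1, 0, "+"), (2, 1, "F"),
          (3, 0, "F"), (4, 3, "-"), (5, 4, "F")])
    # The formerly recursive query "modules at branching depth >= 2" is flat:
    for row in conn.execute("SELECT id, symbol FROM module WHERE depth >= 2"):
        print(row)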
Abstract:
A proportion of melanoma-prone individuals in both familial and non-familial contexts has been shown to carry inactivating mutations in either CDKN2A or, rarely, CDK4. CDKN2A is a complex locus that encodes two unrelated proteins from alternatively spliced transcripts that are read in different frames. The alpha transcript (exons 1alpha, 2, and 3) produces the p16INK4A cyclin-dependent kinase inhibitor, while the beta transcript (exons 1beta and 2) is translated as p14ARF, a factor that stabilizes p53 levels through binding to MDM2. Mutations in exon 2 can impair both polypeptides, and insertions and deletions in exons 1alpha, 1beta, and 2 can theoretically generate p16INK4A-p14ARF fusion proteins. No online database currently takes into account all the consequences of these genotypes, a situation compounded by some problematic previous annotations of CDKN2A-related sequences and descriptions of their mutations. As an initiative of the international Melanoma Genetics Consortium, we have therefore established a database of germline variants observed in all loci implicated in familial melanoma susceptibility. Such a comprehensive, publicly accessible database is an essential foundation for research on melanoma susceptibility and its clinical application. Our database serves two types of data as defined by HUGO. The core dataset includes the nucleotide variants at the genomic and transcript levels, amino acid variants, and citations. The ancillary dataset includes keyword descriptions of events at the transcription and translation levels and epidemiological data. The application that handles users' queries was designed in the model-view-controller architecture and was implemented in Java. The object-relational database schema was deduced using functional dependency analysis. We hereby present the first functional prototype of eMelanoBase. The service is accessible via the URL www.wmi.usyd.edu.au:8080/melanoma.html.
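As one possible relational rendering of the "core dataset" described above (nucleotide variants at the genomic and transcript levels, the amino acid variant, and a citation), the sqlite3 sketch below is illustrative; the table and column names are assumptions, not the actual eMelanoBase schema.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE variant ("
                 " id INTEGER PRIMARY KEY,"
                 " gene TEXT,"               # e.g. CDKN2A or CDK4
                 " genomic_change TEXT,"     # nucleotide variant, genomic level
                 " transcript_change TEXT,"  # nucleotide variant, transcript level
                 " protein_change TEXT,"     # amino acid variant
                 " citation TEXT)")
    # Illustrative record (c.71G>C / p.R24P is a reported CDKN2A variant;
    # the genomic field and citation are left as placeholders here).
    conn.execute("INSERT INTO variant VALUES"
                 " (1, 'CDKN2A', NULL, 'c.71G>C', 'p.R24P',"
                 " 'placeholder citation')")
    for row in conn.execute("SELECT gene, protein_change FROM variant"):
        print(row)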