28 results for Evaluation Methods
in University of Queensland eSpace - Australia
Abstract:
The national and Victorian burden of disease studies in Australia set out to examine critically the methods used in the Global Burden of Disease study to estimate the burden of mental disorders. The main differences include the use of a different set of disability weights allowing estimates in greater detail by level of severity, adjustments for comorbidity between mental disorders, a greater number of mental disorders measured, and modelling of substance use disorders, anxiety disorders and bipolar disorder as chronic conditions. Uniform age-weighting in the Australian studies produces considerably lower estimates of the burden due to mental disorders in comparison with age-weighted disability-adjusted life years. A lack of follow-up data on people with mental disorders who are identified in cross-sectional surveys poses the greatest challenge in determining the burden of mental disorders more accurately.
Abstract:
Objective: The Assessing Cost-Effectiveness - Mental Health (ACE-MH) study aims to assess, from a health sector perspective, whether there are options for change that could improve the effectiveness and efficiency of Australia's current mental health services by directing available resources toward 'best practice' cost-effective services. Method: The use of standardized evaluation methods addresses the reservations expressed by many economists about the simplistic use of league tables based on economic studies confounded by differences in methods, context and setting. The cost-effectiveness ratio for each intervention is calculated using economic and epidemiological data. This includes systematic reviews and randomised controlled trials for efficacy, the Australian Surveys of Mental Health and Wellbeing for current practice, and a combination of trials and longitudinal studies for adherence. The cost-effectiveness ratios are presented as cost (A$) per disability-adjusted life year (DALY) saved, with a 95% uncertainty interval based on Monte Carlo simulation modelling. An assessment of interventions on 'second filter' criteria ('equity', 'strength of evidence', 'feasibility' and 'acceptability to stakeholders') allows broader concepts of 'benefit' to be taken into account, as well as factors that might influence policy judgements in addition to cost-effectiveness ratios. Conclusions: The main limitation of the study is in the translation of the effect size from trials into a change in the DALY disability weight, which required the use of newly developed methods. While comparisons within disorders are valid, comparisons across disorders should be made with caution. A series of articles is planned to present the results.
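The uncertainty interval described above can be sketched with a simple Monte Carlo simulation. All parameter values below (cost and DALY distributions) are illustrative placeholders, not figures from the ACE-MH study:

```python
import random

random.seed(42)

def simulate_cost_per_daly(n_draws=10_000):
    """Draw total cost and DALYs averted from assumed distributions and
    return the median cost-per-DALY ratio with a 95% uncertainty
    interval. The distribution parameters here are hypothetical."""
    ratios = []
    for _ in range(n_draws):
        cost = random.gauss(5_000_000, 500_000)   # total cost, A$ (assumed)
        dalys = random.gauss(400, 60)             # DALYs averted (assumed)
        ratios.append(cost / dalys)
    ratios.sort()
    median = ratios[n_draws // 2]
    lower = ratios[int(0.025 * n_draws)]
    upper = ratios[int(0.975 * n_draws)]
    return median, lower, upper

median, lo, hi = simulate_cost_per_daly()
print(f"A${median:,.0f} per DALY (95% UI A${lo:,.0f} to A${hi:,.0f})")
```

Propagating both cost and outcome uncertainty through the ratio, rather than reporting a point estimate, is what distinguishes this approach from a simple league-table comparison.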
Abstract:
Document classification is a supervised machine learning process in which predefined category labels are assigned to documents based on a hypothesis derived from a training set of labelled documents. Documents cannot be directly interpreted by a computer system unless they have been modelled as a collection of computable features. Rogati and Yang [M. Rogati and Y. Yang, Resource selection for domain-specific cross-lingual IR, in SIGIR 2004: Proceedings of the 27th annual international conference on Research and Development in Information Retrieval, ACM Press, Sheffield, United Kingdom, pp. 154-161.] pointed out that the effectiveness of a document classification system may vary across domains. This implies that the quality of the document model contributes to the effectiveness of document classification. Conventionally, model evaluation is accomplished by comparing the effectiveness scores of classifiers on model candidates. However, this kind of evaluation method may encounter either under-fitting or over-fitting problems, because the effectiveness scores are restricted by the learning capacities of the classifiers. We propose a model fitness evaluation method to determine whether a model is sufficient to distinguish positive and negative instances while still competent to provide satisfactory effectiveness with a small feature subset. Our experiments demonstrate how the fitness of models is assessed. The results of our work contribute to research on feature selection, dimensionality reduction and document classification.
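A minimal sketch of the idea behind classifier-independent feature scoring: rank terms by how unevenly they appear across the two classes, without training any classifier. The corpus, labels, and scoring rule below are illustrative, not the paper's actual method:

```python
from collections import Counter

# Tiny labelled corpus: 1 = sport, 0 = finance (made-up data).
docs = [
    ("the team won the match", 1),
    ("players score in the game", 1),
    ("stocks fell on the market", 0),
    ("investors trade in the market", 0),
]

def feature_scores(docs):
    """Score each term by the absolute difference in its document
    frequency between the positive and negative classes -- a crude
    separability measure that does not depend on any classifier's
    learning capacity."""
    pos, neg = Counter(), Counter()
    for text, label in docs:
        for term in set(text.split()):
            (pos if label else neg)[term] += 1
    vocab = set(pos) | set(neg)
    return {term: abs(pos[term] - neg[term]) for term in vocab}

scores = feature_scores(docs)
# Keep only terms that separate the classes at all; uninformative
# terms like "the" and "in" score zero and drop out.
subset = sorted(term for term, s in scores.items() if s > 0)
print(subset)
```

A feature subset chosen this way can then be checked for whether it still yields satisfactory classification effectiveness, which is the spirit of the fitness evaluation described above.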
Abstract:
Continuing Professional Development (CPD) is seen as a vital part of a professional engineer’s career, by professional engineering institutions as well as individual engineers. Factors such as ever-changing workforce requirements and rapid technological change have resulted in engineers no longer being able to rely just on the skills they learnt at university or can pick up on the job; they must undergo structured professional development with clear objectives to develop further professional knowledge, values and skills. This paper presents a course developed for students undertaking a Master of Engineering or Master of Project Management at the University of Queensland. This course was specifically designed to help students plan their continuing professional development, while developing professional skills such as communication, ethical reasoning, critical judgement and an appreciation of the need for sustainable development. The course utilised a work integrated learning pedagogy applied within a formal learning environment, and followed the competency-based chartered membership program of Engineers Australia, the peak professional body of engineers in Australia. The course was developed and analysed using an action learning approach. The main research question was “Can extra teaching and learning activities be developed that will simulate workplace learning?” The students continually assessed and reflected upon their current competencies, skills and abilities, and planned for the future attainment of specific competencies which they identified as important to their future careers. Various evaluation methods, including surveys before and after the course, were used to evaluate the action learning intervention.
It was found that the assessment developed for the course was one of the most important factors, not only in driving student learning, as is widely accepted, but also in changing the students’ understandings and acceptance of the need for continuous professional development. The students also felt that the knowledge, values and skills they developed would be beneficial for their future careers, as they were developed within the context of their own professional development, rather than to just get through the course. © 2005, American Society for Engineering Education
Abstract:
This economic evaluation was part of the Australian National Evaluation of Pharmacotherapies for Opioid Dependence (NEPOD) project. Data from four trials of heroin detoxification methods, involving 365 participants, were pooled to enable a comprehensive comparison of the cost-effectiveness of five inpatient and outpatient detoxification methods. This study took the perspective of the treatment provider in assessing resource use and costs. Two short-term outcome measures were used: achievement of an initial 7-day period of abstinence, and entry into ongoing post-detoxification treatment. The mean costs per episode of the various detoxification methods ranged widely: AUD $491 for buprenorphine-based outpatient; AUD $605 for conventional outpatient; AUD $1404 for conventional inpatient; AUD $1990 for rapid detoxification under sedation; and AUD $2689 for detoxification under anaesthesia. An incremental cost-effectiveness analysis was carried out using conventional outpatient detoxification as the base comparator. The buprenorphine-based outpatient detoxification method was found to be the most cost-effective method overall, and rapid opioid detoxification under sedation was the most cost-effective inpatient method.
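The incremental cost-effectiveness calculation described above can be sketched as follows. The per-episode costs are the ones quoted in the abstract; the abstinence proportions are purely hypothetical placeholders, since the abstract does not report outcome rates:

```python
def icer(cost, effect, cost_base, effect_base):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of outcome relative to the base comparator. A negative cost
    difference with a positive effect difference means the new method
    is both cheaper and more effective (it dominates the comparator)."""
    d_effect = effect - effect_base
    if d_effect == 0:
        raise ValueError("no difference in effect; compare costs directly")
    return (cost - cost_base) / d_effect

# Costs (A$) from the abstract; effect values are hypothetical.
base = {"cost": 605, "effect": 0.20}   # conventional outpatient
bup  = {"cost": 491, "effect": 0.25}   # buprenorphine-based outpatient

ratio = icer(bup["cost"], bup["effect"], base["cost"], base["effect"])
# Negative ICER here: cheaper and (under the assumed effects) more
# effective, i.e. dominant over the base comparator.
print(ratio)
```

Choosing conventional outpatient detoxification as the base comparator, as the study did, means every other method's extra cost is weighed against its extra benefit relative to that common reference.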
Abstract:
Background: Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole genome sequencing projects have been finished and many resulting protein sequences are still lacking detailed functional information. In order to address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently based on the sequences that are presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results: In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE with a bias towards the subcellular localization underrepresented in SwissProt was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion: No individual method had a sufficient level of sensitivity across both evaluation sets that would enable reliable application to hypothetical proteins. 
All methods showed lower performance on the LOCATE dataset and variable performance on individual subcellular localizations was observed. Proteins localized to the secretory pathway were the most difficult to predict, while nuclear and extracellular proteins were predicted with the highest sensitivity.
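The per-location sensitivity and specificity reported above follow the standard one-vs-rest definitions. A sketch with toy labels (not the study's actual predictions):

```python
def sensitivity_specificity(y_true, y_pred, target):
    """Sensitivity and specificity for one subcellular location,
    treating `target` as the positive class and all other locations
    as negative (one-vs-rest)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == target and p == target for t, p in pairs)
    fn = sum(t == target and p != target for t, p in pairs)
    tn = sum(t != target and p != target for t, p in pairs)
    fp = sum(t != target and p == target for t, p in pairs)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec

# Illustrative labels only.
truth = ["nucleus", "nucleus", "cytosol", "ER", "nucleus", "cytosol"]
pred  = ["nucleus", "cytosol", "cytosol", "ER", "nucleus", "ER"]
print(sensitivity_specificity(truth, pred, "nucleus"))
```

Comparing these values against what random assignment would achieve, as the study does, guards against a method looking good simply because one location dominates the evaluation set.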
Abstract:
Mycophenolic acid is an immunosuppressant administered as a bioavailable ester, mycophenolate mofetil. The pharmacokinetics of mycophenolic acid have been reported to be variable. Accurate measurement of concentrations of this drug could be important to adjust doses. The aim of this study was to compare the enzyme-multiplied immunoassay technique (EMIT [Dade Behring; San Jose, CA, U.S.A.]) for mycophenolic acid with a high-performance liquid chromatographic (HPLC) assay using samples collected from renal transplant recipients. The HPLC assay used solid phase extraction and a C18 stationary phase with ultraviolet (UV) detection (254 nm). The immunoassay required no manual sample preparation. Plasma samples (n = 102) from seven patients, collected at various times after a dose, were analyzed using both methods. Both assays fulfilled quality-control criteria. Higher concentrations were consistently measured in patient samples when using EMIT. The mean (+/- standard deviation [SD]) bias (EMIT-HPLC) was 1.88 +/- 0.86 mg/L. The differences in concentrations were higher in the middle of a dosage interval, suggesting that a metabolite might have been responsible for overestimation. Measurement of glucuronide concentrations by HPLC demonstrated only a weak correlation between assay differences and glucuronide concentrations. If the cross-reacting substance is active, EMIT could provide a superior measure of immunosuppression; if inactive, further work is needed to improve antibody specificity. In conclusion, it was found that EMIT overestimates the concentration of mycophenolic acid in plasma samples from renal transplant recipients compared with HPLC analysis.
Abstract:
Aims Topical sunscreens are routinely applied to the skin by a large percentage of the population. This study assessed the extent of absorption of a number of common chemical sunscreen agents into and through human skin following application of commercially available products. Methods Sunscreen products were applied to excised human epidermis in Franz diffusion cells, with the amount penetrating into and across the epidermis assessed by h.p.l.c. for 8 h following application. Results All sunscreen agents investigated penetrated into the skin (0.25 g m⁻² or 14% of applied dose), but only benzophenone-3 passed through the skin in significant amounts (0.08 g m⁻² or 10% of the applied dose). With one exception, sunscreen agents in corresponding products marketed for adults and children had similar skin penetration profiles. Conclusions Whilst limited absorption across the skin was observed for the majority of the sunscreens tested, benzophenone-3 demonstrated sufficiently high penetration to warrant further investigation of its continued application.
Abstract:
The objective of the present study was to evaluate the performance of a new bioelectrical impedance instrument, the Soft Tissue Analyzer (STA), which predicts a subject's body composition. This was a cross-sectional population study in which the impedance of 205 healthy adult subjects was measured using the STA. Extracellular water (ECW) volume (as a percentage of total body water, TBW) and fat-free mass (FFM) were predicted by both the STA and a compartmental model, and compared, according to correlation and limits of agreement analysis, with the equivalent data obtained by independent reference methods of measurement (TBW measured by D2O dilution, and FFM measured by dual-energy X-ray absorptiometry). There was a small (2.0 kg) but significant (P < 0.02) difference in mean FFM predicted by the STA, compared with the reference technique, in the males, but not in the females (-0.4 kg) or in the combined group (0.8 kg). Both methods were highly correlated. Similarly, small but significant differences for predicted mean ECW volume were observed. The limits of agreement for FFM and ECW were -7.5 to 9.9 kg and -4.1 to 3.0 kg, respectively. Both FFM and ECW (as a percentage of TBW) are well predicted by the STA on a population basis, but the magnitude of the limits of agreement with reference methods may preclude its usefulness for predicting body composition in an individual. In addition, the theoretical basis of an impedance method that does not include a measure of conductor length requires further validation. (C) Elsevier Science Inc. 2000.
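The limits of agreement quoted above come from the standard Bland-Altman calculation: mean bias plus or minus 1.96 standard deviations of the paired differences. A sketch with illustrative FFM values (not the study's data):

```python
import statistics

def limits_of_agreement(method_a, method_b):
    """Bland-Altman analysis: mean bias between two measurement
    methods and the 95% limits of agreement (bias +/- 1.96 SD of the
    paired differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical FFM measurements (kg) for five subjects.
sta  = [52.1, 60.4, 48.9, 71.2, 55.0]   # impedance instrument
dexa = [51.0, 62.3, 47.5, 70.0, 56.1]   # reference method
bias, lower, upper = limits_of_agreement(sta, dexa)
print(f"bias {bias:.2f} kg, LoA {lower:.2f} to {upper:.2f} kg")
```

A small mean bias with wide limits of agreement, as reported in the study, is precisely the pattern that makes a method acceptable for population averages but unreliable for individual prediction.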
Abstract:
Objective: A consequence of the integration of psychiatry into acute and public health medicine is that psychiatrists are being asked to evaluate their services. There is pressure on mental health-care systems because it is recognized that funds should be directed where they can provide the best health outcomes, and also because there are resource constraints which limit our capacity to meet all demands for health care. This pressure can be responded to by evaluation which demonstrates the effectiveness and efficiency of psychiatric treatment. This paper seeks to remind psychiatrists of the fundamental principles of economic evaluation in the hope that these will enable psychiatrists to understand the methods used in evaluation and to work comfortably with evaluators. Method: The paper reviews the basic principles behind economic evaluation, illustrating these with reference to case studies. It describes: (i) the cost of the burden of illness and treatment, and how these costs are measured; (ii) the measurement of treatment outcomes, both as changes in health status and as resources saved; and (iii) the various types of economic evaluation, including cost-minimization, cost-effectiveness, cost-utility and cost-benefit analysis. Results: The advice in the paper provides psychiatrists with the necessary background to work closely with evaluators. A checklist of the critical questions to be addressed is provided as a guide for those undertaking economic evaluations. Conclusions: If psychiatrists are willing to learn the basic principles of economic evaluation and to apply these, they can respond to the challenges of evaluation.
Abstract:
The movement of chemicals through the soil to the groundwater or discharged to surface waters represents a degradation of these resources. In many cases, serious human and stock health implications are associated with this form of pollution. The chemicals of interest include nutrients, pesticides, salts, and industrial wastes. Recent studies have shown that current models and methods do not adequately describe the leaching of nutrients through soil, often underestimating the risk of groundwater contamination by surface-applied chemicals, and overestimating the concentration of resident solutes. This inaccuracy results primarily from ignoring soil structure and nonequilibrium between soil constituents, water, and solutes. A multiple sample percolation system (MSPS), consisting of 25 individual collection wells, was constructed to study the effects of localized soil heterogeneities on the transport of nutrients (NO3-, Cl-, PO43-) in the vadose zone of a clay-dominated agricultural soil. Very significant variations in drainage patterns across a small spatial scale were observed (one-way ANOVA, p < 0.001), indicating considerable heterogeneity in water flow patterns and nutrient leaching. Using data collected from the multiple sample percolation experiments, this paper compares the performance of two mathematical models for predicting solute transport: the advective-dispersion model with a reaction term (ADR), and a two-region preferential flow model (TRM) suitable for modelling nonequilibrium transport. These results have implications for modelling solute transport and predicting nutrient loading on a larger scale. (C) 2001 Elsevier Science Ltd. All rights reserved.
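For reference, the advective-dispersion model with a reaction term (ADR) named above is conventionally written, in its standard one-dimensional form (a textbook statement; the paper's exact parameterisation may differ), as:

```latex
\frac{\partial C}{\partial t}
  = D \frac{\partial^{2} C}{\partial x^{2}}
  - v \frac{\partial C}{\partial x}
  - \mu C
```

where $C$ is the solute concentration, $D$ the dispersion coefficient, $v$ the pore-water velocity, and $\mu$ a first-order reaction rate. Two-region models such as the TRM instead partition the water into mobile and immobile domains with exchange between them, which is what allows them to capture the nonequilibrium, preferential-flow behaviour the MSPS data revealed.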