24 results for Gold standard creation
in University of Queensland eSpace - Australia
Abstract:
For several decades, a dose of 25 kGy of gamma irradiation has been recommended for terminal sterilization of medical products, including bone allografts. In practice, the gamma dose applied varies from tissue bank to tissue bank: while many banks use 25 kGy, some have adopted a higher dose, some choose lower doses, and others do not use irradiation for terminal sterilization at all. A revolution in quality control in the tissue-banking industry has occurred alongside the development of quality assurance standards, which have significantly reduced the risk of microbial contamination of final graft products. In light of these developments, there is sufficient rationale to establish a new standard dose, one sufficient to sterilize allograft bone while minimizing the adverse effects of gamma radiation on tissue properties. Using valid modifications, several authors have applied ISO standards to establish a radiation dose for bone allografts that is specific to the systems employed in bone banking. These standards, and their verification, suggest that the dose could be reduced significantly below 25 kGy while maintaining a valid sterility assurance level (SAL) of 10^-6. The current paper reviews the methods that have been used to develop radiation doses for terminal sterilization of medical products, and the current trend toward selection of a specific dose for tissue banks.
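The dose-setting argument rests on log-linear microbial inactivation: a bioburden of N0 organisms exposed to a dose D is reduced to N0 x 10^(-D/D10) expected survivors, so the dose needed for a target SAL follows directly. The Python sketch below illustrates only that arithmetic; the bioburden and D10 values are hypothetical, and the actual ISO 11137 dose-substantiation methods use tabulated distributions of microbial resistances rather than a single D10 value.

```python
# Illustrative sketch only (not the ISO 11137 tables): the log-linear
# inactivation model that underlies radiation dose setting.
import math

def dose_for_sal(bioburden_n0: float, d10_kgy: float, sal: float = 1e-6) -> float:
    """Dose (kGy) giving expected survivors <= sal under a single-D10 model."""
    return d10_kgy * (math.log10(bioburden_n0) - math.log10(sal))

# Hypothetical numbers: a low bioburden of 10 CFU per graft and a resistant
# reference organism with D10 = 2.0 kGy.
print(dose_for_sal(10, 2.0))    # 14.0 kGy -- well below 25 kGy
print(dose_for_sal(1000, 2.0))  # 18.0 kGy -- higher bioburden needs more dose
```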
Abstract:
This study examined the test performance of distortion product otoacoustic emissions (DPOAEs) when used as a screening tool in the school setting. A total of 1003 children (mean age 6.2 years, SD = 0.4) were tested with pure-tone screening, tympanometry, and DPOAE assessment. Optimal DPOAE test performance was determined against pure-tone screening results using clinical decision analysis. The results showed hit rates of 0.86, 0.89, and 0.90, and false alarm rates of 0.52, 0.19, and 0.22, for criterion signal-to-noise ratio (SNR) values of 4, 5, and 11 dB at 1.1, 1.9, and 3.8 kHz, respectively. DPOAE test performance was compromised at 1.1 kHz. In view of the different test performance characteristics across frequencies, the use of a fixed SNR as a pass criterion for all frequencies in DPOAE assessments is not recommended. When compared with pure-tone screening plus tympanometry results, DPOAEs alone showed poorer test performance, suggesting that using DPOAEs alone might miss children with subtle middle ear dysfunction. However, when the results of a test protocol incorporating both DPOAEs and tympanometry were compared with the gold standard of pure-tone screening plus tympanometry, test performance was enhanced. In view of its high performance, a protocol that includes both DPOAEs and tympanometry holds promise as a useful tool in the hearing screening of schoolchildren, including difficult-to-test children.
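The clinical-decision-analysis step amounts to treating the pure-tone screen as the reference and, for each candidate SNR criterion, counting how often a failed DPOAE coincides with a failed screen (hit rate) or with a passed screen (false alarm rate). A minimal Python sketch of that tally follows; the function, data and the 5 dB criterion are hypothetical illustrations, not the study's protocol.

```python
# Hedged sketch: hit and false-alarm rates for one candidate SNR criterion,
# with the pure-tone screening outcome used as the reference standard.

def dpoae_performance(snr_db, failed_gold, criterion_db):
    """Hit rate and false-alarm rate when SNR < criterion_db means 'refer'."""
    hits = false_alarms = n_impaired = n_normal = 0
    for snr, impaired in zip(snr_db, failed_gold):
        refer = snr < criterion_db
        if impaired:
            n_impaired += 1
            hits += refer
        else:
            n_normal += 1
            false_alarms += refer
    return hits / n_impaired, false_alarms / n_normal

snr_db      = [2.0, 7.5, 4.0, 12.0, 1.0, 9.0]            # hypothetical SNRs (dB)
failed_gold = [True, False, True, False, True, False]     # failed pure-tone screen
print(dpoae_performance(snr_db, failed_gold, criterion_db=5.0))  # (1.0, 0.0)
```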
Abstract:
To fill a gap in knowledge about the effectiveness of brief intervention for hazardous alcohol use among Indigenous Australians, we attempted to implement a randomised controlled trial (RCT) in an urban Aboriginal Medical Service (AMS) as a joint AMS-university partnership. Because of the low number of potential participants being screened, the RCT was abandoned in favour of a two-part demonstration project. Only 16 clients were recruited for six-month follow-up, and the trial was terminated. Clinic, patient, Aboriginal health worker, and GP factors, interacting with study design factors, all contributed to our inability to implement the trial as designed. The key points to emerge from the study are that alcohol misuse is a difficult issue to manage in an Indigenous primary health care setting; that RCTs, with their inevitably complex study protocols, may not be acceptable or sufficiently adaptable to be viable in busy Indigenous primary health care settings; and that gold-standard RCT-derived evidence for the effectiveness of many public health interventions in Indigenous primary health care settings may never be available, so decisions about appropriate interventions will often have to be based on qualitative assessment of appropriateness and on evidence from other populations and other settings.
Abstract:
Six of the short dietary questions used in the 1995 National Nutrition Survey (listed below) were evaluated for relative validity, both directly and indirectly, and for consistency, by documenting the differences in mean intakes of foods and nutrients, as measured on the 24-hour recall, between groups with different responses to the short questions.
1. Including snacks, how many times do you usually have something to eat in a day, including evenings?
2. How many days per week do you usually have something to eat for breakfast?
3. In the last 12 months, were there any times that you ran out of food and couldn't afford to buy more?
4. What type of milk do you usually consume?
5. How many serves of vegetables do you usually eat each day? (a serve = 1/2 cup cooked vegetables or 1 cup of salad vegetables)
6. How many serves of fruit do you usually eat each day? (a serve = 1 medium piece or 2 small pieces of fruit, or 1 cup of diced pieces)
These comparisons were made for males and females overall and for population sub-groups of interest, including age, socio-economic disadvantage, region of residence, country of birth, and BMI category. Several limitations of this evaluation of the short questions, as discussed in the report, need to be kept in mind, including:
· The comparison method available (the 24-hour recall) was not an ideal 'gold standard', as it measures only the previous day's intake. This limitation was overcome by examining only mean differences between groups of respondents, since mean intake for a group can provide a reasonable approximation of 'usual' intake.
· The need to define and identify, post hoc, from the 24-hour recall the number of eating occasions, and the occasions identified by respondents as breakfast.
· Predetermined response categories for some of the questions effectively limited the number of categories available for evaluation.
· Other foods and nutrients, not selected for this evaluation, may have an indirect relationship with the question and might have shown stronger and more consistent responses.
· The number of responses in some categories of the short questions (e.g. for food security) may have been too small to detect significant differences between population sub-groups.
· No information was available to examine the validity of these questions for detecting differences over time (establishing trends) in food habits and indicators of selected nutrient intakes.
By contrast, the strength of this evaluation was its very large sample size (atypical of most validation studies of dietary assessment) and thus the opportunity to investigate question performance in a range of broad population sub-groups against a well-conducted, quantified survey of intakes. The results of the evaluation are summarised for each of the questions, and specific recommendations for future testing, modification and use are provided for each question. The report concludes with some general recommendations for the further development and evaluation of short dietary questions.
Abstract:
The insulin hypoglycemia test (IHT) is widely regarded as the 'gold standard' for dynamic stimulation of the hypothalamic-pituitary-adrenal (HPA) axis. This study aimed to investigate the temporal relationship between a rapid decrease in plasma glucose and the corresponding rise in plasma adrenocorticotropic hormone (ACTH), and to assess the reproducibility of hormone responses to hypoglycemia in normal humans. Ten normal subjects underwent IHTs using an insulin dose of 0.15 U/kg. Of these, eight had a second IHT (IHT2) and three went on to a third test (IHT3). Plasma ACTH and cortisol were measured at 15-min intervals and, additionally, in four IHT2s and the three IHT3s, ACTH was measured at 2.5- or 5-min intervals. Mean glucose nadirs and mean ACTH and cortisol responses were not significantly different between IHT1, IHT2 and IHT3. Combined data from all 21 tests showed that the magnitude of the cortisol responses, but not the ACTH responses, correlated significantly with the depth and duration of hypoglycemia. All subjects achieved glucose concentrations of less than or equal to 1.6 mmol/l before any detectable rise in ACTH occurred. In the seven tests performed with frequent sampling, an ACTH rise never preceded the glucose nadir, but occurred at the nadir or up to 15 min after it. On repeat testing, peak ACTH levels varied markedly within individuals, whereas peak cortisol levels were more reproducible (mean coefficient of variation 7%). In conclusion, hypoglycemia of less than or equal to 1.6 mmol/l was sufficient to stimulate the HPA axis in all 21 IHTs conducted in normal subjects. Nonetheless, our data cannot reveal whether higher glucose nadirs would stimulate increased HPA axis activity in all subjects. Overall, the cortisol response to hypoglycemia is more reproducible than the ACTH response, but in an individual subject the difference in peak cortisol between two IHTs may exceed 100 nmol/l.
Abstract:
We have identified novel adjuvant activity in specific cytosol fractions from trophozoites of Giardia isolate BRIS/95/HEPU/2041 (J. A. Upcroft, P. A. McDonnell, and P. Upcroft, Parasitol. Today 14:281-284, 1998). Adjuvant activity was demonstrated in both the systemic and mucosal compartments when Giardia extract was coadministered orally with antigen to mice. Enhanced antigen-specific serum antibody responses were shown by enzyme-linked immunosorbent assay to be comparable to those generated by the gold-standard mucosal adjuvant, cholera toxin. A source of adjuvant activity was localized to the cytosolic component of the parasite. Fractionation of the cytosol produced fraction pools, some of which, when coadministered with antigen, stimulated an enhanced antigen-specific serum response. The toxic component of conventional mucosal adjuvants is associated with their adjuvant activity; in a similar way, the toxin-like attributes of BRIS/95/HEPU/2041 may be responsible for its adjuvanticity. Complete characterization of the adjuvant is under way.
Abstract:
The flock-level sensitivities of pooled faecal culture and of serological testing using the agar gel immunodiffusion (AGID) test for the detection of ovine Johne's disease-infected flocks were estimated using non-gold-standard methods. The two tests were compared in an extensive field trial in 296 flocks in New South Wales during 1998. In each flock, a sample of sheep was selected and tested for ovine Johne's disease using both the AGID and pooled faecal culture. The flock-level specificity of pooled faecal culture was also estimated from the results of surveillance and market-assurance testing in New South Wales. The overall flock-sensitivity of pooled faecal culture was 92% (95% CI: 82.4-97.4%), compared with 61% (95% CI: 50.5-70.9%) for serology (assuming that both tests were 100% specific). In low-prevalence flocks (estimated prevalence
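Under the abstract's assumption that both tests are 100% specific, a flock-level sensitivity is simply the proportion of known-infected flocks that test positive, with an exact (Clopper-Pearson) binomial confidence interval. The sketch below reproduces only that arithmetic; the counts are hypothetical, and the non-gold-standard estimation actually used in the study is more involved.

```python
# Hedged sketch: proportion plus exact binomial CI for a flock-level sensitivity.
from scipy.stats import beta

def sensitivity_with_ci(positives: int, infected_flocks: int, level: float = 0.95):
    """Point estimate and Clopper-Pearson interval for test-positive flocks."""
    sens = positives / infected_flocks
    alpha = 1 - level
    lo = beta.ppf(alpha / 2, positives, infected_flocks - positives + 1) if positives else 0.0
    hi = (beta.ppf(1 - alpha / 2, positives + 1, infected_flocks - positives)
          if positives < infected_flocks else 1.0)
    return sens, (lo, hi)

# Hypothetical counts for illustration only.
print(sensitivity_with_ci(74, 80))  # pooled faecal culture
print(sensitivity_with_ci(49, 80))  # AGID serology
```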
Abstract:
Indicators are valuable tools used to measure progress towards a desired health outcome. Increased awareness of the public health burden due to injury has led to a concomitant interest in monitoring the impact of national initiatives that aim to reduce the size of that burden. Several injury indicators have now been proposed. This study examines the ability of each of the suggested indicators to reflect the nature and extent of the burden of non-fatal injury. A criterion-validity, population-based, prospective cohort study was conducted in Brisbane, a subtropical metropolitan city on the eastern seaboard of Australia, over the 12-month period between 1 January and 31 December 1998. Neither the presence of a long bone fracture nor the need for hospitalisation for 4 or more days was a sensitive or specific indicator of 'serious' or major injury as defined by the 'gold standard' Injury Severity Score (ISS). Subsequent analysis using other public health outcome measures demonstrated that the major component of the illness burden of injury was in fact due to 'minor', not serious, injury. However, the suggested indicators demonstrated low sensitivity and specificity for these outcomes as well. The results of the study support the need to include at least all hospitalisations in any population-based measure of injury, rather than attempting to simplify the indicator to a more convenient measure aimed at identifying just those cases of 'serious' injury.
Abstract:
This paper presents a pilot study of a brief, group-based, cognitive-behavioural intervention for anxiety-disordered children. Five children (aged 7 to 13 years) diagnosed with a clinically significant anxiety disorder were treated with a recently developed 6-session, child-focused, cognitive-behavioural intervention that was evaluated using multiple measures (including structured diagnostic interview, self-report questionnaires and behaviour rating scales completed by parents) on four follow-up occasions (posttreatment, and 3-month, 6-month and 12-month follow-ups). The trial aimed (a) to evaluate the conclusion suggested by the research of Cobham, Dadds, and Spence (1998) that anxious children with non-anxious parents require only a child-focused intervention in order to demonstrate sustained clinical gains, and (b) to evaluate a new and more cost-effective child-focused cognitive-behavioural intervention. Unfortunately, the return rate of the questionnaires was poor, rendering this data source of questionable value. However, diagnostic interviews (traditionally the gold standard for outcome in this research area) were completed for all children at all follow-up points. Changes in diagnostic status indicated that meaningful treatment-related gains had been achieved and were maintained over the full follow-up period. The results would thus seem to support the principle of participant-intervention matching proposed by Cobham et al. (1998), as well as the utility of the briefer intervention evaluated.
Abstract:
Manual curation has long been held to be the gold standard for functional annotation of DNA sequence. Our experience with the annotation of more than 20,000 full-length cDNA sequences revealed problems with this approach, including inaccurate and inconsistent assignment of gene names, as well as many good assignments that were difficult to reproduce using only computational methods. For the FANTOM2 annotation of more than 60,000 cDNA clones, we developed a number of methods and tools to circumvent some of these problems, including: an automated annotation pipeline that provides high-quality preliminary annotation for each sequence by introducing an 'uninformative filter' that eliminates uninformative annotations; controlled vocabularies that accurately reflect both the functional assignments and the evidence supporting them; and a highly refined, Web-based manual annotation tool that allows users to view a wide array of sequence analyses and to assign gene names and putative functions using a consistent nomenclature. The ultimate utility of our approach is reflected in the low rate at which manual curation reassigned the automated assignments. Based on these results, we propose a new standard for large-scale annotation, in which the initial automated annotations are manually investigated and the computational methods are then iteratively modified and improved based on the results of manual curation.
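The 'uninformative filter' idea can be pictured as a rule that discards candidate descriptions carrying no functional content before the best remaining similarity hit is taken as the preliminary annotation. The sketch below illustrates only that idea; the patterns and example descriptions are invented and are not the FANTOM2 rules.

```python
# Hedged sketch of an "uninformative filter" in an automated annotation pipeline.
import re

# Hypothetical patterns for descriptions that carry no functional information.
UNINFORMATIVE = re.compile(
    r"(hypothetical|unknown|unnamed|uncharacterized) protein|"
    r"cDNA clone|expressed sequence|open reading frame",
    re.IGNORECASE,
)

def best_informative(hits):
    """Return the first hit description that survives the uninformative filter."""
    for description in hits:        # hits assumed ordered best-first
        if not UNINFORMATIVE.search(description):
            return description
    return "unclassifiable"         # fall back to manual curation

hits = ["unnamed protein product", "hypothetical protein XP_123",
        "ATP-dependent RNA helicase DDX3"]
print(best_informative(hits))       # ATP-dependent RNA helicase DDX3
```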
Abstract:
Angiography is usually performed as the preoperative road map for patients requiring revascularization for lower extremity peripheral arterial disease (PAD). The alternative investigations are ultrasound, three-dimensional magnetic resonance angiography (3-D MRA) and computed tomography angiography. This pilot study aimed to assess whether 3-D MRA could replace gold-standard angiography in preoperative planning. Eight patients considered for aortoiliac or infrainguinal arterial bypass surgery were recruited. All underwent both imaging modalities within 7 days. A vascular surgeon and a radiologist each reported on the images from both the 3-D MRA and the angiography, blinded to patient details and to each other's reports. Comparisons were made between the reports for the angiographic and 3-D MRA images, and between the reports of the vascular surgeon and the radiologist. Compared with the gold-standard angiogram, 3-D MRA had a sensitivity of 77% and a specificity of 94% in detecting occlusion, and a sensitivity of 72% and a specificity of 90% in differentiating high-grade (>50%) from low-grade (<50%) stenoses. There was an overall concordance of 78% between the two investigations, ranging from 62% in the peroneal artery to 94% in the aorta. 3-D MRA showed flow in 23% of cases where conventional angiography showed no flow. In the present pilot study, 3-D MRA had reasonable concordance with gold-standard angiography, depending on the level of the lesion. At times it showed vessel flow where occlusion was shown on the conventional angiogram. 3-D MRA in peripheral vascular disease is challenging the gold standard, but is inconsistent at present.
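Concordance here is simply the proportion of arterial segments graded identically by the two modalities, reported overall and per segment. A minimal sketch of that tally follows; the segment names and grades are hypothetical.

```python
# Hedged sketch: per-segment and overall agreement between two imaging reports.
from collections import defaultdict

def concordance(readings):
    """readings: list of (segment, angiography_grade, mra_grade) tuples."""
    per_segment = defaultdict(lambda: [0, 0])        # segment -> [agreements, total]
    for segment, angio, mra in readings:
        per_segment[segment][0] += (angio == mra)
        per_segment[segment][1] += 1
    overall = (sum(a for a, _ in per_segment.values())
               / sum(n for _, n in per_segment.values()))
    return overall, {s: a / n for s, (a, n) in per_segment.items()}

readings = [("aorta", "patent", "patent"),
            ("peroneal", "occluded", "patent"),
            ("peroneal", "high-grade stenosis", "high-grade stenosis")]
print(concordance(readings))  # (0.666..., {'aorta': 1.0, 'peroneal': 0.5})
```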
Abstract:
Study objectives: Currently, esophageal pressure monitoring is the "gold standard" measure of inspiratory effort, but its invasive nature necessitates a better tolerated, noninvasive method for use in children. Pulse transit time (PTT) has demonstrated its potential as a noninvasive surrogate marker of inspiratory effort. The principal velocity determinant of PTT is the change in stiffness of the arterial wall, and PTT is inversely correlated with BP. Moreover, PTT has been shown to identify changes in inspiratory effort via the BP fluctuations induced by negative pleural pressure swings. In this study, the capability of PTT to classify respiratory events during sleep as either central or obstructive in nature was investigated. Setting and participants: PTT measurement was used as an adjunct to routine overnight polysomnographic studies performed on 33 children (26 boys and 7 girls; mean +/- SD age, 6.7 +/- 3.9 years). The accuracy of PTT measurements was then evaluated against the corresponding scored respiratory events in the polysomnography recordings. Results: Three hundred thirty-four valid respiratory events occurred and were analyzed. One hundred twelve obstructive events (OEs) showed a decrease in mean PTT over a 10-sample window that had a probability of being correctly ranked below the baseline PTT during tidal breathing of 0.92 (p < 0.005); 222 central events (CEs) showed a decrease in the variance of PTT over a 10-sample window that had a probability of being ranked below the baseline PTT of 0.94 (p < 0.005). This indicates that, at a sensitivity of 0.90, OEs can be detected with a specificity of 0.82 and CEs can be detected with a specificity of 0.80. Conclusions: PTT is able to categorize CEs and OEs, including hypopneas, in the absence of motion artifacts. Hence, PTT shows promise for differentiating respiratory events and could be an important diagnostic tool in pediatric respiratory sleep studies.
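The classification logic described above can be summarised as: an obstructive event shows a fall in the windowed mean of PTT relative to the tidal-breathing baseline, whereas a central event shows a fall in the windowed variance. The sketch below mirrors that logic over a 10-sample window; the thresholds and PTT values are hypothetical and are not the study's decision rules.

```python
# Illustrative sketch only: windowed mean/variance comparison against a
# tidal-breathing baseline, over a 10-sample PTT window.
from statistics import mean, variance

def classify_event(ptt_window, baseline_mean, baseline_var,
                   mean_drop=0.95, var_drop=0.5):
    """Classify a PTT window as 'obstructive', 'central' or 'unclear'."""
    if mean(ptt_window) < mean_drop * baseline_mean:
        return "obstructive"            # mean PTT falls below baseline
    if variance(ptt_window) < var_drop * baseline_var:
        return "central"                # respiratory PTT swings are damped
    return "unclear"

baseline = [250, 260, 248, 262, 252, 258, 249, 261, 251, 259]  # ms, tidal breathing
event    = [252, 253, 252, 254, 253, 252, 253, 254, 252, 253]  # ms, damped swings
print(classify_event(event, mean(baseline), variance(baseline)))  # central
```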
Abstract:
With growing success in experimental implementations it is critical to identify a gold standard for quantum information processing, a single measure of distance that can be used to compare and contrast different experiments. We enumerate a set of criteria that such a distance measure must satisfy to be both experimentally and theoretically meaningful. We then assess a wide range of possible measures against these criteria, before making a recommendation as to the best measures to use in characterizing quantum information processing.
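For concreteness, two standard distance-style measures of the kind such an assessment weighs are trace distance and (Uhlmann) fidelity, sketched below between single-qubit states rather than full processes; the example states are arbitrary and the code is illustrative only, not the paper's recommended measure.

```python
# Hedged sketch: trace distance and fidelity between two density matrices.
import numpy as np

def _sqrt_psd(m):
    """Matrix square root of a positive-semidefinite Hermitian matrix."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

def trace_distance(rho, sigma):
    """D(rho, sigma) = (1/2) * sum of singular values of (rho - sigma)."""
    return 0.5 * np.sum(np.linalg.svd(rho - sigma, compute_uv=False))

def fidelity(rho, sigma):
    """F(rho, sigma) = (Tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2."""
    s = _sqrt_psd(rho)
    return float(np.real(np.trace(_sqrt_psd(s @ sigma @ s)))) ** 2

rho   = np.array([[1, 0], [0, 0]], dtype=complex)           # |0><0|
sigma = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
print(trace_distance(rho, sigma))  # ~0.707
print(fidelity(rho, sigma))        # ~0.5
```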
Abstract:
Community responses (n = 925, response rate = 71%) to a series of eight photographs of pigmented skin lesions were compared with those of general practitioners (n = 114, response rate = 77%), considered to be the most relevant gold standard. The eight photographs included three melanomas, two potentially malignant lesions and three benign pigmented lesions. Over the pool of lesions examined, the average probability that community members thought a lesion was likely to be skin cancer (0.68 [99% CI = 0.66-0.69]) was higher (p < 0.0001) than that of the comparison general practitioners (0.58 [99% CI = 0.55-0.62]). This reflects a general (but not consistent) propensity among community members to over-diagnose. The average probability that respondents indicated they would seek medical advice for a lesion was 0.71 [99% CI = 0.70-0.73]. As expected, this was strongly associated with their perceptions of the skin lesion. These results suggest that the community can play a valuable role in assessing the need for medical evaluation of pigmented skin lesions.