967 results for Risk Detection
                                
Abstract:
Background: Estimates of the performance of carbohydrate deficient transferrin (CDT) and gamma glutamyltransferase (GGT) as markers of alcohol consumption have varied widely. Studies have differed in design and subject characteristics. The WHO/ISBRA Collaborative Study allows assessment and comparison of CDT, GGT, and aspartate aminotransferase (AST) as markers of drinking in a large, well-characterized, multicenter sample. Methods: A total of 1863 subjects were recruited from five countries (Australia, Brazil, Canada, Finland, and Japan). Recruitment was stratified by alcohol use, age, and sex. Demographic characteristics, alcohol consumption, and presence of ICD-10 dependence were recorded using an interview schedule based on the AUDADIS. CDT was assayed using CDTect(TM), and GGT and AST by standard methods. Statistical techniques included receiver operating characteristic (ROC) analysis. Multiple regression was used to measure the impact of factors other than alcohol on test performance. Results: CDT and GGT had comparable performance on ROC analysis, with AST performing slightly less well. CDT was a slightly but significantly better marker of high-risk consumption in men. All were more effective for detection of high-risk than intermediate-risk drinking. CDT and GGT levels were influenced by body mass index, sex, age, and smoking status. Conclusions: CDT was little better than GGT in detecting high- or intermediate-risk alcohol consumption in this large, multicenter, predominantly community-based sample. As the two tests are relatively independent of each other, their combination is likely to provide better performance than either test alone. Test interpretation should take account of sex, age, and body mass index.
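As a hedged illustration of the ROC comparison this abstract describes (not the WHO/ISBRA analysis itself), the Python sketch below compares two markers by area under the ROC curve and reports a Youden-optimal cut-off; the data, labels, and variable names are invented for the example.

```python
# Minimal sketch of comparing two markers by ROC analysis; the data below are
# simulated for illustration and are not from the WHO/ISBRA study.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 200
high_risk = rng.integers(0, 2, size=n)        # 1 = high-risk drinker (hypothetical label)
cdt = rng.normal(20, 5, n) + 8 * high_risk    # hypothetical CDT values
ggt = rng.normal(35, 15, n) + 25 * high_risk  # hypothetical GGT values

for name, marker in (("CDT", cdt), ("GGT", ggt)):
    auc = roc_auc_score(high_risk, marker)            # area under the ROC curve
    fpr, tpr, thresholds = roc_curve(high_risk, marker)
    j = tpr - fpr                                     # Youden's J = sensitivity + specificity - 1
    cutoff = thresholds[np.argmax(j)]
    print(f"{name}: AUC = {auc:.2f}, Youden-optimal cut-off ~ {cutoff:.1f}")
```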
                                
Abstract:
Cervical auscultation is in the process of gaining clinical credibility. In order for it to be accepted by the clinical community, the procedure and equipment used must first be standardized. Takahashi et al. [Dysphagia 9:54-62, 1994] attempted to provide benchmark methodology for administering cervical auscultation. They provided information about the acoustic detector unit best suited to picking up swallowing sounds and the best cervical site to place it. The current investigation provides contrasting results to Takahashi et al. with respect to the best type of acoustic detector unit to use for detecting swallowing sounds. Our study advocates an electret microphone as opposed to an accelerometer for recording swallowing sounds. However, we agree on the optimal placement site. We conclude that cervical auscultation is within reach of the average dysphagia clinic.
                                
Abstract:
The development of secondary arm lymphoedema after the removal of axillary lymph nodes remains a potential problem for women with breast cancer. This study investigated the incidence of arm lymphoedema following axillary dissection to determine the effect of prospective monitoring and early physiotherapy intervention. Sixty-five women were randomly assigned to either the treatment (TG) or control group (CG) and assessments were made preoperatively, at day 5 and at 1, 3, 6, 12 and 24 months postoperatively. Three measurements were used for the detection of arm lymphoedema: arm circumferences (CIRC), arm volume (VOL) and multi-frequency bioimpedance (MFBIA). Clinically significant lymphoedema was confirmed by an increase of at least 200 ml from the preoperative difference between the two arms. Using this definition, the incidence of lymphoedema at 24 months was 21%, with a rate of 11% in the TG compared to 30% in the CG. The CIRC or MFBIA methods failed to detect lymphoedema in up to 50% of women who demonstrated an increase of at least 200 ml in the VOL of the operated arm compared to the unoperated arm. The physiotherapy intervention programme for the TG women included principles for lymphoedema risk minimisation and early management of this condition when it was identified. These strategies appear to reduce the development of secondary lymphoedema and alter its progression in comparison to the CG women. Monitoring of these women is continuing and will determine if these benefits are maintained over a longer period for women with early lymphoedema after breast cancer surgery.
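As a rough illustration of the 200 ml criterion described above, the sketch below flags lymphoedema when the operated-minus-unoperated volume difference has grown by at least 200 ml relative to the preoperative difference; the function name, argument names, and example volumes are hypothetical, not from the study.

```python
# Hedged sketch of the volume-based criterion: lymphoedema is flagged when the
# inter-arm volume difference increases by >= 200 ml over the preoperative difference.
def lymphoedema_flag(preop_op_ml, preop_unop_ml, postop_op_ml, postop_unop_ml,
                     threshold_ml=200.0):
    """Return True if the operated-minus-unoperated arm volume difference has
    increased by at least `threshold_ml` since the preoperative assessment."""
    preop_diff = preop_op_ml - preop_unop_ml
    postop_diff = postop_op_ml - postop_unop_ml
    return (postop_diff - preop_diff) >= threshold_ml

# Example: preoperative difference 30 ml, follow-up difference 260 ml -> increase of 230 ml.
print(lymphoedema_flag(2180, 2150, 2460, 2200))  # True
```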
                                
Abstract:
Objectives: To compare variability of blood glucose concentration in patients with type II diabetes with (cases) and without (controls) myocardial infarction. A secondary objective was identification of predictive factors for higher blood glucose on discharge from hospital. Design: A retrospective matched case-control study. Participants: Medical notes of 101 type II diabetic patients admitted with a myocardial infarction (MI) and 101 type II diabetic patients (controls) matched on gender and age with no MI were reviewed. Blood glucose concentrations over two consecutive 48-h periods were collected. Demographic data and therapy on admission/discharge were also collected. Results: Patient characteristics were comparable on recruitment except for family history of cardiovascular disease (P = 0.003), dyslipidaemia (P = 0.004) and previous history of MI (P = 0.007). Variability of blood glucose in cases was greater over the first 48 h compared with the second 48 h (P = 0.03), and greater when compared with controls over the first 48 h (P = 0.01). Cases with blood glucose on discharge >8.2 mmol/L (n = 45) were less likely to have a history of previous MI (P = 0.04), ischaemic heart disease (P = 0.03) or hypertension (P = 0.02). Conclusions: Type II diabetic patients with an MI have higher and more variable blood glucose concentrations during the first 48 h of admission. Only cardiovascular 'high risk' patients had a target blood glucose set on discharge. Whether all MI patients with diabetes should receive standardized glucose infusions to reduce blood glucose variability should be evaluated in a randomized controlled trial.
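For illustration only, the snippet below compares glucose variability (here taken as the sample standard deviation) across two consecutive 48-hour windows; the readings are made up and this is not the study's analysis method.

```python
# Illustrative comparison of blood glucose variability across two 48-h windows;
# all readings below are hypothetical (mmol/L).
import statistics

glucose_0_48h  = [11.2, 9.8, 13.4, 8.9, 12.7, 10.1]   # first 48 h after admission
glucose_48_96h = [9.1, 8.7, 9.6, 8.9, 9.3, 9.0]       # second 48 h

print(f"SD first 48 h:  {statistics.stdev(glucose_0_48h):.2f} mmol/L")
print(f"SD second 48 h: {statistics.stdev(glucose_48_96h):.2f} mmol/L")
```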
                                
                                
                                
Quantification and assessment of fault uncertainty and risk using stochastic conditional simulations
                                
                                
Abstract:
Objective: To describe new measures of risk from case-control and cohort studies, which are simple to understand and relate to numbers of the population at risk. Design: Theoretical development of new measures of risk. Setting: Review of literature and previously described measures. Main results: The new measures are: (1) the population impact number (PIN), the number of those in the whole population among whom one case is attributable to the exposure or risk factor (equivalent to the reciprocal of the population attributable risk); (2) the case impact number (CIN), the number of people with the disease or outcome for whom one case will be attributable to the exposure or risk factor (equivalent to the reciprocal of the population attributable fraction); (3) the exposure impact number (EIN), the number of people with the exposure among whom one excess case is attributable to the exposure (equivalent to the reciprocal of the attributable risk); (4) the exposed cases impact number (ECIN), the number of exposed cases among whom one case is attributable to the exposure (equivalent to the reciprocal of the aetiological fraction). Each impact number reflects the number of people in its population (the whole population, the cases, all those exposed, and the exposed cases) among whom one case is attributable to the particular risk factor. Conclusions: These new measures should help communicate the population impact of estimates of risk derived from cohort or case-control studies.
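A small worked example may help fix the definitions: using a made-up cohort-style 2x2 table, the sketch below derives each impact number as the reciprocal of the corresponding attributable risk or fraction. The counts and variable names are illustrative, not taken from the paper.

```python
# Worked sketch of the four impact numbers, using hypothetical cohort counts.
exposed_cases, exposed_total     = 30, 1000   # risk in exposed   = 0.030
unexposed_cases, unexposed_total = 10, 1000   # risk in unexposed = 0.010

risk_exp   = exposed_cases / exposed_total
risk_unexp = unexposed_cases / unexposed_total
risk_pop   = (exposed_cases + unexposed_cases) / (exposed_total + unexposed_total)

AR  = risk_exp - risk_unexp    # attributable risk (exposed vs unexposed)
AF  = AR / risk_exp            # aetiological fraction among the exposed
PAR = risk_pop - risk_unexp    # population attributable risk
PAF = PAR / risk_pop           # population attributable fraction

EIN  = 1 / AR    # exposed people per one excess case               -> 50
ECIN = 1 / AF    # exposed cases per one attributable case          -> 1.5
PIN  = 1 / PAR   # people in the whole population per one case      -> 100
CIN  = 1 / PAF   # cases in the population per one attributable one -> 2.0
print(EIN, ECIN, PIN, CIN)
```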
                                
                                
                                
Abstract:
The flock-level sensitivities of pooled faecal culture and of serological testing using AGID for the detection of ovine Johne's disease-infected flocks were estimated using non-gold-standard methods. The two tests were compared in an extensive field trial in 296 flocks in New South Wales during 1998. In each flock, a sample of sheep was selected and tested for ovine Johne's disease using both the AGID and pooled faecal culture. The flock-specificity of pooled faecal culture was also estimated from results of surveillance and market-assurance testing in New South Wales. The overall flock-sensitivity of pooled faecal culture was 92% (95% CI: 82.4 and 97.4%) compared to 61% (50.5 and 70.9%) for serology (assuming that both tests were 100% specific). In low-prevalence flocks (estimated prevalence