955 results for gold standard
Abstract:
Objectives: The study was designed to establish the validity and reliability of the Physical Mobility Scale (PMS). The PMS was developed by physiotherapists working in residential aged care specifically to assess residents' functional mobility and to provide information on each resident's need for supervision or assistance from one or two staff members, and for equipment, during position changes, transfers, mobilising and personal care. Methods: Nineteen physiotherapists of varying backgrounds and experience scored the performances of nine residents of care facilities from video recordings. The performances were compared to scores on two 'gold standard' assessment tools. Four of the physiotherapists repeated the evaluations. Results: The PMS showed excellent content validity and reliability. Conclusions: The PMS provides graded assessment of physical mobility, including level of dependency on staff and equipment. This is a major advantage over existing functional assessment tools. Physiotherapists require no specific training to use the tool.
Abstract:
Aim: Polysomnography (PSG) is the current standard protocol for investigating sleep disordered breathing (SDB) in children. At present, there are few reliable screening tests for both central (CE) and obstructive (OE) respiratory events. This study compared three indices derived from pulse oximetry and electrocardiogram (ECG) with the PSG gold standard: heart rate (HR) variability, arterial blood oxygen desaturation (SaO2) and pulse transit time (PTT). Methods: 15 children (12 male; aged 3-14 years) were recruited from routine PSG studies. The characteristics of the three indices were based on known criteria for respiratory events (RPE). Their performance, singly and in combination, was evaluated against simultaneously scored PSG recordings. Results: 215 RPE and 215 tidal breathing events were analysed. For OE, sensitivity was HR (0.703), SaO2 (0.047), PTT (0.750), all three indices combined (0) and either of the indices (0.828), while specificity was (0.891), (0.938), (0.922), (0.953) and (0.859) respectively. For CE, sensitivity was HR (0.715), SaO2 (0.278), PTT (0.662), all indices combined (0.040) and either of the indices (0.868), while specificity was (0.815), (0.954), (0.901), (0.960) and (0.762) respectively. Conclusions: These preliminary findings suggest that the latter combination of these non-invasive indices is a promising screening method for SDB in children.
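The jump in sensitivity (and dip in specificity) for the "either of the indices" rule is the usual behaviour of parallel, OR-combined screening tests. A minimal sketch, using hypothetical detection flags rather than the study's data:

```python
# Illustrative sketch (not the authors' code): an "either index positive"
# (OR-combined) screening rule trades specificity for sensitivity.
# The detection arrays below are hypothetical.

def sensitivity(flags, truth):
    tp = sum(f and t for f, t in zip(flags, truth))
    fn = sum((not f) and t for f, t in zip(flags, truth))
    return tp / (tp + fn)

def specificity(flags, truth):
    tn = sum((not f) and (not t) for f, t in zip(flags, truth))
    fp = sum(f and (not t) for f, t in zip(flags, truth))
    return tn / (tn + fp)

truth = [True, True, True, True, False, False, False, False]
hr    = [True, True, False, False, False, False, False, True]   # hypothetical HR flags
ptt   = [True, False, True, False, False, False, True, False]   # hypothetical PTT flags

either = [a or b for a, b in zip(hr, ptt)]

# OR-combination sensitivity is at least as high as either index alone...
assert sensitivity(either, truth) >= max(sensitivity(hr, truth), sensitivity(ptt, truth))
# ...while its specificity can only drop.
assert specificity(either, truth) <= min(specificity(hr, truth), specificity(ptt, truth))
```

The same logic explains why requiring all three indices together (AND-combination) drives sensitivity toward zero while maximising specificity, exactly the pattern in the reported figures.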
Abstract:
Background. We describe the development, reliability and applications of the Diagnostic Interview for Psychoses (DIP), a comprehensive interview schedule for psychotic disorders. Method. The DIP is intended for use by interviewers with a clinical background and was designed to occupy the middle ground between fully structured, lay-administered schedules and semi-structured, psychiatrist-administered interviews. It encompasses four main domains: (a) demographic data; (b) social functioning and disability; (c) a diagnostic module comprising symptoms, signs and past history ratings; and (d) patterns of service utilization and patient-perceived need for services. It generates diagnoses according to several sets of criteria using the OPCRIT computerized diagnostic algorithm and can be administered either on-screen or in a hard-copy format. Results. The DIP proved easy to use and was well accepted in the field. For the diagnostic module, inter-rater reliability was assessed on 20 cases rated by 24 clinicians: good reliability was demonstrated for both ICD-10 and DSM-III-R diagnoses. Seven cases were interviewed 2-11 weeks apart to determine test-retest reliability, with pairwise agreement of 0.8-1.0 for most items. Diagnostic validity was assessed in 10 cases interviewed with the DIP and using the SCAN as 'gold standard': in nine cases clinical diagnoses were in agreement. Conclusions. The DIP is suitable for use in large-scale epidemiological studies of psychotic disorders, as well as in smaller studies where time is at a premium. While the diagnostic module stands on its own, the full DIP schedule, covering demography, social functioning and service utilization, makes it a versatile multi-purpose tool.
Abstract:
OBJECTIVE: To compare the accuracy, costs and utility of using the National Death Index (NDI) and state-based cancer registries in determining the mortality status of a cohort of women diagnosed with ovarian cancer in the early 1990s. METHODS: As part of a large prognostic study, identifying information on 822 women diagnosed with ovarian cancer between 1990 and 1993 was simultaneously submitted to the NDI and three state-based cancer registries to identify deceased women as of June 30, 1999. This was compared to the gold standard of "definite deaths". A comparative evaluation was also made of the time and costs associated with the two methods. RESULTS: Of the 450 definite deaths in our cohort, the NDI correctly identified 417, and it correctly identified all of the 372 women known to be alive (sensitivity 93%, specificity 100%). Inconsistencies in the identifiers recorded in our cohort files, particularly names, were responsible for the majority of known deaths not matching with the NDI; if eliminated, these would increase the sensitivity to 98%. The cancer registries correctly identified 431 of the 450 definite deaths (sensitivity 96%). The costs associated with the NDI search were the same as the cancer registry searches, but the cancer registries took two months longer to conduct the searches. CONCLUSIONS AND IMPLICATIONS: This study indicates that the cancer registries are valuable, cost-effective agencies for follow-up of mortality outcome in cancer cohorts, particularly where cohort members were residents of those states. For following large national cohorts, the NDI provides additional information and flexibility when searching for deaths in Australia. This study also shows that women can be followed up for mortality with a high degree of accuracy using either service. Because each service makes a valuable contribution to the identification of deceased cancer subjects, both should be considered for optimal mortality follow-up in studies of cancer patients.
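The reported accuracy figures follow directly from the counts given in the abstract; a quick arithmetic check (definitions only, not the study's code):

```python
# Sensitivity = true positives / (true positives + false negatives);
# specificity = true negatives / (true negatives + false positives).
# Counts are taken from the abstract: NDI found 417 of 450 definite deaths
# and all 372 known-alive women; the registries found 431 of 450 deaths.

def sensitivity(tp, fn): return tp / (tp + fn)
def specificity(tn, fp): return tn / (tn + fp)

ndi_sens = sensitivity(417, 450 - 417)       # ~0.927 -> reported 93%
ndi_spec = specificity(372, 0)               # 1.0    -> reported 100%
registry_sens = sensitivity(431, 450 - 431)  # ~0.958 -> reported 96%

assert round(ndi_sens * 100) == 93
assert ndi_spec == 1.0
assert round(registry_sens * 100) == 96
```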
Abstract:
Background and Objective: To describe the diagnostic accuracy and practical application of the Peter James Centre Falls Risk Assessment Tool (PJC-FRAT), a multidisciplinary falls risk screening and intervention deployment instrument. Methods: In phase 1, the accuracy of the PJC-FRAT was prospectively compared to a gold standard (the STRATIFY) on a cohort of subacute hospital patients (n = 122). In phase 2, the PJC-FRAT was temporally reassessed using a subsequent cohort (n = 316), with results compared to those of phase 1. Primary outcomes were falls (events), fallers (patients who fell), and hospital completion rates of the PJC-FRAT. Results: In phase 1, the accuracy of the PJC-FRAT in identifying fallers showed a sensitivity of 73% (bootstrap 95% confidence interval [CI] = 55, 90) and specificity of 75% (95% CI = 66, 83), compared with the STRATIFY (cutoff >= 2/5) sensitivity of 77% (95% CI = 59, 92) and specificity of 51% (95% CI = 41, 61). This difference was not significant. In phase 2, the accuracy of nursing staff using the PJC-FRAT was lower. PJC-FRAT completion rates varied among disciplines over both phases: nurses and physiotherapists, >= 90%; occupational therapists, >= 82%; and medical officers, >= 57%. Conclusion: The PJC-FRAT was practical and relatively accurate as a predictor of falls and as a deployment instrument for falls prevention interventions, although continued staff education may be necessary to maintain its accuracy. (c) 2006 Elsevier Inc. All rights reserved.
Abstract:
Studies have shown that an increase in arterial stiffening can indicate the presence of cardiovascular diseases such as hypertension. The current gold standard in clinical practice is measurement of the patient's blood pressure with a mercury sphygmomanometer; however, this technique is not suitable for prolonged monitoring. It is well established that pulse wave velocity is a direct measure of arterial stiffening, but its usefulness is hampered by the absence of techniques to estimate it non-invasively. Pulse transit time (PTT) is a simple, non-intrusive measure derived from pulse wave velocity, and it has shown its capability in childhood respiratory sleep studies. Recently, regression equations that predict PTT values for healthy Caucasian children were formulated; however, their usefulness in identifying hypertensive children based on mean PTT values had not been investigated. This was a continuation study in which 3 more Caucasian male children with known clinical hypertension were recruited. Results indicated that the PTT predictive equations are able to distinguish hypertensive children from their normal counterparts in a significant manner (p < 0.05). Hence, PTT can be a useful diagnostic tool for identifying hypertension in children and shows potential as a non-invasive continuous monitor of arterial stiffening.
Abstract:
Well-understood methods exist for developing programs from given specifications. A formal method identifies proof obligations at each development step: if all such proof obligations are discharged, a precisely defined class of errors can be excluded from the final program. For a class of closed systems, such methods offer a gold standard against which less formal approaches can be measured. For open systems (those which interact with the physical world), the task of obtaining the program specification can be as challenging as the task of deriving the program. And, when a system of this class must tolerate certain kinds of unreliability in the physical world, it is still more challenging to reach confidence that the specification obtained is adequate. We argue that widening the notion of software development to include specifying the behaviour of the relevant parts of the physical world gives a way to derive the specification of a control system and also to record precisely the assumptions being made about the world outside the computer.
Abstract:
The fundamental failure of current approaches to ontology learning is to view it as a single pipeline with one or more specific inputs and a single static output. In this paper, we present a novel approach to ontology learning which takes an iterative view of knowledge acquisition for ontologies. Our approach is founded on three open-ended resources: a set of texts, a set of learning patterns and a set of ontological triples, and the system seeks to maintain these in equilibrium. As events occur which disturb this equilibrium, actions are triggered to re-establish a balance between the resources. We present a gold-standard-based evaluation of the final output of the system, the intermediate output showing the iterative process, and a comparison of performance using different seed input. The results are comparable to existing performance in the literature.
Abstract:
Clostridium difficile is at present one of the most common nosocomial infections in the developed world. Hypervirulent strains (PCR ribotype 027) of C. difficile, which produce enhanced levels of toxins, have also been associated with other characteristics such as a greater rate of sporulation and resistance to fluoroquinolones. Infection due to C. difficile PCR ribotype 027 has also been associated with greater rates of morbidity and mortality. The aim of this thesis was to investigate both the phenotypic and genotypic characteristics of two populations of toxigenic clinical isolates of C. difficile which were recovered from two separate hospital trusts within the UK. Phenotypic characterisation of the isolates was undertaken using analytical profile indexes (APIs), minimum inhibitory concentrations (MICs) and S-layer protein typing. In addition, isolates were also investigated for the production of a range of extracellular enzymes as potential virulence factors. Genotypic characterisation was performed using a random amplification of polymorphic DNA (RAPD) PCR protocol, which was fully optimised in this study, and the gold standard method, PCR ribotyping. The discriminatory power of both methods was compared and the similarity between the different isolates also analysed. Associations between the phenotypic and genotypic characteristics and the recovery location of the isolate were then investigated. Extracellular enzyme production and API testing revealed little variation between the isolates, and S-layer typing demonstrated low discrimination. Minimum inhibitory concentrations did not identify any resistance towards either vancomycin or metronidazole; there were, however, significant differences in the distribution of antibiogram profiles of isolates recovered from the two different trusts.
The RAPD PCR protocol was successfully optimised and, alongside PCR ribotyping, effectively typed all of the clinical isolates and also identified differences in the number of types defined between the two locations. Both PCR ribotyping and RAPD demonstrated similar discriminatory power; however, the two genotyping methods did not generate amplicons that mapped directly onto each other and therefore characterised isolates based on different genomic markers. The RAPD protocol also identified different subtypes within PCR ribotypes, demonstrating that not all isolates defined as a particular PCR ribotype were the same strain. No associations could be demonstrated between the phenotypic and genotypic characteristics observed; however, the location from which an isolate was recovered did appear to influence antibiotic resistance and genotypic characteristics. The phenotypic and genotypic characteristics observed amongst the C. difficile isolates in this study may provide a basis for the identification of further targets which could potentially be incorporated into future methods for the characterisation of C. difficile isolates.
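Discriminatory power of typing schemes is commonly summarised with Simpson's index of diversity; the abstract does not say which measure was used, so this is a generic sketch with hypothetical type counts:

```python
# Simpson's index of diversity D for a typing scheme: the probability that
# two isolates drawn at random (without replacement) belong to different
# types. Higher D = greater discriminatory power. Counts are hypothetical.

def simpsons_diversity(counts):
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

ribotype_counts = [10, 8, 5, 3, 2, 2]        # hypothetical isolates per PCR ribotype
rapd_counts     = [7, 6, 5, 4, 3, 2, 2, 1]   # hypothetical isolates per RAPD type

d_ribo = simpsons_diversity(ribotype_counts)
d_rapd = simpsons_diversity(rapd_counts)

# More, smaller type groups give higher discrimination.
assert d_rapd > d_ribo
```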
Abstract:
The work presented in this thesis was aimed at assessing the efficacy of lithium in the acute treatment of mania and in the prophylaxis of bipolar disorder, and at investigating the value of plasma haloperidol concentration for predicting response to treatment in schizophrenia. The pharmacogenetics of psychotropic drugs is critically appraised to provide insights into interindividual variability in response to pharmacotherapy. In clinical trials of acute mania, a number of measures have been used to assess the severity of illness and its response to treatment. Rating instruments need to be validated in order for a clinical study to provide reliable and meaningful estimates of treatment effects. Eight symptom-rating scales were identified and critically assessed. The Mania Rating Scale (MRS) was the most commonly used for assessing treatment response. The advantage of the MRS is that there is a relatively extensive database of studies based on it, and this will no doubt ensure that it remains a gold standard for the foreseeable future. Other useful rating scales are available for measuring mania, but further cross-validation and validation against clinically meaningful global changes are required. A total of 658 patients from 12 trials were included in an evaluation of the efficacy of lithium in the treatment of acute mania. Treatment periods ranged from 3 to 4 weeks. Efficacy was estimated using (i) the differences in the reduction in mania severity scores, and (ii) the ratio and difference in improvement response rates. The response rate ratio for lithium against placebo was 1.95 (95% CI 1.17 to 3.23). The mean number needed to treat was 5 (95% CI 3 to 20). Patients were twice as likely to obtain remission with lithium as with chlorpromazine (rate ratio = 1.96, 95% CI 1.02 to 3.77). The mean number needed to treat (NNT) was 4 (95% CI 3 to 9). Neither carbamazepine nor valproate was more effective than lithium.
The response rate ratios were 1.01 (95% CI 0.54 to 1.88) for lithium compared to carbamazepine and 1.22 (95% CI 0.91 to 1.64) for lithium against valproate. Haloperidol was no better than lithium on the basis of improvement in global severity. The differences in effects between lithium and risperidone were -2.79 (95% CI -4.22 to -1.36) in favour of risperidone with respect to symptom severity improvement and -0.76 (95% CI -1.11 to -0.41) on the basis of reduction in global severity of disease. Symptom and global severity were at least as well controlled with lithium as with verapamil. Lithium caused more side-effects than placebo and verapamil, but no more than carbamazepine or valproate. A total of 554 patients from 13 trials were included in the statistical analysis of lithium's efficacy in the prophylaxis of bipolar disorder. Follow-up periods ranged from 5 to 34 months. The relapse risk ratio for lithium versus placebo was 0.47 (95% CI 0.26 to 0.86) and the NNT was 3 (95% CI 2 to 7). The relapse risk ratio for lithium versus imipramine was 0.62 (95% CI 0.46 to 0.84) and the NNT was 4 (95% CI 3 to 7). The combination of lithium and imipramine was no more effective than lithium alone. The risk of relapse was greater with lithium alone than with the lithium-divalproate combination: a risk difference of 0.60 (95% CI 0.21 to 0.99) and an NNT of 2 (95% CI 1 to 5) were obtained. Lithium was as effective as carbamazepine. Based on individual data concerning plasma haloperidol concentration and percent improvement in psychotic symptoms, our results suggest an acceptable concentration range of 11.20-30.30 ng/mL. A minimum of 2 weeks should be allowed before evaluating therapeutic response. Monitoring of drug plasma levels seems not to be necessary unless behavioural toxicity or noncompliance is suspected.
Pharmacokinetics and pharmacodynamics, which are mainly determined by genetic factors, contribute to interindividual and interethnic variations in clinical response to drugs. These variations are primarily due to differences in drug metabolism. Variability in the pharmacokinetics of a number of drugs is associated with oxidation polymorphism. Debrisoquine/sparteine hydroxylase (CYP2D6) and S-mephenytoin hydroxylase (CYP2C19) are polymorphic P450 enzymes of particular importance in psychopharmacotherapy. These enzymes are responsible for the metabolism of many commonly used antipsychotic and antidepressant drugs. The incidence of poor metabolisers of debrisoquine and S-mephenytoin varies widely among populations. Ethnic variations in polymorphic isoenzymes may, at least in part, explain ethnic differences in response to pharmacotherapy with antipsychotic and antidepressant drugs.
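The NNT figures quoted in the abstract follow from the absolute risk differences via NNT = 1/ARR, conventionally rounded up to the next whole number. A quick check against two values (the 0.20 ARR implied by an NNT of 5 is an inference for illustration, not a reported figure):

```python
import math

# Number needed to treat: NNT = 1 / absolute risk reduction (ARR),
# rounded up to the next integer by convention.
def nnt(arr):
    return math.ceil(1.0 / arr)

# Reported: risk difference 0.60 (lithium alone vs lithium-divalproate) -> NNT 2.
assert nnt(0.60) == 2
# An NNT of 5 (lithium vs placebo) corresponds to an ARR of 0.20 (inferred).
assert nnt(0.20) == 5
```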
Abstract:
Background: Parkinson’s disease (PD) is an incurable neurological disease with approximately 0.3% prevalence. The hallmark symptom is gradual movement deterioration. Current scientific consensus about disease progression holds that symptoms will worsen smoothly over time unless treated. Accurate information about symptom dynamics is of critical importance to patients, caregivers, and the scientific community for the design of new treatments, clinical decision making, and individual disease management. Long-term studies characterize the typical time course of the disease as an early linear progression gradually reaching a plateau in later stages. However, symptom dynamics over durations of days to weeks remains unquantified. Currently, there is a scarcity of objective clinical information about symptom dynamics at intervals shorter than 3 months stretching over several years, but Internet-based patient self-report platforms may change this. Objective: To assess the clinical value of online self-reported PD symptom data recorded by users of the health-focused Internet social research platform PatientsLikeMe (PLM), in which patients quantify their symptoms on a regular basis on a subset of the Unified Parkinson’s Disease Ratings Scale (UPDRS). By analyzing this data, we aim for a scientific window on the nature of symptom dynamics for assessment intervals shorter than 3 months over durations of several years. Methods: Online self-reported data was validated against the gold standard Parkinson’s Disease Data and Organizing Center (PD-DOC) database, containing clinical symptom data at intervals greater than 3 months. The data were compared visually using quantile-quantile plots, and numerically using the Kolmogorov-Smirnov test. By using a simple piecewise linear trend estimation algorithm, the PLM data was smoothed to separate random fluctuations from continuous symptom dynamics. 
Subtracting the trends from the original data revealed random fluctuations in symptom severity. The average magnitude of fluctuations versus time since diagnosis was modeled by using a gamma generalized linear model. Results: Distributions of ages at diagnosis and UPDRS in the PLM and PD-DOC databases were broadly consistent. The PLM patients were systematically younger than the PD-DOC patients and showed increased symptom severity in the PD off state. The average fluctuation in symptoms (UPDRS Parts I and II) was 2.6 points at the time of diagnosis, rising to 5.9 points 16 years after diagnosis. This fluctuation exceeds the estimated minimal and moderate clinically important differences, respectively. Not all patients conformed to the current clinical picture of gradual, smooth changes: many patients had regimes where symptom severity varied in an unpredictable manner, or underwent large rapid changes in an otherwise more stable progression. Conclusions: This information about short-term PD symptom dynamics contributes new scientific understanding about the disease progression, currently very costly to obtain without self-administered Internet-based reporting. This understanding should have implications for the optimization of clinical trials into new treatments and for the choice of treatment decision timescales.
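The trend/fluctuation decomposition described above can be sketched simply: fit a least-squares line to each segment between breakpoints and treat the residuals as the fluctuation component. This is a minimal illustration with hypothetical data and hand-picked breakpoints, not the authors' algorithm (which presumably also estimates the breakpoints):

```python
# Minimal piecewise linear trend sketch (illustrative assumptions throughout):
# fixed breakpoints, ordinary least squares per segment, residuals = fluctuation.

def ols_line(xs, ys):
    """Slope and intercept of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def piecewise_trend(xs, ys, breakpoints):
    """Fit a separate line on each segment; return the fitted trend values."""
    trend = []
    bounds = [0] + breakpoints + [len(xs)]
    for lo, hi in zip(bounds, bounds[1:]):
        slope, intercept = ols_line(xs[lo:hi], ys[lo:hi])
        trend.extend(slope * x + intercept for x in xs[lo:hi])
    return trend

# Hypothetical severity series: roughly flat, then rising, with small bumps.
xs = list(range(10))
ys = [1.0, 1.1, 0.9, 1.0, 1.0, 2.0, 3.1, 3.9, 5.0, 6.1]
trend = piecewise_trend(xs, ys, breakpoints=[5])
fluctuation = [y - t for y, t in zip(ys, trend)]

# Residuals around each segment's line are small compared to the raw range.
assert max(abs(f) for f in fluctuation) < 0.5
```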
Abstract:
Visual field assessment is a core component of glaucoma diagnosis and monitoring, and the Standard Automated Perimetry (SAP) test is still considered the gold standard of visual field assessment. Although SAP is a subjective assessment with many pitfalls, it is used routinely in the diagnosis of visual field loss in glaucoma. The multifocal visual evoked potential (mfVEP) is a newly introduced method for objective visual field assessment. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to standard SAP visual field assessment, while others were less informative and needed further adjustment and research. In this study, we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. OBJECTIVES: The purpose of this study is to examine the effectiveness of a new analysis method for the multifocal visual evoked potential (mfVEP) when it is used for objective assessment of the visual field in glaucoma patients, compared to the gold standard technique. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspect patients (38 eyes). All subjects had two standard Humphrey Field Analyzer (HFA) 24-2 visual field tests and a single mfVEP test undertaken in one session. Analysis of the mfVEP results was done using the new analysis protocol, the Hemifield Sector Analysis (HSA) protocol; analysis of the HFA was done using the standard grading system. RESULTS: Analysis of the mfVEP results showed a statistically significant difference between the 3 groups in the mean signal-to-noise ratio (SNR) (ANOVA p<0.001 with a 95% CI).
The difference between superior and inferior hemifields was statistically significant for 11/11 sectors in the glaucoma patient group (t-test p<0.001), partially significant in the glaucoma suspect group (5/11 sectors, t-test p<0.01), and not significant for most sectors in the normal group (only 1/11 sectors was significant) (t-test p<0.9). Sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86% respectively, and 89% and 79% for glaucoma suspects. DISCUSSION: The results showed that the new analysis protocol was able to confirm existing field defects detected by standard HFA and was able to differentiate between the 3 study groups, with a clear distinction between normal subjects and patients with suspected glaucoma; the distinction between normal subjects and glaucoma patients was especially clear and significant. CONCLUSION: The new HSA protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucoma field loss.
Abstract:
Objective: The purpose of this study was to examine the effectiveness of a new analysis method of mfVEP objective perimetry in the early detection of glaucomatous visual field defects, compared to the gold standard technique. Methods and patients: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard 24-2 Humphrey Field Analyzer visual field tests and a single mfVEP test in one session. Analysis of the mfVEP results was carried out using the new analysis protocol: the hemifield sector analysis protocol. Results: Analysis of the mfVEP showed that the signal-to-noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (analysis of variance, P<0.001; 95% confidence intervals: 2.82, 2.89 for the normal group; 2.25, 2.29 for the glaucoma suspect group; 1.67, 1.73 for the glaucoma group). The difference between superior and inferior hemifield sectors and hemi-rings was statistically significant in 11/11 pairs of sectors and hemi-rings in the glaucoma patient group (t-test P<0.001), statistically significant in 5/11 pairs of sectors and hemi-rings in the glaucoma suspect group (t-test P<0.01), and only 1/11 pair was statistically significant in the normal group (t-test P<0.9). The sensitivity and specificity of the hemifield sector analysis protocol in detecting glaucoma were 97% and 86% respectively, and 89% and 79% in glaucoma suspects. These results showed that the new analysis protocol was able to confirm existing visual field defects detected by standard perimetry, was able to differentiate between the three study groups with a clear distinction between normal patients and those with suspected glaucoma, and was able to detect early visual field changes not detected by standard perimetry.
In addition, the distinction between normal and glaucoma patients was especially clear and significant using this analysis. Conclusion: The new hemifield sector analysis protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. This protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucomatous visual field loss. The intersector analysis protocol can detect early field changes not detected by the standard Humphrey Field Analyzer test. © 2013 Mousa et al, publisher and licensee Dove Medical Press Ltd.
Abstract:
Aim: To assess the accuracy and reproducibility of biometry undertaken with the Aladdin (Topcon, Tokyo, Japan) in comparison with the current gold standard device, the IOLMaster 500 (Zeiss, Jena, Germany). Setting: University Eye Clinic, Birmingham, UK and Refractive Surgery Centre, Kiel, Germany. Methods: The right eyes of 75 patients with cataracts and 22 healthy participants were assessed using the two devices. Measurements of axial length (AL), anterior chamber depth (ACD) and keratometry (K) were undertaken with the Aladdin and IOLMaster 500 in random order by an experienced practitioner. A second practitioner then obtained measurements for each participant using the Aladdin biometer in order to assess interobserver variability. Results: No statistically significant differences (p≥0.05) between the two biometers were found for AL (average difference ± 95% CI = 0.01 ± 0.06 mm), ACD (0.00 ± 0.11 mm) or mean K (0.08 ± 0.51 D). Furthermore, interobserver variability was very good for each parameter (weighted κ ≥ 0.85). One patient's IOL powers could not be calculated from either biometer's measurements, and a further three could not be analysed by the IOLMaster 500. The IOL power calculated from the valid measurements was not statistically significantly different between the biometers (p=0.842), with 91% of predictions within ±0.25 D. Conclusions: The Aladdin is a quick, easy-to-use biometer that produces valid and reproducible results comparable with those obtained with the IOLMaster 500.
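The agreement statistic reported above (average difference ± 95% CI) is the mean of the paired device differences with a confidence interval around it; a minimal sketch using a normal approximation and hypothetical axial length readings (not the study's data):

```python
import math
import statistics

# Mean paired difference between two devices with a normal-approximation
# 95% confidence interval. The readings below are hypothetical.

def mean_difference_ci(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    mean = statistics.mean(diffs)
    sem = statistics.stdev(diffs) / math.sqrt(len(diffs))  # standard error
    return mean, (mean - 1.96 * sem, mean + 1.96 * sem)

aladdin   = [23.41, 24.02, 22.87, 25.10, 23.66]  # hypothetical AL readings, mm
iolmaster = [23.40, 24.00, 22.90, 25.08, 23.65]  # hypothetical AL readings, mm

mean, (lo, hi) = mean_difference_ci(aladdin, iolmaster)

# A CI spanning zero is consistent with "no statistically significant difference".
assert lo < 0 < hi
```

With a real sample one would normally use the t-distribution rather than the fixed 1.96 multiplier, and pair this with Bland-Altman limits of agreement.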
Abstract:
Acute life-threatening events are mostly predictable in adults and children. Despite real-time monitoring, these events still occur at a rate of 4%. This paper describes an automated prediction system based on feature space embedding and time series forecasting methods applied to the SpO2 signal, a pulsatile signal synchronised with the heart beat. We develop an age-independent index of abnormality that distinguishes patient-specific transitions from normal to abnormal physiology. Two different methods were used to distinguish between normal and abnormal physiological trends based on SpO2 behaviour. The abnormality index derived by each method is compared against the current gold standard of clinical prediction of critical deterioration. Copyright © 2013 Inderscience Enterprises Ltd.