9 results for Objective Monitoring
in Aston University Research Archive
Abstract:
Purpose. To convert objective image analysis of anterior ocular surfaces into recognisable clinical grades, in order to provide a more sensitive and reliable equivalent to current subjective grading methods; a prospective, randomized study correlating clinical grading with digital image assessment. Methods. The possible range of clinical presentations of bulbar and palpebral hyperaemia, palpebral roughness and corneal staining was represented by 4 sets of 10 images. The images were displayed in random order and graded by 50 clinicians using both the subjective CCLRU and Efron grading scales. Previously validated objective image analysis was performed 3 times on each of the 40 images. Digital measures included edge-detection and relative-coloration components. Step-wise regression analysis determined correlations between the average subjective grade and the objective image analysis measures. Results. Average subjective grades could be predicted by a combination of the objective image analysis components. These digital "grades" accounted for between 69% (for Efron scale-graded palpebral redness) and 98% (for Efron scale-graded bulbar hyperaemia) of the subjective variance. Conclusions. The results indicate that clinicians may use a combination of vessel areas and overall hue in their judgment of clinical severity for certain conditions. Objective grading can take these aspects into account, and be used to predict an average "objective grade" to be used by a clinician in describing the anterior eye. These measures are more sensitive and reliable than subjective grading while still utilizing familiar terminology, and can be applied in research or practice to improve the detection and monitoring of ocular surface changes.
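A minimal sketch of how the step-wise regression described above might look in code, assuming hypothetical objective measures (edge_density, vessel_area, relative_redness, hue_shift) and mock grades; this illustrates the technique only and is not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(X, y, names, p_enter=0.05):
    """Greedy forward selection: repeatedly add the predictor with the
    smallest OLS p-value while that p-value stays below p_enter."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = {}
        for j in remaining:
            cols = selected + [j]
            fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            pvals[j] = fit.pvalues[-1]          # p-value of the candidate term
        best = min(pvals, key=pvals.get)
        if pvals[best] > p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    final = sm.OLS(y, sm.add_constant(X[:, selected])).fit()
    return [names[j] for j in selected], final

# Hypothetical data: 40 images x 4 objective measures, plus mock mean subjective grades
rng = np.random.default_rng(0)
names = ["edge_density", "vessel_area", "relative_redness", "hue_shift"]
X = rng.random((40, 4))
y = 0.5 + 2.0 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 0.1, 40)
chosen, model = forward_stepwise(X, y, names)
print(chosen, round(model.rsquared_adj, 2))     # variance explained, cf. the 69-98% reported above
```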
Abstract:
Objective: To explore views of patients with type 2 diabetes about self monitoring of blood glucose over time. Design: Longitudinal, qualitative study. Setting: Primary and secondary care settings across Lothian, Scotland. Participants: 18 patients with type 2 diabetes. Main outcome measures: Results from repeat in-depth interviews with patients over four years after clinical diagnosis. Results: Analysis revealed three main themes - the role of health professionals, interpreting readings and managing high values, and the ongoing role of blood glucose self monitoring. Self monitoring decreased over time, and health professionals' behaviour seemed crucial in this: participants interpreted doctors' focus on levels of haemoglobin A1c, and lack of perceived interest in meter readings, as indicating that self monitoring was not worth continuing. Some participants saw readings as a proxy measure of good and bad behaviour - with women, in particular, chastising themselves when readings were high. Some participants continued to find readings difficult to interpret, with uncertainty about how to respond to high readings. Reassurance and habit were key reasons for continuing. There was little indication that participants were using self monitoring to effect and maintain behaviour change. Conclusions: Clinical uncertainty about the efficacy and role of blood glucose self monitoring in patients with type 2 diabetes is mirrored in patients' own accounts. Patients tended not to act on their self monitoring results, in part because of a lack of education about the appropriate response to readings. Health professionals should be explicit about whether and when such patients should self monitor and how they should interpret and act upon the results, especially high readings.
Abstract:
Objective: To assess and explain deviations from recommended practice in National Institute for Clinical Excellence (NICE) guidelines in relation to fetal heart monitoring. Design: Qualitative study. Setting: Large teaching hospital in the UK. Sample: Sixty-six hours of observation of 25 labours and interviews with 20 midwives of varying grades. Methods: Structured observations of labour and semistructured interviews with midwives. Interviews were undertaken using a prompt guide, audiotaped, and transcribed verbatim. Analysis was based on the constant comparative method, assisted by QSR N5 software. Main outcome measures: Deviations from recommended practice in relation to fetal monitoring and insights into why these occur. Results: All babies involved in the study were safely delivered, but 243 deviations from recommended practice in relation to NICE guidelines on fetal monitoring were identified, with the majority (80%) of these occurring in relation to documentation. Other deviations from recommended practice included indications for use of electronic fetal heart monitoring and conduct of fetal heart monitoring. There is evidence of difficulties with availability and maintenance of equipment, and some deficits in staff knowledge and skill. Differing orientations towards fetal monitoring were reported by midwives, which were likely to have impacts on practice. The initiation, management, and interpretation of fetal heart monitoring is complex and distributed across time, space, and professional boundaries, and practices in relation to fetal heart monitoring need to be understood within an organisational and social context. Conclusion: Some deviations from best practice guidelines may be rectified through straightforward interventions including improved systems for managing equipment and training. Other deviations from recommended practice need to be understood as the outcomes of complex processes that are likely to defy easy resolution. © RCOG 2006.
Abstract:
Deep hole drilling is one of the most complicated metal cutting processes and one of the most difficult to perform on CNC machine-tools or machining centres under conditions of limited manpower or unmanned operation. This research work investigates aspects of the deep hole drilling process with small diameter twist drills and presents a prototype system for real time process monitoring and adaptive control; two main research objectives are fulfilled in particular. The first objective is the experimental investigation of the mechanics of the deep hole drilling process, using twist drills without internal coolant supply, in the range of diameters Ø2.4 to Ø4.5 mm and working length up to 40 diameters; the definition of the problems associated with the low strength of these tools; and the study of mechanisms of catastrophic failure which manifest themselves well before, and along with, the classic mechanism of tool wear. The relationships of drilling thrust and torque with the depth of penetration and the various machining conditions are also investigated, and the experimental evidence suggests that the process is inherently unstable at depths beyond a few diameters. The second objective is the design and implementation of a system for intelligent CNC deep hole drilling, the main task of which is to ensure the integrity of the process and the safety of the tool and the workpiece. This task is achieved by interfacing the CNC system of the machine tool to an external computer which performs the following functions: on-line monitoring of the drilling thrust and torque; adaptive control of feed rate, spindle speed and tool penetration (Z-axis); indirect monitoring of tool wear by pattern recognition of variations of the drilling thrust with cumulative cutting time and drilled depth; operation as a database for tools and workpieces; and finally the issuing of alarms and diagnostic messages.
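As a rough illustration of the kind of real-time monitoring and adaptive-control loop the abstract describes, the sketch below polls thrust and torque, retracts and reduces the feed rate when assumed safety limits are exceeded, and cautiously restores feed otherwise. The limits and the interface functions (read_thrust, read_torque, read_depth, set_feed_rate, retract_tool) are hypothetical placeholders, not the thesis implementation.

```python
import time

# Assumed, purely illustrative limits for a small-diameter twist drill
THRUST_LIMIT_N = 180.0
TORQUE_LIMIT_NCM = 60.0
FEED_MIN, FEED_MAX = 0.01, 0.06   # mm/rev

def monitor_drilling(read_thrust, read_torque, read_depth,
                     set_feed_rate, retract_tool,
                     target_depth_mm, feed=0.04, period_s=0.05):
    """Poll the force sensors and adapt the feed rate until the target depth is reached."""
    while read_depth() < target_depth_mm:
        thrust, torque = read_thrust(), read_torque()
        if thrust > THRUST_LIMIT_N or torque > TORQUE_LIMIT_NCM:
            retract_tool()                       # peck to clear chips before the tool fails
            feed = max(FEED_MIN, feed * 0.8)     # resume more conservatively
        elif thrust < 0.6 * THRUST_LIMIT_N and torque < 0.6 * TORQUE_LIMIT_NCM:
            feed = min(FEED_MAX, feed * 1.05)    # cautiously recover productivity
        set_feed_rate(feed)
        time.sleep(period_s)
```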
Abstract:
This thesis set out to develop an objective analysis programme that correlates with subjective grades but has improved sensitivity and reliability in its measures, so that the possibility of early detection and reliable monitoring of changes in anterior ocular surfaces (bulbar hyperaemia, palpebral redness, palpebral roughness and corneal staining) could be increased. The sensitivity of the program was 20x greater than subjective grading by optometrists. The reliability was found to be optimal (r=1.0), with subjective grading up to 144x more variable (r=0.08). Objective measures were used to create formulae for an overall ‘objective grade’ (per surface) equivalent to those displayed by the CCLRU or Efron scales. The correlation between the formulated objective and subjective grades was high, with adjusted r² up to 0.96. Baseline levels of objective grade were investigated over four age groups (5-85 years, n=120) so that in practice a comparison against the ‘normal limits’ could be made. Differences for bulbar hyperaemia were found between the age groups (p<0.001), and also for palpebral redness and roughness (p<0.001). The objective formulae were then applied to the investigation of diurnal variation in order to account for any change that may affect the baseline. Increases in bulbar hyperaemia and palpebral redness were found between examinations in the morning and evening, and correction factors were recommended. The program was then applied to clinical situations in the form of a contact lens trial and an investigation into iritis and keratoconus, where it successfully recognised various surface changes. This programme could become a valuable tool, greatly improving the chances of early detection of anterior ocular abnormalities, and facilitating reliable monitoring of disease progression in clinical as well as research environments.
Abstract:
Visual field assessment is a core component of glaucoma diagnosis and monitoring, and the Standard Automated Perimetry (SAP) test is still considered the gold standard of visual field assessment. Although SAP is a subjective assessment with many pitfalls, it remains in constant use for the diagnosis of visual field loss in glaucoma. The multifocal visual evoked potential (mfVEP) is a newly introduced method for objective visual field assessment. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to standard SAP visual field assessment, while others were less informative and needed further adjustment and research. In this study, we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. OBJECTIVES: The purpose of this study is to examine the effectiveness of a new analysis method for the multifocal visual evoked potential (mfVEP) when used for the objective assessment of the visual field in glaucoma patients, compared to the gold standard technique. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspect patients (38 eyes). All subjects had two standard Humphrey visual field (HFA) 24-2 tests and a single mfVEP test undertaken in one session. Analysis of the mfVEP results was done using the new analysis protocol, the Hemifield Sector Analysis (HSA) protocol. Analysis of the HFA was done using the standard grading system. RESULTS: Analysis of mfVEP results showed a statistically significant difference between the 3 groups in the mean signal-to-noise ratio (SNR) (ANOVA, p<0.001 with a 95% CI). Differences between superior and inferior hemifields were statistically significant in all 11/11 sectors in the glaucoma patient group (t-test, p<0.001), partially significant in 5/11 sectors (t-test, p<0.01), and not significant for most sectors in the normal group (only 1/11 was significant; t-test, p<0.9). Sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86% respectively, while for glaucoma suspects they were 89% and 79%. DISCUSSION: The results showed that the new analysis protocol was able to confirm existing field defects detected by standard HFA and to differentiate between the 3 study groups, with a clear distinction between normal subjects and patients with suspected glaucoma; the distinction between normal subjects and glaucoma patients was especially clear and significant. CONCLUSION: The new HSA protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. Sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucoma field loss.
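A brief sketch, under assumed data shapes, of the hemifield comparison that the HSA protocol performs: paired t-tests of signal-to-noise ratio between mirrored superior and inferior sectors. The array contents are mock values, not study data.

```python
import numpy as np
from scipy import stats

# Mock per-eye SNR for 11 mirrored sectors (rows = eyes, columns = sectors)
rng = np.random.default_rng(0)
snr_superior = rng.normal(1.8, 0.3, size=(36, 11))
snr_inferior = rng.normal(1.4, 0.3, size=(36, 11))   # simulated asymmetry, as in glaucomatous loss

for sector in range(11):
    t, p = stats.ttest_rel(snr_superior[:, sector], snr_inferior[:, sector])
    flag = "significant" if p < 0.001 else "n.s."
    print(f"sector {sector + 1:2d}: t = {t:5.2f}, p = {p:.4f} ({flag})")
```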
Abstract:
Aims: To establish the sensitivity and reliability of objective image analysis in direct comparison with subjective grading of bulbar hyperaemia. Methods: Images of the same eyes were captured with a range of bulbar hyperaemia caused by vasodilation. The progression was recorded and 45 images extracted. The images were objectively analysed on 14 occasions using previously validated edge-detection and colour-extraction techniques. They were also graded by 14 eye-care practitioners (ECPs) and 14 non-clinicians (NCLs) using the Efron scale. Six ECPs repeated the grading on three separate occasions. Results: Subjective grading was only able to differentiate images with differences in grade of 0.70-1.03 Efron units (sensitivity of 0.30-0.53), compared to 0.02-0.09 Efron units with objective techniques (sensitivity of 0.94-0.99). Significant differences were found between ECPs, and individual repeats were also inconsistent (p<0.001). Objective analysis was 16x more reliable than subjective analysis. The NCLs used wider ranges of the scale but were more variable than ECPs, implying that training may have an effect on grading. Conclusions: Objective analysis may offer a new gold standard in anterior ocular examination, and should be developed further as a clinical research tool to allow more highly powered analysis, and to enhance the clinical monitoring of anterior eye disease.
Abstract:
Aim: To examine the use of image analysis to quantify changes in ocular physiology. Method: A purpose designed computer program was written to objectively quantify bulbar hyperaemia, tarsal redness, corneal staining and tarsal staining. Thresholding, colour extraction and edge detection paradigms were investigated. The repeatability (stability) of each technique to changes in image luminance was assessed. A clinical pictorial grading scale was analysed to examine the repeatability and validity of the chosen image analysis technique. Results: Edge detection using a 3 × 3 kernel was found to be the most stable to changes in image luminance (2.6% over a +60 to -90% luminance range) and correlated well with the CCLRU scale images of bulbar hyperaemia (r = 0.96), corneal staining (r = 0.85) and the staining of palpebral roughness (r = 0.96). Extraction of the red colour plane demonstrated the best correlation-sensitivity combination for palpebral hyperaemia (r = 0.96). Repeatability variability was <0.5%. Conclusions: Digital imaging, in conjunction with computerised image analysis, allows objective, clinically valid and repeatable quantification of ocular features. It offers the possibility of improved diagnosis and monitoring of changes in ocular physiology in clinical practice. © 2003 British Contact Lens Association. Published by Elsevier Science Ltd. All rights reserved.
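For illustration, a minimal sketch of the two image-analysis paradigms highlighted above: a 3 × 3 edge-detection kernel applied to the luminance image, and extraction of the relative red colour plane. The kernel choice (Sobel-style) and the summary metrics are assumptions, not the published program.

```python
import numpy as np
from scipy.ndimage import convolve

def edge_and_redness(rgb):
    """rgb: HxWx3 float array in [0, 1]. Returns two illustrative per-image metrics:
    mean edge magnitude from a 3x3 kernel, and mean relative redness."""
    grey = rgb.mean(axis=2)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)   # 3x3 horizontal-gradient kernel
    ky = kx.T                                                    # vertical counterpart
    edges = np.hypot(convolve(grey, kx), convolve(grey, ky))
    redness = rgb[..., 0] / (rgb.sum(axis=2) + 1e-6)             # red plane relative to total intensity
    return edges.mean(), redness.mean()

# Example on a synthetic image
img = np.random.default_rng(0).random((480, 640, 3))
print(edge_and_redness(img))
```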
Abstract:
Background: Remote, non-invasive and objective tests that can be used to support expert diagnosis for Parkinson's disease (PD) are lacking. Methods: Participants underwent baseline in-clinic assessments, including the Unified Parkinson's Disease Rating Scale (UPDRS), and were provided with smartphones running the Android operating system and a smartphone application that assessed voice, posture, gait, finger tapping, and response time. Participants then took the smartphones home to perform the five tasks four times a day for a month. Once a week participants had a remote (telemedicine) visit with a Parkinson disease specialist in which a modified UPDRS (excluding assessments of rigidity and balance) was performed. Using statistical analyses of the five tasks recorded using the smartphone from 10 individuals with PD and 10 controls, we sought to: (1) discriminate whether the participant had PD and (2) predict the modified motor portion of the UPDRS. Results: Twenty participants performed an average of 2.7 tests per day (68.9% adherence) for the study duration (average of 34.4 days) in a home and community setting. Measures derived from the five tasks differed between those with Parkinson disease and those without. In discriminating participants with PD from controls, the mean sensitivity was 96.2% (SD 2%) and the mean specificity was 96.9% (SD 1.9%). The mean error in predicting the modified motor component of the UPDRS (range 11-34) was 1.26 UPDRS points (SD 0.16). Conclusion: Measuring PD symptoms via a smartphone is feasible and has potential value as a diagnostic support tool.
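The abstract does not name the statistical method used for discrimination, so the sketch below is only an illustration of the general approach: summary features from the five smartphone tasks feeding a cross-validated classifier. The feature values, the random-forest choice and the leave-one-out scheme are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Mock data: rows = participants, columns = hypothetical summary features from the
# five tasks (voice, posture, gait, finger tapping, response time)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (10, 5)),     # 10 controls
               rng.normal(0.8, 1.0, (10, 5))])    # 10 participants with PD (shifted mock values)
y = np.array([0] * 10 + [1] * 10)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```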