966 results for objective monitoring
Abstract:
In wildlife management, the program of monitoring will depend on the management objective. If the objective is damage mitigation, then ideally it is damage that should be monitored. Alternatively, population size (N) can be used as a surrogate for damage, but the relationship between N and damage obviously needs to be known. If the management objective is a sustainable harvest, then the system of monitoring will depend on the harvesting strategy. In general, the harvest strategy in all states has been to offer a quota that is a constant proportion of population size. This strategy has a number of advantages over alternative strategies, including a low risk of over- or underharvest in a stochastic environment, simplicity, robustness to bias in population estimates and allowing harvest policy to be proactive rather than reactive. However, the strategy requires an estimate of absolute population size that needs to be made regularly for a fluctuating population. Trends in population size and in various harvest statistics, while of interest, are secondary. This explains the large research effort in further developing accurate estimation methods for kangaroo populations. Direct monitoring on a large scale is costly. Aerial surveys are conducted annually at best, and precision of population estimates declines with the area over which estimates are made. Management at a fine scale (temporal or spatial) therefore requires other monitoring tools. Indirect monitoring through harvest statistics and habitat models, that include rainfall or a greenness index from satellite imagery, may prove useful.
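The constant-proportion strategy described above reduces to a one-line calculation once an estimate of absolute population size is available. A minimal sketch follows; the harvest rate and population estimates are illustrative values only, not figures from any actual state quota.

# Minimal sketch of a constant-proportion harvest quota, as described above.
# The proportion (harvest_rate) and the population estimates are illustrative
# values only, not figures from any actual kangaroo management programme.

def annual_quota(population_estimate: float, harvest_rate: float = 0.15) -> float:
    """Quota offered as a fixed proportion of the estimated population size."""
    return harvest_rate * population_estimate

# A fluctuating population re-surveyed each year: the quota tracks the estimate,
# which is why regular estimates of absolute abundance are required.
for year, n_hat in [(2001, 2.4e6), (2002, 1.8e6), (2003, 3.1e6)]:
    print(year, f"estimated N = {n_hat:,.0f}", f"quota = {annual_quota(n_hat):,.0f}")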
Abstract:
Biological wastewater treatment is a complex, multivariate process, in which a number of physical and biological processes occur simultaneously. In this study, principal component analysis (PCA) and parallel factor analysis (PARAFAC) were used to profile and characterise Lagoon 115E, a multistage biological lagoon treatment system at Melbourne Water's Western Treatment Plant (WTP) in Melbourne, Australia. The objective was to increase our understanding of the multivariate processes taking place in the lagoon. The data used in the study span a 7-year period during which samples were collected as often as weekly from the ponds of Lagoon 115E and subjected to analysis. The resulting database, involving 19 chemical and physical variables, was studied using the multivariate data analysis methods PCA and PARAFAC. With these methods, alterations in the state of the wastewater due to intrinsic and extrinsic factors could be discerned. The methods were effective in illustrating and visually representing the complex purification stages and cyclic changes occurring along the lagoon system. The two methods proved complementary, with each having its own beneficial features. (C) 2003 Elsevier B.V. All rights reserved.
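As an illustration of the PCA step only (PARAFAC requires a three-way data array and a dedicated package such as tensorly), the sketch below applies standardisation followed by principal component analysis to a random stand-in for the 19-variable weekly dataset; none of the numbers are WTP measurements.

# Sketch of the PCA step only. The data here are random stand-ins for the 19
# physico-chemical variables measured along the lagoon, not WTP measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(360, 19))              # ~7 years of weekly samples x 19 variables

X_std = StandardScaler().fit_transform(X)   # variables sit on very different scales
pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)           # sample scores: state of the wastewater over time
loadings = pca.components_                  # variable loadings: which measurements drive each PC
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))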
Abstract:
Background Although both strength training (ST) and endurance training (ET) seem to be beneficial in type 2 diabetes mellitus (T2D), little is known about post-exercise glucose profiles. The objective of the study was to report changes in blood glucose (BG) values after a 4-month ET and ST programme now that a device for continuous glucose monitoring has become available. Materials and methods Fifteen participants, comprising four men aged 56.5 +/- 0.9 years and 11 women aged 57.4 +/- 0.9 years with T2D, were monitored with the MiniMed (Northridge, CA, USA) continuous glucose monitoring system (CGMS) for 48 h before and after 4 months of ET or ST. The ST consisted of three sets at the beginning, increasing to six sets per week at the end of the training period, including all major muscle groups; the ET was performed at an intensity of 60% of maximal oxygen uptake, with a volume beginning at 15 min and advancing to a maximum of 30 min three times a week. Results A total of 17 549 single BG measurements were recorded pretraining (619.7 +/- 39.8) and post-training (550.3 +/- 30.1), corresponding to an average of 585 +/- 25.3 potential measurements per participant at the beginning and at the end of the study. The change in BG value between the beginning (132 mg dL⁻¹) and the end (118 mg dL⁻¹) for all participants was significant (P = 0.028). The improvement in BG value for the ST programme was significant (P = 0.02), but for the ET no significant change was measured (P = 0.48). Glycaemic control improved in the ST group and the mean BG was reduced by 15.6% (CI 3-25%). Conclusion In conclusion, the CGMS may be a useful tool in monitoring improvements in glycaemic control after different exercise programmes. Additionally, the CGMS may help to identify asymptomatic hypoglycaemia or hyperglycaemia after training programmes.
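The headline comparison is a paired pre/post test on per-participant mean glucose. The abstract does not state which paired test was used, so the sketch below shows a Wilcoxon signed-rank test as one common choice for n = 15; all values are fabricated placeholders.

# Sketch of a paired pre/post comparison of mean blood glucose per participant.
# The values are fabricated placeholders; the abstract does not state which
# paired test the authors used, so a Wilcoxon signed-rank test is shown here
# as one common choice for a sample of 15 participants.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
pre = rng.normal(132, 15, size=15)        # mean CGMS glucose per participant, mg/dL, pre-training
post = pre - rng.normal(14, 10, size=15)  # post-training means

stat, p = wilcoxon(pre, post)
print(f"mean pre = {pre.mean():.0f} mg/dL, mean post = {post.mean():.0f} mg/dL, p = {p:.3f}")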
Abstract:
Primary objective: To determine the profile of resolution of typical PTA behaviours and describe new learning and improvements in self-care during PTA. Research design: Prospective longitudinal study monitoring PTA status, functional learning and behaviours on a daily basis. Methods and procedures: Participants were 69 inpatients with traumatic brain injury who were in PTA. PTA was assessed using the Westmead or Oxford PTA assessments. Functional learning capability was assessed using a routine set of daily tasks, and behaviour was assessed using an observational checklist. Data were analysed using descriptive statistics. Main outcomes and results: Challenging behaviours that are typically associated with PTA, such as agitation, aggression and wandering, resolved in the early stages of PTA, and incidence rates of these behaviours were less than 20%. Independence in self-care and bowel and bladder continence emerged later during resolution of PTA. New learning in functional situations was demonstrated by patients in PTA. Conclusions: It is feasible to begin active rehabilitation focused on functional skills-based learning with patients in the later stages of PTA. Formal assessment of typically observed behaviours during PTA may complement memory-based PTA assessments in determining emergence from PTA.
Abstract:
Government agencies responsible for riparian environments are assessing the combined utility of field survey and remote sensing for mapping and monitoring indicators of riparian zone condition. The objective of this work was to compare the Tropical Rapid Appraisal of Riparian Condition (TRARC) method to a satellite image based approach. TRARC was developed for rapid assessment of the environmental condition of savanna riparian zones. The comparison assessed mapping accuracy, representativeness of TRARC assessment, cost-effectiveness, and suitability for multi-temporal analysis. Two multi-spectral QuickBird images captured in 2004 and 2005 and coincident field data covering sections of the Daly River in the Northern Territory, Australia were used in this work. Both field and image data were processed to map riparian health indicators (RHIs) including percentage canopy cover, organic litter, canopy continuity, stream bank stability, and extent of tree clearing. Spectral vegetation indices, image segmentation and supervised classification were used to produce RHI maps. QuickBird image data were used to examine whether the spatial distribution of TRARC transects provided a representative sample of ground based RHI measurements. Results showed that TRARC transects were required to cover at least 3% of the study area to obtain a representative sample. The mapping accuracy and costs of the image based approach were compared to those of the ground based TRARC approach. Results showed that TRARC was more cost-effective at smaller scales (1-100 km), while image based assessment becomes more feasible at regional scales (100-1000 km). Finally, the ability to use both the image and field based approaches for multi-temporal analysis of RHIs was assessed. Change detection analysis demonstrated that image data can provide detailed information on gradual change, while the TRARC method was only able to identify more gross scale changes. In conclusion, results from both methods were considered to complement each other if used at appropriate spatial scales.
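One building block of the image-based approach, a spectral vegetation index thresholded into a canopy map, can be sketched as follows; the band arrays are synthetic stand-ins for QuickBird red and near-infrared reflectance, and the 0.4 NDVI threshold is illustrative rather than a value from the study.

# Sketch of one step of the image-based approach: a spectral vegetation index
# (NDVI) computed from red and near-infrared bands, then thresholded into a
# simple canopy/no-canopy map. The arrays are synthetic stand-ins for QuickBird
# bands, and the 0.4 threshold is illustrative, not a value from the study.
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.25, size=(100, 100))   # surface reflectance, red band
nir = rng.uniform(0.10, 0.60, size=(100, 100))   # surface reflectance, NIR band

ndvi = (nir - red) / (nir + red + 1e-9)
canopy_mask = ndvi > 0.4
print(f"percentage canopy cover: {100 * canopy_mask.mean():.1f}%")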
Abstract:
Asthma is a multifactorial disease for which a variety of mouse models have been developed. A major drawback of these models is the transient nature of the airway pathology, which peaks 24 to 72 hours after challenge and resolves in 1 to 2 weeks. The objective of this study is to characterize the temporal evolution of pulmonary inflammation and remodeling in a recently described mouse model of chronic asthma (8-week treatment with 3 allergens relevant for the human pathology: Dust mite, Ragweed, and Aspergillus; DRA). We studied the DRA model taking advantage of fluorescence molecular tomography (FMT) imaging using near-infrared probes to non-invasively evaluate lung inflammation and airway remodeling. At 4, 6, 8 or 11 weeks, cathepsin- and metalloproteinase-dependent fluorescence was evaluated in vivo. A subgroup of animals, after 4 weeks of DRA, was treated with Budesonide (100 µg/kg intranasally) daily for 4 weeks. Cathepsin-dependent fluorescence in DRA-sensitized mice was significantly increased at 6 and 8 weeks and was markedly inhibited by budesonide. This fluorescent signal correlated well with ex vivo analyses such as bronchoalveolar lavage eosinophils and alveolar cell infiltration. Metalloproteinase-dependent fluorescence was significantly increased at 8 and 11 weeks, correlated well with collagen deposition, as evaluated histologically by Masson's Trichrome staining, and with airway epithelium hypertrophy, and was also partly inhibited by budesonide. In conclusion, FMT proved suitable for longitudinal studies to evaluate asthma progression, both in terms of inflammatory cell infiltration and airway remodeling, allowing the determination of treatment efficacy in a chronic asthma model in mice.
Abstract:
This work has, as its objective, the development of non-invasive and low-cost systems for monitoring and automatically diagnosing specific neonatal diseases by means of the analysis of suitable video signals. We focus on monitoring infants potentially at risk of diseases characterized by the presence or absence of rhythmic movements of one or more body parts. Seizures and respiratory diseases are specifically considered, but the approach is general. Seizures are defined as sudden neurological and behavioural alterations. They are age-dependent phenomena and the most common sign of central nervous system dysfunction. Neonatal seizures have onset within the 28th day of life in newborns at term and within the 44th week of conceptional age in preterm infants. Their main causes are hypoxic-ischaemic encephalopathy, intracranial haemorrhage, and sepsis. Studies indicate an incidence rate of neonatal seizures of 0.2% of live births, 1.1% for preterm neonates, and 1.3% for infants weighing less than 2500 g at birth. Neonatal seizures can be classified into four main categories: clonic, tonic, myoclonic, and subtle. Seizures in newborns have to be promptly and accurately recognized in order to establish timely treatments that could avoid an increase of the underlying brain damage. Respiratory diseases related to the occurrence of apnoea episodes may be caused by cerebrovascular events. Among the wide range of causes of apnoea, besides seizures, a relevant one is Congenital Central Hypoventilation Syndrome (CCHS). With a reported prevalence of 1 in 200,000 live births, CCHS, formerly known as Ondine's curse, is a rare life-threatening disorder characterized by a failure of the automatic control of breathing, caused by mutations in a gene classified as PHOX2B. CCHS manifests itself, in the neonatal period, with episodes of cyanosis or apnoea, especially during quiet sleep. The reported mortality rates range from 8% to 38% of newborns with genetically confirmed CCHS. Nowadays, CCHS is considered a disorder of autonomic regulation, with a related risk of sudden infant death syndrome (SIDS). Currently, the standard method of diagnosis for both diseases is based on polysomnography, which relies on a set of sensors such as ElectroEncephaloGram (EEG) sensors, ElectroMyoGraphy (EMG) sensors, ElectroCardioGraphy (ECG) sensors, elastic belt sensors, pulse-oximeters and nasal flow-meters. This monitoring system is very expensive, time-consuming, moderately invasive and requires particularly skilled medical personnel, not always available in a Neonatal Intensive Care Unit (NICU). Therefore, automatic, real-time and non-invasive monitoring equipment able to reliably recognize these diseases would be of significant value in the NICU. A very appealing monitoring tool to automatically detect neonatal seizures or breathing disorders may be based on acquiring, through a network of sensors, e.g., a set of video cameras, the movements of the newborn's body (e.g., limbs, chest) and properly processing the relevant signals. An automatic multi-sensor system could be used to permanently monitor every patient in the NICU or specific patients at home. Furthermore, a wire-free technique may be more user-friendly and highly desirable when used with infants, in particular with newborns. This work has focused on a reliable method to estimate the periodicity in pathological movements based on the use of the Maximum Likelihood (ML) criterion.
In particular, average differential luminance signals from multiple Red, Green and Blue (RGB) cameras or depth-sensor devices are extracted, and the presence or absence of a significant periodicity is analysed in order to detect possible pathological conditions. The efficacy of this monitoring system has been measured on the basis of video recordings provided by the Department of Neurosciences of the University of Parma. Concerning clonic seizures, a kinematic analysis was performed to establish a relationship between neonatal seizures and the human inborn pattern of quadrupedal locomotion. Moreover, we have decided to realize simulators able to replicate the symptomatic movements characteristic of the diseases under consideration. The reason is, essentially, the opportunity to have, at any time, a 'subject' on which to test the continuously evolving detection algorithms. Finally, we have developed a smartphone app, called 'Smartphone based contactless epilepsy detector' (SmartCED), able to detect neonatal clonic seizures and warn the user about their occurrence in real time.
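The periodicity test can be illustrated with a simplified stand-in for the ML approach: for a single sinusoid in white Gaussian noise, the ML frequency estimate coincides with the periodogram peak, so the sketch below compares the peak against the noise floor. The sampling rate, signal and threshold are assumptions, not parameters from the thesis.

# Simplified sketch of periodicity detection in an average differential
# luminance signal. For a single sinusoid in white Gaussian noise, the ML
# frequency estimate coincides with the periodogram peak, so the peak height
# relative to the noise floor is used here as a crude decision statistic.
# Signal, sampling rate and threshold are illustrative, not from the thesis.
import numpy as np

fs = 25.0                                    # frames per second of the camera
t = np.arange(0, 20, 1 / fs)
signal = 0.8 * np.sin(2 * np.pi * 2.5 * t) + np.random.default_rng(0).normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

peak = spectrum[1:].argmax() + 1             # skip the DC bin
is_periodic = spectrum[peak] > 10 * np.median(spectrum[1:])
print(f"dominant frequency: {freqs[peak]:.2f} Hz, periodic movement detected: {is_periodic}")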
Abstract:
Purpose. To convert objective image analysis of anterior ocular surfaces into recognisable clinical grades, in order to provide a more sensitive and reliable equivalent to current subjective grading methods; a prospective, randomized study correlating clinical grading with digital image assessment. Methods. The possible range of clinical presentations of bulbar and palpebral hyperaemia, palpebral roughness and corneal staining was represented by 4 sets of 10 images. The images were displayed in random order and graded by 50 clinicians using both the subjective CCLRU and Efron grading scales. Previously validated objective image analysis was performed 3 times on each of the 40 images. Digital measures included edge-detection and relative-coloration components. Step-wise regression analysis determined correlations between the average subjective grade and the objective image analysis measures. Results. Average subjective grades could be predicted by a combination of the objective image analysis components. These digital 'grades' accounted for between 69% (for Efron scale-graded palpebral redness) and 98% (for Efron scale-graded bulbar hyperaemia) of the subjective variance. Conclusions. The results indicate that clinicians may use a combination of vessel areas and overall hue in their judgment of clinical severity for certain conditions. Objective grading can take these aspects into account, and be used to predict an average 'objective grade' to be used by a clinician in describing the anterior eye. These measures are more sensitive and reliable than subjective grading while still utilizing familiar terminology, and can be applied in research or practice to improve the detection and monitoring of ocular surface changes.
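The mapping from objective components to an average subjective grade can be sketched as an ordinary linear regression on two hypothetical predictors (edge strength and relative coloration); the data are random placeholders and sklearn's LinearRegression stands in for the step-wise regression reported in the paper.

# Sketch of the final mapping step: a linear regression predicting the mean
# subjective grade from objective image-analysis components. The data are
# random placeholders, not the study's measurements, and LinearRegression
# stands in for the step-wise regression used in the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
edge = rng.uniform(0, 1, 40)                     # edge-detection component per image
redness = rng.uniform(0, 1, 40)                  # relative-coloration component per image
subjective = 0.5 + 2.5 * edge + 1.2 * redness + rng.normal(0, 0.2, 40)

X = np.column_stack([edge, redness])
model = LinearRegression().fit(X, subjective)
print("R^2:", round(model.score(X, subjective), 2))
print("predicted 'objective grade' for a new image:", model.predict([[0.6, 0.3]]).round(2))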
Abstract:
Objective: To explore views of patients with type 2 diabetes about self monitoring of blood glucose over time. Design: Longitudinal, qualitative study. Setting: Primary and secondary care settings across Lothian, Scotland. Participants: 18 patients with type 2 diabetes. Main outcome measures: Results from repeat in-depth interviews with patients over four years after clinical diagnosis. Results: Analysis revealed three main themes - the role of health professionals, interpreting readings and managing high values, and the ongoing role of blood glucose self monitoring. Self monitoring decreased over time, and health professionals' behaviour seemed crucial in this: participants interpreted doctors' focus on levels of haemoglobin A1c, and lack of perceived interest in meter readings, as indicating that self monitoring was not worth continuing. Some participants saw readings as a proxy measure of good and bad behaviour, with women in particular chastising themselves when readings were high. Some participants continued to find readings difficult to interpret, with uncertainty about how to respond to high readings. Reassurance and habit were key reasons for continuing. There was little indication that participants were using self monitoring to effect and maintain behaviour change. Conclusions: Clinical uncertainty about the efficacy and role of blood glucose self monitoring in patients with type 2 diabetes is mirrored in patients' own accounts. Patients tended not to act on their self monitoring results, in part because of a lack of education about the appropriate response to readings. Health professionals should be explicit about whether and when such patients should self monitor and how they should interpret and act upon the results, especially high readings.
Abstract:
Objective: To assess and explain deviations from recommended practice in National Institute for Clinical Excellence (NICE) guidelines in relation to fetal heart monitoring. Design: Qualitative study. Setting: Large teaching hospital in the UK. Sample: Sixty-six hours of observation of 25 labours and interviews with 20 midwives of varying grades. Methods: Structured observations of labour and semistructured interviews with midwives. Interviews were undertaken using a prompt guide, audiotaped, and transcribed verbatim. Analysis was based on the constant comparative method, assisted by QSR N5 software. Main outcome measures: Deviations from recommended practice in relation to fetal monitoring and insights into why these occur. Results: All babies involved in the study were safely delivered, but 243 deviations from recommended practice in relation to NICE guidelines on fetal monitoring were identified, with the majority (80%) of these occurring in relation to documentation. Other deviations from recommended practice included indications for use of electronic fetal heart monitoring and conduct of fetal heart monitoring. There is evidence of difficulties with availability and maintenance of equipment, and some deficits in staff knowledge and skill. Differing orientations towards fetal monitoring were reported by midwives, which were likely to have impacts on practice. The initiation, management, and interpretation of fetal heart monitoring is complex and distributed across time, space, and professional boundaries, and practices in relation to fetal heart monitoring need to be understood within an organisational and social context. Conclusion: Some deviations from best practice guidelines may be rectified through straightforward interventions including improved systems for managing equipment and training. Other deviations from recommended practice need to be understood as the outcomes of complex processes that are likely to defy easy resolution. © RCOG 2006.
Abstract:
Deep hole drilling is one of the most complicated metal cutting processes and one of the most difficult to perform on CNC machine-tools or machining centres under conditions of limited manpower or unmanned operation. This research work investigates aspects of the deep hole drilling process with small diameter twist drills and presents a prototype system for real time process monitoring and adaptive control; two main research objectives are fulfilled in particular. The first objective is the experimental investigation of the mechanics of the deep hole drilling process, using twist drills without internal coolant supply, in the range of diameters Ø2.4 to Ø4.5 mm and working length up to 40 diameters; the definition of the problems associated with the low strength of these tools; and the study of mechanisms of catastrophic failure which manifest themselves well before and along with the classic mechanism of tool wear. The relationships between drilling thrust and torque with the depth of penetration and the various machining conditions are also investigated, and the experimental evidence suggests that the process is inherently unstable at depths beyond a few diameters. The second objective is the design and implementation of a system for intelligent CNC deep hole drilling, the main task of which is to ensure the integrity of the process and the safety of the tool and the workpiece. This task is achieved by interfacing the CNC system of the machine tool to an external computer which performs the following functions: on-line monitoring of the drilling thrust and torque; adaptive control of feed rate, spindle speed and tool penetration (Z-axis); indirect monitoring of tool wear by pattern recognition of variations of the drilling thrust with cumulative cutting time and drilled depth; operation as a database for tools and workpieces; and finally the issuing of alarms and diagnostic messages.
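The supervisory behaviour of such a system can be sketched as a simple control loop that adjusts feed rate when thrust or torque approach preset limits. The limits, sensor values and actions below are hypothetical placeholders; a real implementation would exchange signals with the CNC controller rather than print messages.

# Conceptual sketch of a supervisory loop: thrust and torque readings are
# checked against limits and the feed rate is reduced (or the drill retracted
# for chip clearance) before tool breakage can occur. The limits, sensor
# samples and actions are hypothetical placeholders only.
def supervise(thrust_N: float, torque_Ncm: float, feed_mm_rev: float,
              thrust_limit: float = 450.0, torque_limit: float = 60.0) -> float:
    """Return an adjusted feed rate for the next control cycle."""
    if thrust_N > thrust_limit or torque_Ncm > torque_limit:
        print("limit exceeded -> retract drill to clear chips")
        return feed_mm_rev * 0.5          # resume at reduced feed
    if thrust_N > 0.8 * thrust_limit:
        return feed_mm_rev * 0.9          # adaptive reduction as the load builds up
    return feed_mm_rev

feed = 0.04
for thrust, torque in [(200, 20), (380, 45), (470, 58)]:   # simulated sensor samples
    feed = supervise(thrust, torque, feed)
    print(f"thrust={thrust} N, torque={torque} N*cm -> feed={feed:.3f} mm/rev")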
Abstract:
This thesis set out to develop an objective analysis programme that correlates with subjective grades but has improved sensitivity and reliability in its measures, so that the possibility of early detection and reliable monitoring of changes in anterior ocular surfaces (bulbar hyperaemia, palpebral redness, palpebral roughness and corneal staining) could be increased. The sensitivity of the programme was 20x greater than subjective grading by optometrists. The reliability was found to be optimal (r=1.0), with subjective grading up to 144x more variable (r=0.08). Objective measures were used to create formulae for an overall ‘objective grade’ (per surface) equivalent to those displayed by the CCLRU or Efron scales. The correlation between the formulated objective versus subjective grades was high, with adjusted r2 up to 0.96. Baseline levels of objective grade were investigated over four age groups (5-85 years, n = 120) so that in practice a comparison against the ‘normal limits’ could be made. Differences for bulbar hyperaemia were found between the age groups (p<0.001), and also for palpebral redness and roughness (p<0.001). The objective formulae were then applied to the investigation of diurnal variation in order to account for any change that may affect the baseline. Increases in bulbar hyperaemia and palpebral redness were found between examinations in the morning and evening. Correlation factors were recommended. The programme was then applied to clinical situations in the form of a contact lens trial and an investigation into iritis and keratoconus, where it successfully recognised various surface changes. This programme could become a valuable tool, greatly improving the chances of early detection of anterior ocular abnormalities and facilitating reliable monitoring of disease progression in clinical as well as research environments.
Abstract:
Visual field assessment is a core component of glaucoma diagnosis and monitoring, and the Standard Automated Perimetry (SAP) test is still considered the gold standard of visual field assessment. Although SAP is a subjective assessment and has many pitfalls, it is constantly used in the diagnosis of visual field loss in glaucoma. The multifocal visual evoked potential (mfVEP) is a newly introduced method for objective visual field assessment. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to the standard SAP visual field assessment, and others were not very informative and needed more adjustment and research work. In this study, we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. OBJECTIVES: The purpose of this study is to examine the effectiveness of a new analysis method for the multifocal visual evoked potential (mfVEP) when it is used for the objective assessment of the visual field in glaucoma patients, compared to the gold standard technique. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspect patients (38 eyes). All subjects had two standard Humphrey visual field (HFA) 24-2 tests and a single mfVEP test undertaken in one session. Analysis of the mfVEP results was done using the new analysis protocol, the Hemifield Sector Analysis (HSA) protocol. Analysis of the HFA was done using the standard grading system. RESULTS: Analysis of mfVEP results showed that there was a statistically significant difference between the 3 groups in the mean signal-to-noise ratio (SNR) (ANOVA p<0.001 with a 95% CI). The differences between superior and inferior hemifields were statistically significant in all 11/11 sectors in the glaucoma patient group (t-test p<0.001), partially significant (5/11 sectors, t-test p<0.01) in the glaucoma suspect group, and not significant for most sectors in the normal group (only 1/11 was significant) (t-test p<0.9). Sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86% respectively, while for glaucoma suspects they were 89% and 79%. DISCUSSION: The results showed that the new analysis protocol was able to confirm already existing field defects detected by standard HFA and was able to differentiate between the 3 study groups, with a clear distinction between normal subjects and patients with suspected glaucoma; the distinction between normal subjects and glaucoma patients was especially clear and significant. CONCLUSION: The new HSA protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. Sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucoma field loss.
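The core idea of comparing superior and inferior hemifield sectors can be sketched with simulated per-sector SNR values and paired t-tests; the sector layout and statistics of the actual HSA protocol are not reproduced here.

# Sketch of the hemifield comparison idea behind the HSA protocol: per-sector
# signal-to-noise ratios from the superior hemifield are compared with their
# inferior counterparts. The SNR values are simulated; the actual sector
# definitions and statistics of the HSA protocol are not reproduced here.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_eyes, n_sectors = 36, 11
superior = rng.normal(1.8, 0.4, size=(n_eyes, n_sectors))              # SNR per eye, per superior sector
inferior = superior - rng.normal(0.5, 0.2, size=(n_eyes, n_sectors))   # simulated glaucomatous loss

for sector in range(n_sectors):
    t, p = ttest_rel(superior[:, sector], inferior[:, sector])
    print(f"sector {sector + 1}: t = {t:.2f}, p = {p:.4f}")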
Abstract:
Aims: To establish the sensitivity and reliability of objective image analysis in direct comparison with subjective grading of bulbar hyperaemia. Methods: Images of the same eyes were captured with a range of bulbar hyperaemia caused by vasodilation. The progression was recorded and 45 images extracted. The images were objectively analysed on 14 occasions using previously validated edge-detection and colour-extraction techniques. They were also graded by 14 eye-care practitioners (ECPs) and 14 non-clinicians (NCLs) using the Efron scale. Six ECPs repeated the grading on three separate occasions. Results: Subjective grading was only able to differentiate images with differences in grade of 0.70-1.03 Efron units (sensitivity of 0.30-0.53), compared to 0.02-0.09 Efron units with objective techniques (sensitivity of 0.94-0.99). Significant differences were found between ECPs, and individual repeats were also inconsistent (p<0.001). Objective analysis was 16x more reliable than subjective analysis. The NCLs used wider ranges of the scale but were more variable than ECPs, implying that training may have an effect on grading. Conclusions: Objective analysis may offer a new gold standard in anterior ocular examination, and should be developed further as a clinical research tool to allow more highly powered analysis and to enhance the clinical monitoring of anterior eye disease.
Abstract:
Aim: To examine the use of image analysis to quantify changes in ocular physiology. Method: A purpose designed computer program was written to objectively quantify bulbar hyperaemia, tarsal redness, corneal staining and tarsal staining. Thresholding, colour extraction and edge detection paradigms were investigated. The repeatability (stability) of each technique to changes in image luminance was assessed. A clinical pictorial grading scale was analysed to examine the repeatability and validity of the chosen image analysis technique. Results: Edge detection using a 3 × 3 kernel was found to be the most stable to changes in image luminance (2.6% over a +60 to -90% luminance range) and correlated well with the CCLRU scale images of bulbar hyperaemia (r = 0.96), corneal staining (r = 0.85) and the staining of palpebral roughness (r = 0.96). Extraction of the red colour plane demonstrated the best correlation-sensitivity combination for palpebral hyperaemia (r = 0.96). Repeatability variability was <0.5%. Conclusions: Digital imaging, in conjunction with computerised image analysis, allows objective, clinically valid and repeatable quantification of ocular features. It offers the possibility of improved diagnosis and monitoring of changes in ocular physiology in clinical practice. © 2003 British Contact Lens Association. Published by Elsevier Science Ltd. All rights reserved.
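A minimal sketch of the two measures described above, edge detection with a 3 × 3 kernel (Sobel, as one common choice) and extraction of a relative red colour plane, is given below; the image is a random placeholder and the edge threshold is illustrative.

# Sketch of the two measures described above: edge detection with a 3x3 kernel
# (Sobel, as one common choice) and extraction of the relative red colour
# plane. The image is a random placeholder for a photograph of the bulbar
# conjunctiva; the threshold and scaling are illustrative only.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(480, 640, 3))       # stand-in RGB image

gray = img.mean(axis=2)
edges = np.hypot(ndimage.sobel(gray, axis=0), ndimage.sobel(gray, axis=1))
edge_measure = (edges > 100).mean()                 # fraction of "vessel edge" pixels

r, g, b = img[..., 0], img[..., 1], img[..., 2]
relative_redness = (r / (r + g + b + 1e-9)).mean()  # colour-extraction measure

print(f"edge measure: {edge_measure:.3f}, relative redness: {relative_redness:.3f}")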