795 results for surveillance and monitoring
Abstract:
Decomposition of domestic wastes in an anaerobic environment results in the production of landfill gas. Public concern about landfill disposal, and particularly the production of landfill gas, has been heightened over the past decade. This has been due in large part to the increased quantities of gas being generated as a result of modern disposal techniques, and also to their increasing effect on modern urban developments. In order to avert disasters, effective means of preventing gas migration are required. This, in turn, requires accurate detection and monitoring of gas in the subsurface. Point sampling techniques have many drawbacks, and accurate measurement of gas is difficult. Some of the disadvantages of these techniques could be overcome by assessing the impact of gas on biological systems. This research explores the effects of landfill gas on plants, and hence on the spectral response of vegetation canopies. Examination of the landfill gas/vegetation relationship is covered, both by review of the literature and by statistical analysis of field data. The work showed that, although vegetation health was related to landfill gas, it was not possible to define a simple correlation. In the landfill environment, contributions from other variables, such as soil characteristics, frequently confused the relationship. Two sites, contrasting in terms of the data available, site conditions, and the degree of damage to vegetation, are investigated in detail. Gas migration at the Panshanger site was dominantly upwards, affecting crops being grown on the landfill cap. The injury was expressed as an overall decline in plant health. Discriminant analysis was used to account for the variations in plant health, and hence the differences in spectral response of the crop canopy, using a combination of soil and gas variables. Damage to both woodland and crops at the Ware site was severe, and could be easily related to the presence of gas.
Air photographs, aerial video, and airborne thematic mapper data were used to identify damage to vegetation and relate it to soil type. The utility of different sensors for this type of application is assessed, and possible improvements that could lead to more widespread use are identified. Situations in which remote sensing data could be combined with ground survey are described, and a possible methodology for integrating the two approaches is suggested.
Abstract:
The diagnosis and monitoring of ocular disease present considerable clinical difficulties for two main reasons: i) the substantial physiological variation in the anatomical structure of the visual pathway, and ii) constraints due to technical limitations of diagnostic hardware. These are further confounded by difficulties in detecting early loss or change in visual function due to the masking of disease effects, for example by the high degree of redundancy in nerve fibre number along the visual pathway. This thesis addresses these issues across three areas of study:

1. Factors influencing retinal thickness measures and their clinical interpretation. As the retina is the principal anatomical site for damage associated with visual loss, objective measures of retinal thickness and retinal nerve fibre layer thickness are key to the detection of pathology. In this thesis the ability of optical coherence tomography (OCT) to provide repeatable and reproducible measures of retinal structure at the macula and optic nerve head is investigated. In addition, the normal physiological variations in retinal thickness and retinal nerve fibre layer thickness are explored. Principal findings were:
• Macular retinal thickness and optic nerve head measurements are repeatable and reproducible for normal subjects and diseased eyes
• Macular and retinal nerve fibre layer thickness around the optic nerve correlate negatively with axial length, suggesting that larger eyes have thinner retinae, potentially making them more susceptible to damage or disease
• Foveolar retinal thickness increases with age while retinal nerve fibre layer thickness around the optic nerve head decreases with age. Such findings should be considered during examination of the eye with suspected pathology and in long-term disease monitoring

2. Impact of glucose control on retinal anatomy and function in diabetes. Diabetes is a major health concern in the UK and worldwide, and diabetic retinopathy is a major cause of blindness in the working population. Objective, quantitative measurements of retinal thickness, particularly at the macula, provide essential information regarding disease progression and the efficacy of treatment. Functional vision loss in diabetic patients is commonly observed in clinical and experimental studies and is thought to be affected by blood glucose levels. In the first study of its kind, the short-term impact of fluctuations in blood glucose levels on retinal structure and function over a 12-hour period in patients with diabetes is investigated. Principal findings were:
• Acute fluctuations in blood glucose levels are greater in diabetic patients than in normal subjects
• Fluctuations in blood glucose levels affect contrast sensitivity scores, SWAP visual fields, intraocular pressure and diastolic pressure. This effect is similar for type 1 and type 2 diabetic patients despite the differences in their physiological status
• Long-term metabolic control in the diabetic patient is a useful predictor of fluctuation in contrast sensitivity scores
• Large fluctuations in blood glucose levels and/or visual function and structure may be indicative of an increased risk of development or progression of retinopathy

3. Structural and functional damage of the visual pathway in glaucomatous optic neuropathy. The glaucomatous eye undergoes a number of well documented pathological changes, including retinal nerve fibre loss and optic nerve head damage, which are correlated with loss of functional vision. In experimental glaucoma there is evidence that glaucomatous damage extends from retinal ganglion cells in the eye, along the visual pathway, to vision centres in the brain. This thesis explores the effects of glaucoma on retinal nerve fibre layer thickness, ocular anterior anatomy and cortical structure, and their correlates with visual function in humans. Principal findings were:
• In the retina, glaucomatous retinal nerve fibre layer loss is less marked with increasing distance from the optic nerve head, suggesting that RNFL examination at a greater distance than traditionally employed may provide invaluable early indicators of glaucomatous damage
• Neuroretinal rim area and retrobulbar optic nerve diameter are strong indicators of visual field loss
• Grey matter density decreases at a rate of 3.85% per decade; there was no clear evidence of a disease effect
• Cortical activation as measured by fMRI was a strong indicator of functional damage in patients with significant neuroretinal rim loss despite relatively modest visual field defects

These investigations have shown that the effects of senescence are evident in both the anterior and posterior visual pathway. A variety of anatomical and functional diagnostic protocols for the investigation of damage to the visual pathway in ocular disease is required to maximise understanding of the disease processes and thereby optimise patient care.
Abstract:
This is an article about Sarah's sexual teenage journey, seen through the lens of her mother, the author. It tackles learning disability, sexual experimentation, education, governance and responsibility. By using an autoethnographic method, the article speaks personally to these intimate lived experiences; yet, broadly and contextually, these issues can give further insight into the difficult social processes that permeate the surveillance and control of sexual activity amongst a particular group of adults (young, learning disabled): by way of legal practice and sex education; family practices and the negotiation of power and control over sexual activity; and sexual citizenship and rights to a sexual identity.
Abstract:
This paper draws upon activity theory to analyse an empirical investigation of the micro practices of strategy in three UK universities. Activity theory provides a framework of four interactive components from which strategy emerges: the collective structures of the organization; the primary actors, in this research conceptualized as the top management team (TMT); the practical activities in which they interact; and the strategic practices through which interaction is conducted. Using this framework, the paper focuses specifically on the formal strategic practices involved in direction setting, resource allocation, and monitoring and control. These strategic practices are associated with continuity of strategic activity in one case study but are involved in the reinterpretation and change of strategic activity in the other two cases. We model this finding into activity theory-based typologies of the cases that illustrate the way that practices either distribute shared interpretations or mediate between contested interpretations of strategic activity. The typologies explain the relationships between strategic practices and continuity and change of strategy as practice. The paper concludes by linking activity theory to wider change literatures to illustrate its potential as an integrative methodological framework for examining the subjective and emergent processes through which strategic activity is constructed. © Blackwell Publishing Ltd 2003.
Abstract:
Two alternative work designs are identified for operators of stand-alone advanced manufacturing technology (AMT). In the case of specialist control, operators are limited to running and monitoring the technology, with operating problems handled by specialists, such as engineers. In the case of operator control, operators are given much broader responsibilities and deal directly with the majority of operating problems encountered. The hypothesis that operator control would promote better performance and psychological well-being than would specialist control (which is more prevalent) was tested in a longitudinal field study involving work redesign for operators of computer-controlled assembly machines. Change from specialist to operator control reduced downtime, especially for high-variance systems, and was associated with greater intrinsic job satisfaction and less perceived work pressure. The implications of these findings for both small- and large-scale applications of AMT are discussed.
Abstract:
Visual field assessment is a core component of glaucoma diagnosis and monitoring, and Standard Automated Perimetry (SAP) is still considered the gold standard of visual field assessment. Although SAP is a subjective assessment with many pitfalls, it is used constantly in the diagnosis of visual field loss in glaucoma. The multifocal visual evoked potential (mfVEP) is a newly introduced method for objective visual field assessment. Several analysis protocols have been tested to identify early visual field loss in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to standard SAP visual field assessment, while others were less informative and required further adjustment and research. In this study, we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. OBJECTIVES: The purpose of this study is to examine the effectiveness of a new analysis method for the multifocal visual evoked potential (mfVEP) when used for objective assessment of the visual field in glaucoma patients, compared to the gold standard technique. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspect patients (38 eyes). All subjects had two standard Humphrey visual field (HFA) 24-2 tests and a single mfVEP test undertaken in one session. Analysis of the mfVEP results was done using the new analysis protocol, the Hemifield Sector Analysis (HSA) protocol. Analysis of the HFA was done using the standard grading system. RESULTS: Analysis of mfVEP results showed a statistically significant difference between the 3 groups in the mean signal-to-noise ratio (SNR) (ANOVA p<0.001 with a 95% CI). The difference between superior and inferior hemifields was statistically significant in all 11/11 sectors in the glaucoma patient group (t-test p<0.001), partially significant in the glaucoma suspect group (5/11 sectors, t-test p<0.01), and not significant for most sectors in the normal group (only 1/11 sectors was significant; t-test p<0.9). Sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86% respectively, and for glaucoma suspects 89% and 79%. DISCUSSION: The results showed that the new analysis protocol was able to confirm existing field defects detected by standard HFA and to differentiate between the 3 study groups, with a clear distinction between normal subjects and patients with suspected glaucoma; the distinction between normal subjects and glaucoma patients was especially clear and significant. CONCLUSION: The new HSA protocol used in mfVEP testing can detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. The protocol provides information about focal visual field differences across the horizontal midline, which can be used to differentiate between glaucoma and normal subjects. Sensitivity and specificity of the mfVEP test were very promising and correlated with other anatomical changes in glaucomatous field loss.
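The sensitivity and specificity figures reported above follow the standard confusion-matrix definitions; a minimal sketch (the counts below are illustrative only, not the study's raw data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: 36 glaucoma eyes, 38 normal eyes
sens, spec = sensitivity_specificity(tp=35, fn=1, tn=33, fp=5)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```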
Abstract:
Previous research suggests that many eating behaviours are stable in children but that obesogenic eating behaviours tend to increase with age. This research explores the stability (consistency in individual levels over time) and continuity (consistency in group levels over time) of child eating behaviours and parental feeding practices in children between 2 and 5 years of age. Thirty-one participants completed measures of child eating behaviours, parental feeding practices and child weight at 2 and 5 years of age. Child eating behaviours and parental feeding practices remained stable between 2 and 5 years of age. There was also good continuity in measures of parental restriction and monitoring of food intake, as well as in mean levels of children's eating behaviours and BMI over time. Mean levels of maternal pressure to eat significantly increased, whilst mean levels of desire to drink significantly decreased, between 2 and 5 years of age. These findings suggest that children's eating behaviours are stable and continuous in the period prior to 5 years of age. Further research is necessary to replicate these findings and to explore why later developmental increases are seen in children's obesogenic eating behaviours. © 2011 Elsevier Ltd.
Abstract:
Question/Issue: We combine agency and institutional theory to explain the division of equity shares between the foreign (majority) and local (minority) partners within foreign affiliates. We posit that once the decision to invest is made, the ownership structure is arranged so as to generate appropriate incentives to local partners, taking into account both the institutional environment and the firm-specific difficulty in monitoring. Research Findings/Insights: Using a large firm-level dataset for the period 2003-2011 from 16 Central and Eastern European countries and applying selectivity corrected estimates, we find that both weaker host country institutions and higher share of intangible assets in total assets in the firm imply higher minority equity share of local partners. The findings hold when controlling for host country effects and when the attributes of the institutional environment are instrumented. Theoretical/Academic Implications: The classic view is that weak institutions lead to concentrated ownership, yet it leaves the level of minority equity shares unexplained. Our contribution uses a firm-level perspective combined with national-level variation in the institutional environment, and applies agency theory to explain the minority local partner share in foreign affiliates. In particular, we posit that the information asymmetry and monitoring problem in firms are exacerbated by weak host country institutions, but also by the higher share of intangible assets in total assets. Practitioner/Policy Implications: Assessing investment opportunities abroad, foreign firms need to pay attention not only to features directly related to corporate governance (e.g., bankruptcy codes) but also to the broad institutional environment. In weak institutional environments, foreign parent firms need to create strong incentives for local partners by offering them significant minority shares in equity. 
The same recommendation applies to firms with higher shares of intangible assets in total assets. © 2014 The Authors.
Abstract:
Surgical site infections (SSI) are a prevalent health care-associated infection (HAI). Prior to the mid-19th century, surgical sites commonly developed postoperative wound complications. It was in the 1860s, after Joseph Lister introduced carbolic acid and the principles of antisepsis, that postoperative wound infection significantly decreased. Today, patient preoperative skin preparation with an antiseptic agent prior to surgery is a standard of practice. Povidone-iodine and chlorhexidine gluconate are currently the antimicrobial agents most commonly used to prep the patient's skin. In this study, the epidemiology, diagnosis, surveillance and prevention of SSI with chlorhexidine were investigated, and the antimicrobial activity of chlorhexidine was assessed. In in-vitro and in-vivo studies, the antimicrobial efficacy of 2% (w/v) chlorhexidine gluconate (CHG) in 70% isopropyl alcohol (IPA) and of 10% povidone-iodine (PVP-I) in the presence of 0.9% normal saline or blood was examined. The antimicrobial activity of the 2% CHG in 70% IPA solution was not diminished in the presence of 0.9% normal saline or blood. In comparison, the antimicrobial activity of the traditional patient preoperative skin preparation, 10% PVP-I, was not diminished in the presence of 0.9% normal saline, but was diminished in the presence of blood. In an in-vivo human volunteer study, the potential reduction in antimicrobial efficacy of aqueous patient preoperative skin preparations compromised by mechanical removal of wet product from the application site (blotting) was assessed. In this evaluation, 2% CHG and 10% povidone-iodine (PVP-I) were blotted from the patient's skin after application to the test site. The blotting, or mechanical removal, of the wet antiseptic from the application site did not produce a significant difference in product efficacy. In a clinical trial to compare 2% CHG in 70% IPA and PVP-I
scrub and paint patient preoperative skin preparations for the prevention of SSI, 849 patients were randomly assigned to the study groups (409 in the chlorhexidine-alcohol group and 440 in the povidone-iodine group) in the intention-to-treat analysis. The overall surgical site infection rate was significantly lower in the 2% CHG in 70% IPA group than in the PVP-I group (9.5% versus 16.1%, p=0.004; relative risk, 0.59 with 95% confidence interval of 0.41 to 0.85). Preoperative cleansing of the patient's skin with chlorhexidine-alcohol is superior to povidone-iodine in preventing surgical site infection after clean-contaminated surgery.
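The relative risk quoted above is simply the ratio of the two arms' infection rates; a quick check (infection counts back-derived from the reported percentages and group sizes, so approximate):

```python
# Counts back-derived from the reported rates (39/409 ≈ 9.5%, 71/440 ≈ 16.1%)
chg_infections, chg_n = 39, 409
pvp_infections, pvp_n = 71, 440

risk_chg = chg_infections / chg_n
risk_pvp = pvp_infections / pvp_n
relative_risk = risk_chg / risk_pvp
print(f"CHG {risk_chg:.1%} vs PVP-I {risk_pvp:.1%}, RR={relative_risk:.2f}")
```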
Abstract:
Energy dissipation and fatigue properties of nano-layered thin films are less well studied than bulk properties. Existing experimental methods for studying energy dissipation properties, typically using magnetic interaction as a driving force at different frequencies and a laser-based deformation measurement system, are difficult to apply to two-dimensional materials. We propose a novel experimental method to perform dynamic testing on thin-film materials by driving a cantilever specimen at its fixed end with a bimorph piezoelectric actuator and monitoring the displacements of the specimen and the actuator with a fibre-optic system. Upon vibration, the specimen is greatly affected by its inertia, and behaves as a cantilever beam under base excitation in translation. At resonance, this method resembles the vibrating reed method conventionally used in the viscoelasticity community. The loss tangent is obtained from both the width of a resonance peak and a free-decay process. As for fatigue measurement, we implement a control algorithm into LabView to maintain maximum displacement of the specimen during the course of the experiment. The fatigue S-N curves are obtained.
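The two loss-tangent estimates described above have standard small-damping forms: from the half-power width of the resonance peak, tan δ ≈ Δf/f₀ = 1/Q; from a free-decay trace, tan δ ≈ Λ/π, where Λ is the logarithmic decrement of successive peak amplitudes. A sketch under those textbook assumptions (not the authors' actual code):

```python
import math

def loss_tangent_from_peak(f0, f_lower, f_upper):
    """tan(delta) ~ (f_upper - f_lower) / f0, the inverse quality factor,
    with f_lower/f_upper the half-power (-3 dB) frequencies."""
    return (f_upper - f_lower) / f0

def loss_tangent_from_decay(peak_amplitudes):
    """tan(delta) ~ Lambda / pi, Lambda the mean logarithmic decrement
    of successive peak amplitudes in free decay."""
    decs = [math.log(a / b) for a, b in zip(peak_amplitudes, peak_amplitudes[1:])]
    return (sum(decs) / len(decs)) / math.pi

print(loss_tangent_from_peak(1000.0, 999.0, 1001.0))  # 0.002
print(loss_tangent_from_decay([1.0, 0.99, 0.9801]))   # constant amplitude ratio 0.99
```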
Abstract:
Aim: To examine the use of image analysis to quantify changes in ocular physiology. Method: A purpose designed computer program was written to objectively quantify bulbar hyperaemia, tarsal redness, corneal staining and tarsal staining. Thresholding, colour extraction and edge detection paradigms were investigated. The repeatability (stability) of each technique to changes in image luminance was assessed. A clinical pictorial grading scale was analysed to examine the repeatability and validity of the chosen image analysis technique. Results: Edge detection using a 3 × 3 kernel was found to be the most stable to changes in image luminance (2.6% over a +60 to -90% luminance range) and correlated well with the CCLRU scale images of bulbar hyperaemia (r = 0.96), corneal staining (r = 0.85) and the staining of palpebral roughness (r = 0.96). Extraction of the red colour plane demonstrated the best correlation-sensitivity combination for palpebral hyperaemia (r = 0.96). Repeatability variability was <0.5%. Conclusions: Digital imaging, in conjunction with computerised image analysis, allows objective, clinically valid and repeatable quantification of ocular features. It offers the possibility of improved diagnosis and monitoring of changes in ocular physiology in clinical practice. © 2003 British Contact Lens Association. Published by Elsevier Science Ltd. All rights reserved.
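For illustration, a 3 × 3 edge-detection pass of the kind assessed above can be sketched as a small convolution; the Sobel kernels below are one common choice (the paper does not name the specific kernel used):

```python
# Sobel 3x3 kernels (a common choice; the study specifies only "a 3x3 kernel")
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_magnitude(img):
    """Gradient magnitude of a 2D grayscale image via 3x3 convolution."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge produces a strong response along the boundary
img = [[0, 0, 255, 255]] * 4
print(edge_magnitude(img)[1][1], edge_magnitude(img)[1][2])  # 1020.0 1020.0
```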
Abstract:
Bladder cancer is among the most common cancers worldwide (4th in men). It is responsible for high patient morbidity and displays rapid recurrence and progression. Lack of sensitivity of gold standard techniques (white light cystoscopy, voided urine cytology) means many early treatable cases are missed. The result is a large number of advanced cases of bladder cancer which require extensive treatment and monitoring. For this reason, bladder cancer is the single most expensive cancer to treat on a per patient basis. In recent years, autofluorescence spectroscopy has begun to shed light into disease research. Of particular interest in cancer research are the fluorescent metabolic cofactors NADH and FAD. Early in tumour development, cancer cells often undergo a metabolic shift (the Warburg effect) resulting in increased NADH. The ratio of NADH to FAD ("redox ratio") can therefore be used as an indicator of the metabolic status of cells. Redox ratio measurements have been used to differentiate between healthy and cancer breast cells and to monitor cellular responses to therapies. Here, we have demonstrated, using healthy and bladder cancer cell lines, a statistically significant difference in the redox ratio of bladder cancer cells, indicative of a metabolic shift. To do this we customised a standard flow cytometer to excite and record fluorescence specifically from NADH and FAD, along with a method for automatically calculating the redox ratio of individual cells within large populations. These results could inform the design of novel probes and screening systems for the early detection of bladder cancer.
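The per-cell redox ratio computed by the automated method is, as defined above, the NADH channel intensity divided by the FAD channel intensity for each cell; a minimal sketch with hypothetical intensities (not the study's measurements):

```python
# Hypothetical per-cell fluorescence intensities from the NADH and FAD channels
nadh = [120.0, 95.0, 140.0, 110.0]
fad = [60.0, 80.0, 50.0, 70.0]

def redox_ratios(nadh_intensities, fad_intensities):
    """Per-cell optical redox ratio, NADH/FAD."""
    return [n / f for n, f in zip(nadh_intensities, fad_intensities)]

ratios = redox_ratios(nadh, fad)
mean_ratio = sum(ratios) / len(ratios)
print([round(r, 2) for r in ratios], round(mean_ratio, 2))
```

Comparing the distribution of such per-cell ratios between two populations is what allows a metabolic shift (e.g. the Warburg effect's elevated NADH) to be detected statistically.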
Abstract:
This paper deals with communicational breakdowns and misunderstandings in computer mediated communication (CMC) and ways to recover from them or to prevent them. The paper describes a case study of CMC conducted in a company named Artigiani. We observed communication and conducted content analysis of e-mail messages, focusing on message exchanges between customer service representatives (CSRs) and their contacts. In addition to task management difficulties, we identified communication breakdowns that result from differences between perspectives, and from the lack of contextual information, mainly technical background and professional jargon at the customers’ side. We examined possible ways to enhance CMC and accordingly designed a prototype for an e-mail user interface that emphasizes a communicational strategy called contextualization as a central component for obtaining effective communication and for supporting effective management and control of organizational activities, especially handling orders, price quoting, and monitoring the supply and installation of products.
Abstract:
Due to dynamic variability, identifying the specific conditions under which non-functional requirements (NFRs) are satisfied may only be possible at runtime. Therefore, it is necessary to consider the dynamic treatment of relevant information during the requirements specification. The associated data can be gathered by monitoring the execution of the application and its underlying environment to support reasoning about how the current application configuration is fulfilling the established requirements. This paper presents a dynamic decision-making infrastructure to support both NFR representation and monitoring, and to reason about the degree of satisfaction of NFRs during runtime. The infrastructure is composed of: (i) an extended feature model aligned with a domain-specific language for representing NFRs to be monitored at runtime; (ii) a monitoring infrastructure to continuously assess NFRs at runtime; and (iii) a flexible decision-making process to select the best available configuration based on the satisfaction degree of the NFRs. The evaluation of the approach has shown that it is able to choose application configurations that fit user NFRs well based on runtime information. The evaluation also revealed that the proposed infrastructure provided consistent indicators regarding the best application configurations that fit user NFRs. Finally, a benefit of our approach is that it allows us to quantify the level of satisfaction with respect to the NFR specification.
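The selection step of such a decision-making process can be sketched as ranking the available configurations by the fraction of NFRs their monitored metrics currently satisfy. The metric names and thresholds below are invented for illustration, not taken from the paper:

```python
# Hypothetical monitored metrics for the available configurations
configs = {
    "config_a": {"latency_ms": 120, "availability": 0.999},
    "config_b": {"latency_ms": 80, "availability": 0.996},
}
# NFR thresholds: "max" = must not exceed, "min" = must not fall below
nfrs = {"latency_ms": ("max", 100), "availability": ("min", 0.995)}

def satisfaction_degree(metrics, nfrs):
    """Fraction of NFRs the configuration's current metrics satisfy."""
    satisfied = 0
    for name, (kind, threshold) in nfrs.items():
        value = metrics[name]
        if (kind == "max" and value <= threshold) or \
           (kind == "min" and value >= threshold):
            satisfied += 1
    return satisfied / len(nfrs)

best = max(configs, key=lambda c: satisfaction_degree(configs[c], nfrs))
print(best, satisfaction_degree(configs[best], nfrs))  # config_b 1.0
```

A real infrastructure would recompute these degrees continuously from runtime monitoring data and switch configurations when the ranking changes.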
Abstract:
For the treatment and monitoring of Parkinson's disease (PD) to be scientific, a key requirement is that measurement of disease stages and severity is quantitative, reliable, and repeatable. The last 50 years in PD research have been dominated by qualitative, subjective ratings obtained by human interpretation of the presentation of disease signs and symptoms at clinical visits. More recently, “wearable,” sensor-based, quantitative, objective, and easy-to-use systems for quantifying PD signs for large numbers of participants over extended durations have been developed. This technology has the potential to significantly improve both clinical diagnosis and management in PD and the conduct of clinical studies. However, the large-scale, high-dimensional character of the data captured by these wearable sensors requires sophisticated signal processing and machine-learning algorithms to transform it into scientifically and clinically meaningful information. Such algorithms that “learn” from data have shown remarkable success in making accurate predictions for complex problems in which human skill has been required to date, but they are challenging to evaluate and apply without a basic understanding of the underlying logic on which they are based. This article contains a nontechnical tutorial review of relevant machine-learning algorithms, also describing their limitations and how these can be overcome. It discusses implications of this technology and a practical road map for realizing the full potential of this technology in PD research and practice. © 2016 International Parkinson and Movement Disorder Society.