5 results for "Fully automated"

in Aston University Research Archive


Relevance: 60.00%

Abstract:

This thesis begins by providing a review of techniques for interpreting the thermal response at the earth's surface acquired using remote sensing technology. Historic limitations in the precision with which imagery acquired from airborne platforms can be geometrically corrected and co-registered have meant that relatively little work has been carried out examining the diurnal variation of surface temperature over wide regions. Although emerging remote sensing systems provide the potential to register temporal image data within satisfactory levels of accuracy, this technology is still not widely available and does not address the issue of historic data sets, which cannot be rectified using conventional parametric approaches. To overcome these problems, the second part of this thesis describes the development of an alternative approach for rectifying airborne line-scanned imagery. The underlying assumption that scan lines within the imagery are straight greatly reduces the number of ground control points required to describe the image geometry. Furthermore, the use of pattern-matching procedures to identify geometric disparities between raw line-scanned imagery and corresponding aerial photography enables the correction procedure to be almost fully automated. By reconstructing the raw image data on a truly line-by-line basis, it is possible to register the airborne line-scanned imagery to the aerial photography with an average accuracy of better than one pixel. Provided corresponding aerial photography is available, this approach can be applied in the absence of platform altitude information, allowing multi-temporal data sets to be corrected and registered.
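
The per-line correction described above lends itself to a simple sketch: match short windows from each raw scan line against the reference aerial photograph, fit a straight-line mapping from the matched disparities, and resample that scan line. The sketch below is illustrative only and is not the thesis's implementation; the window size, match threshold, correlation method (OpenCV's normalised cross-correlation) and the purely horizontal resampling model are assumptions.

```python
# Minimal sketch of per-scan-line rectification by template matching against
# co-registered aerial photography. Window sizes, thresholds and the linear
# warping model are assumptions for illustration, not the thesis's method.
import cv2
import numpy as np

def line_disparities(scan_line_strip, reference, n_points=10, win=32):
    """Estimate offsets of a few points along one raw scan line by normalised
    cross-correlation against the reference aerial photograph."""
    h, w = scan_line_strip.shape
    xs = np.linspace(win, w - win, n_points).astype(int)
    offsets = []
    for x in xs:
        template = scan_line_strip[:, x - win:x + win]
        result = cv2.matchTemplate(reference, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, (bx, by) = cv2.minMaxLoc(result)
        if score > 0.6:                       # keep only confident matches
            offsets.append((x, bx + win - x))  # (column, horizontal disparity)
    return offsets

def rectify_line(raw_line, offsets, out_width):
    """Resample one scan line onto the reference grid using a straight-line
    (linear) column mapping fitted to the matched disparities."""
    cols = np.array([c for c, _ in offsets], dtype=float)
    dxs = np.array([d for _, d in offsets], dtype=float)
    a, b = np.polyfit(cols, cols + dxs, 1)    # target column = a*col + b
    target = np.arange(out_width, dtype=float)
    source = np.clip((target - b) / a, 0, raw_line.size - 1)
    return raw_line[np.round(source).astype(int)]
```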

Relevance: 60.00%

Abstract:

Component-based development (CBD) has become an important emerging topic in the software engineering field. It promises long-sought-after benefits such as increased software reuse, reduced development time to market and, hence, reduced software production cost. Despite this huge potential, the lack of reasoning support and of a development environment for component modeling and verification may hinder its adoption. Methods and tools that can support component model analysis are therefore highly valued by industry. Such tool support should be fully automated as well as efficient. At the same time, the reasoning tool should scale well, since a modern software system may contain hundreds or even thousands of components. Furthermore, a distributed environment that can effectively manage and compose components is also desirable. In this paper, we present an approach to the modeling and verification of a newly proposed component model using Semantic Web languages and their reasoning tools. We use the Web Ontology Language and the Semantic Web Rule Language to precisely capture the inter-relationships and constraints among the entities in a component model. Semantic Web reasoning tools are deployed to perform automated analysis of the component models. Moreover, we also propose a service-oriented architecture (SOA)-based Semantic Web environment for CBD. The adoption of Semantic Web services and SOA makes our component environment more reusable, scalable, dynamic and adaptive.
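
As an illustration of the kind of OWL/SWRL modelling the paper describes, the sketch below captures a toy component model with the owlready2 Python library: components, interfaces, provides/requires relationships, and one SWRL rule that derives dependencies. The ontology IRI, class and property names, individuals and the rule are all invented for the example, and running the reasoner additionally assumes a Java runtime for HermiT; this is not the paper's actual ontology or tooling.

```python
# Toy OWL component model plus one SWRL rule, expressed with owlready2.
# Everything here (IRI, names, rule, individuals) is assumed for illustration.
from owlready2 import Thing, ObjectProperty, Imp, get_ontology, sync_reasoner

onto = get_ontology("http://example.org/component-model.owl")  # hypothetical IRI

with onto:
    class Component(Thing): pass
    class Interface(Thing): pass

    class provides(ObjectProperty):
        domain = [Component]
        range = [Interface]

    class requires(ObjectProperty):
        domain = [Component]
        range = [Interface]

    class dependsOn(ObjectProperty):
        domain = [Component]
        range = [Component]

    # SWRL rule: a component depends on whichever component provides an
    # interface it requires.
    rule = Imp()
    rule.set_as_rule("Component(?c), requires(?c, ?i), "
                     "Component(?p), provides(?p, ?i) -> dependsOn(?c, ?p)")

    # Example individuals
    logger = Component("Logger")
    log_if = Interface("ILog")
    app = Component("App")
    logger.provides = [log_if]
    app.requires = [log_if]

with onto:
    sync_reasoner(infer_property_values=True)  # HermiT, requires Java
print(app.dependsOn)                           # expected: [onto.Logger]
```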

Relevance: 60.00%

Abstract:

The purpose of this work is to establish the application of a fully automated, microfluidic-chip-based protein separation assay in tear analysis. The assay is rapid, requires small sample volumes and is vastly superior to, and more convenient than, comparable conventional gel electrophoresis assays. The protein sizing chip technology was applied to three specific fields of analysis. Firstly, tear samples were collected regularly from subjects to establish the baseline effects of tear stimulation, tear state and patient health. Secondly, tear samples were taken from lens-wearing eyes, and thirdly, the use of microfluidic technology was assessed as a means to investigate a novel area of tear analysis, which we have termed the 'tear envelope'. Utilising the Agilent 2100 Bioanalyzer in combination with the Protein 200 Plus LabChip kit, these studies investigated tear proteins in the range of 14-200 kDa. Particular attention was paid to the relative concentrations of lysozyme, tear lipocalin, secretory IgA (sIgA), IgG and lactoferrin, together with the overall tear electropherogram 'fingerprint'. Furthermore, whilst lens-tear interaction studies are generally thought of as investigations into the effects of tear components on the contact lens material, i.e. deposition studies, this report addresses the reverse phenomenon: the effect of the lens, and particularly the newly inserted lens, on tear fluid composition and dynamics. The use of microfluidic technology provides a significant advance in tear studies and should prove invaluable in tear diagnostics and contact lens performance analysis.
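
By way of illustration only, the short sketch below shows the kind of downstream calculation such an assay enables: turning a hypothetical peak table from a protein sizing run into relative concentrations within the 14-200 kDa window. All protein sizes and concentrations in the example are invented, and the sketch does not reproduce the Bioanalyzer software's own output.

```python
# Illustrative post-processing of a made-up peak table from a chip-based
# protein sizing run: relative concentration of key tear proteins within
# the 14-200 kDa sizing window. Values are assumptions, not measured data.
import pandas as pd

peaks = pd.DataFrame({
    "protein": ["lysozyme", "tear lipocalin", "lactoferrin", "IgG"],
    "size_kda": [14.3, 17.4, 80.0, 150.0],
    "conc_ng_ul": [2200.0, 1600.0, 1800.0, 120.0],
})

# Keep peaks inside the assay's sizing window and report each protein as a
# fraction of the total quantified protein.
in_range = peaks[(peaks.size_kda >= 14) & (peaks.size_kda <= 200)].copy()
in_range["relative_pct"] = 100 * in_range.conc_ng_ul / in_range.conc_ng_ul.sum()
print(in_range[["protein", "relative_pct"]].round(1))
```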

Relevance: 60.00%

Abstract:

Objectives: To determine the best photographic surrogate markers for detecting sight-threatening macular oedema (MO) in people with diabetes attending UK national screening programmes.

Design: A multicentre, prospective, observational cohort study of 3170 patients with photographic signs of diabetic retinopathy visible within the macular region [exudates within two disc diameters, microaneurysms/dot haemorrhages (M/DHs) and blot haemorrhages (BHs)] who were recruited from seven study centres.

Setting: All patients were recruited and imaged at one of seven study centres in Aberdeen, Birmingham, Dundee, Dunfermline, Edinburgh, Liverpool and Oxford.

Participants: Subjects with features of diabetic retinopathy visible within the macular region attending one of seven diabetic retinal screening programmes.

Interventions: Alternative referral criteria for suspected MO based on photographic surrogate markers; an optical coherence tomographic examination in addition to the standard digital retinal photograph.

Main outcome measures: (1) To determine the best method to detect sight-threatening MO in people with diabetes using photographic surrogate markers. (2) Sensitivity and specificity estimates to assess the costs and consequences of using alternative strategies. (3) Modelled long-term costs and quality-adjusted life-years (QALYs).

Results: Prevalence of MO was strongly related to the presence of lesions and was roughly five times higher in subjects with exudates or BHs or more than two M/DHs within one disc diameter. Having worse visual acuity was associated with about a fivefold higher prevalence of MO. Current manual screening grading schemes that ignore visual acuity or the presence of M/DHs could be improved by taking these into account. Health service costs increase substantially with more sensitive/less specific strategies. A fully automated strategy, using the automated detection of patterns of photographic surrogate markers, is superior to all current manual grading schemes for detecting MO in people with diabetes. The addition of optical coherence tomography (OCT) to each strategy, prior to referral, results in a reduction in costs to the health service with no decrement in the number of MO cases detected.

Conclusions: Compared with all current manual grading schemes, for the same sensitivity, a fully automated strategy, using the automated detection of patterns of photographic surrogate markers, achieves a higher specificity for detecting MO in people with diabetes, especially if visual acuity is included in the automated strategy. Overall, costs to the health service are likely to increase if more sensitive referral strategies are adopted over more specific screening strategies for MO, for only very small gains in QALYs. The addition of OCT to each screening strategy, prior to referral, results in a reduction in costs to the health service with no decrement in the number of MO cases detected.

© Queen's Printer and Controller of HMSO 2013.
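
To make the idea of marker-based referral concrete, the sketch below encodes an illustrative referral rule combining the photographic surrogate markers and visual acuity mentioned in the results. The specific thresholds and the way the markers are combined are assumptions made for the example; they are not the study's validated strategy.

```python
# Illustrative (not the study's actual criteria) referral rule combining
# photographic surrogate markers with visual acuity. Thresholds are assumed.
from dataclasses import dataclass

@dataclass
class MacularGrading:
    exudates_within_1dd: bool            # exudates within one disc diameter
    blot_haemorrhages_within_1dd: bool
    mdh_count_within_1dd: int            # microaneurysms/dot haemorrhages
    logmar_acuity: float                 # higher value = worse vision

def refer_for_suspected_mo(g: MacularGrading,
                           acuity_threshold: float = 0.3) -> bool:
    """Return True if the grading pattern suggests referable macular oedema."""
    marker_positive = (g.exudates_within_1dd
                       or g.blot_haemorrhages_within_1dd
                       or g.mdh_count_within_1dd > 2)
    poor_acuity = g.logmar_acuity >= acuity_threshold
    # The abstract reports that both the lesion pattern and worse acuity were
    # associated with roughly fivefold higher MO prevalence, so in this toy
    # rule either one triggers referral.
    return marker_positive or poor_acuity

# Example: more than two M/DHs within one disc diameter triggers referral.
print(refer_for_suspected_mo(MacularGrading(False, False, 3, 0.1)))  # True
```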

Relevance: 30.00%

Abstract:

INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. The requirements were (i) to use open standards for spatial data, such as those developed in the context of the Open Geospatial Consortium (OGC); (ii) to use a suitable environment for statistical modelling and computation; and (iii) to produce an integrated, open-source solution. The system couples an open-source Web Processing Service (developed by 52°North), which accepts data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard), with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have therefore been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
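
INTAMAP's computing back-end is realised in R; purely as an illustration of the core step the service automates, the sketch below uses the Python PyKrige library to fit a variogram and krige a handful of hypothetical point observations onto a grid, returning a prediction and a kriging variance for every cell (the uncertainty that INTAMAP would encode in UncertML). The data and grid are invented for the example, and this stand-in ignores the anisotropy and extreme-value handling described above.

```python
# Minimal Python stand-in for automatic interpolation: ordinary kriging with
# an automatically fitted spherical variogram via PyKrige. Observations and
# grid are hypothetical; INTAMAP's real back-end runs in R.
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical point observations (x, y, measured value)
x = np.array([0.5, 1.8, 3.1, 4.4, 2.2, 3.9])
y = np.array([0.7, 2.5, 1.2, 3.8, 4.1, 0.6])
z = np.array([1.2, 2.4, 1.8, 3.3, 2.9, 1.5])

grid_x = np.linspace(0.0, 5.0, 50)
grid_y = np.linspace(0.0, 5.0, 50)

ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
prediction, variance = ok.execute("grid", grid_x, grid_y)

# 'variance' is the kriging variance per grid cell; a service like INTAMAP
# would report the corresponding error distribution rather than raw arrays.
print(prediction.shape, variance.shape)   # (50, 50) (50, 50)
```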