46 results for Retinal image quality
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement and at low cost.
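The reported sensitivity and specificity follow the standard confusion-matrix definitions. A minimal sketch of the calculation (the counts below are invented for illustration and are not the actual UK Biobank results):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts for an 800-image subset, for illustration only.
sens, spec = sensitivity_specificity(tp=143, fn=7, tn=585, fp=65)
print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")
```

Here "positive" means an inadequate image, so sensitivity measures how reliably unusable images are caught before morphometric analysis.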
Abstract:
Objectives: To improve the integration of MRI with radiotherapy treatment planning, our department fabricated a flat couch top for our MR scanner. Setting up with this couch top meant that the patients were physically higher in the scanner and, posteriorly, a gap was introduced between the patient and the radiofrequency coil.
Abstract:
This study was carried out to investigate whether the electronic portal imaging (EPI) acquisition process could be optimized, and as a result tolerance and action levels be set for the PIPSPro QC-3V phantom image quality assessment. The aim of the optimization process was to reduce the dose delivered to the patient while maintaining a clinically acceptable image quality. This is of interest when images are acquired in addition to the planned patient treatment, rather than being acquired using the treatment field during a patient's treatment. A series of phantoms were used to assess image quality for different acquisition settings relative to the baseline values obtained following acceptance testing. Eight Varian aS500 EPID systems on four matched Varian 600C/D linacs and four matched Varian 2100C/D linacs were compared for consistency of performance, and images were acquired at the four main orthogonal gantry angles. Images were acquired using a 6 MV beam operating at 100 MU min⁻¹ and the low-dose acquisition mode. Doses used in the comparison were measured using a Farmer ionization chamber placed at d(max) in solid water. The results demonstrated that the number of reset frames did not have any influence on the image contrast, but the number of frame averages did. The expected increase in noise with corresponding decrease in contrast was also observed when reducing the number of frame averages. The optimal settings for the low-dose acquisition mode with respect to image quality and dose were found to be one reset frame and three frame averages. All patients at the Northern Ireland Cancer Centre are now imaged using one reset frame and three frame averages in the 6 MV 100 MU min⁻¹ low-dose acquisition mode. Routine EPID QC contrast tolerance (±10) and action (±20) levels using the PIPSPro phantom, based around expected values of 190 (Varian 600C/D) and 225 (Varian 2100C/D), have been introduced. The dose at d(max) from electronic portal imaging has been reduced by approximately 28%, and while the image quality has been reduced, the images produced are still clinically acceptable.
Abstract:
In this paper, we propose a system level design approach considering voltage over-scaling (VOS) that achieves error resiliency using unequal error protection of different computation elements, while incurring minor quality degradation. Depending on user specifications and the severity of process variations/channel noise, the degree of VOS in each block of the system is adaptively tuned to ensure minimum system power while providing "just-the-right" amount of quality and robustness. This is achieved by taking into consideration system level interactions and ensuring that, under any change of operating conditions, only the "less crucial" computations, which contribute less to block/system output quality, are affected. The design methodology applied to a DCT/IDCT system shows large power benefits (up to 69%) at reasonable image quality while tolerating errors induced by varying operating conditions (VOS, process variations, channel noise). Interestingly, the proposed IDCT scheme conceals channel noise at scaled voltages. ©2009 IEEE.
Abstract:
A search for a submerged jet ski and the lost limb of its driver involved in a collision with a speedboat was made in a shallow lake in Northern Ireland. The location of both was crucial to establishing events at the time of the accident. Local intelligence suggested both objects were likely to be partially buried by lacustrine silt. To avoid sediment churning, this required non-invasive, completely non-destructive assessment and mapping of the scene. A MALA RAMAC ground-penetrating radar (GPR) system, mounted on floats for surveying from walkways and jetties or placed in a small rubber dinghy for offshore profiling, was used. A grid was established and each line surveyed with 100, 200 and 400 MHz antennae. In waters over 6 m deep, GPR data showed the form of the lake floor but excessive ringing occurred in the data. In waters less than 6 m deep, ringing diminished on both 100 and 200 MHz data, the latter displaying the best trade-off between depth penetration and horizontal object resolution. 400 MHz data failed to be of use in waters over 2 m deep, and at these depths showed only limited improvement in image quality compared to 200 MHz data. Surface objects such as a wooden walkway caused interference on 200 and 400 MHz data when antennae were oriented both normal and parallel to the survey direction; this may be a function of the low attenuation of radar waves in freshwater, allowing excellent lateral and vertical radar wave penetration. On 200 MHz data the damaged jet ski was clearly imaged in a location that contradicted the speedboat driver's account of the accident.
Abstract:
Images of the site of the Type Ic supernova (SN) 2002ap taken before explosion were analysed previously by Smartt et al. We have uncovered new unpublished, archival pre-explosion images from the Canada-France-Hawaii Telescope (CFHT) that are vastly superior in depth and image quality. In this paper we present a further search for the progenitor star of this unusual Type Ic SN. Aligning high-resolution Hubble Space Telescope observations of the SN itself with the archival CFHT images allowed us to pinpoint the location of the progenitor site on the ground-based observations. We find that a source visible in the B- and R-band pre-explosion images close to the position of the SN (1) is not coincident with the SN position within the uncertainties of our relative astrometry and (2) is still visible ~4.7 yr post-explosion in late-time observations taken with the William Herschel Telescope. We therefore conclude that it is not the progenitor of SN 2002ap. We derived absolute limiting magnitudes for the progenitor of M_B ≥ -4.2 ± 0.5 and M_R ≥ -5.1 ± 0.5. These are the deepest limits yet placed on a Type Ic SN progenitor. We rule out all massive stars with initial masses greater than 7-8 M☉ (the lower mass limit for stars to undergo core collapse) that have not evolved to become Wolf-Rayet stars. This is consistent with the prediction that Type Ic SNe should result from the explosions of Wolf-Rayet stars. Comparing our luminosity limits with stellar models of single stars at appropriate metallicity (Z = 0.008) and with standard mass-loss rates, we find no model that produces a Wolf-Rayet star of low enough mass and luminosity to be classed as a viable progenitor. Models with twice the standard mass-loss rates provide possible single star progenitors, but all are initially more massive than 30-40 M☉. We conclude that any single star progenitor must have experienced at least twice the standard mass-loss rates, been initially more massive than 30-40 M☉ and exploded as a Wolf-Rayet star of final mass 10-12 M☉. Alternatively, a progenitor star of lower initial mass may have evolved in an interacting binary system. Mazzali et al. propose such a binary scenario for the progenitor of SN 2002ap, in which a star of initial mass 15-20 M☉ is stripped by its binary companion, becoming a 5 M☉ Wolf-Rayet star prior to explosion. We constrain any possible binary companion to a main-sequence star of
Abstract:
The objective of this multicentre study was to undertake a systematic comparison of face-to-face consultations and teleconsultations performed using low-cost videoconferencing equipment. One hundred and twenty-six patients were enrolled by their general practitioners across three sites. Each patient underwent a teleconsultation with a distant dermatologist followed by a traditional face-to-face consultation with a dermatologist. The main outcome measures were diagnostic concordance rates, management plans and patient and doctor satisfaction. One hundred and fifty-five diagnoses were identified by the face-to-face consultations from the sample of 126 patients. Identical diagnoses were recorded from both types of consultation in 59% of cases. Teledermatology consultations missed a secondary diagnosis in 6% of cases and were unable to make a useful diagnosis in 11% of cases. Wrong diagnoses were made by the teledermatologist in 4% of cases. Dermatologists were able to make a definitive diagnosis by face-to-face consultation in significantly more cases than by teleconsultation (P = 0.001). Where both types of consultation resulted in a single diagnosis there was a high level of agreement (kappa = 0.96; 95% confidence limits 0.91-1.00). Overall follow-up rates from both types of consultation were almost identical. Fifty per cent of patients seen could have been managed using a single videoconferenced teleconsultation without any requirement for further specialist intervention. Patients reported high levels of satisfaction with the teleconsultations. General practitioners reported that 75% of the teleconsultations were of educational benefit. This study illustrates the potential of telemedicine to diagnose and manage dermatology cases referred from primary care.
Once the problem of image quality has been addressed, further studies will be required to investigate the cost-effectiveness of a teledermatology service and the potential consequences for the provision of dermatological services in the U.K.
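The agreement statistic quoted in this abstract is Cohen's kappa, which corrects raw agreement between two raters for the agreement expected by chance. A self-contained sketch (the diagnosis labels below are invented for illustration, not study data):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    # Proportion of cases where the two raters agree.
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    categories = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Invented example: a teledermatologist vs. a face-to-face dermatologist on 4 patients.
tele = ["eczema", "psoriasis", "eczema", "acne"]
face = ["eczema", "psoriasis", "acne", "acne"]
print(round(cohens_kappa(tele, face), 3))
```

A kappa of 0.96, as reported above, indicates almost perfect agreement on the cases where both consultation types produced a single diagnosis.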
Abstract:
Ambisonics is a spatial audio technique that attempts to recreate a physical sound field over as large an area as possible. Higher Order Ambisonic (HOA) systems modelled with near-field loudspeakers in free field, as well as in a simulated room, are investigated. The influence of reflections on the image quality is analysed objectively for both a studio-sized and a large reproduction environment using the relative intensity of the reproduced sound field. The results of a simulated enclosed HOA system in the studio-sized room are compared to sound field measurements in the reproduced area.
Abstract:
The aim of this paper is to demonstrate the applicability and effectiveness of a computationally demanding stereo matching algorithm on different low-cost and low-complexity embedded devices, focusing on the analysis of timing and image quality performance. Various optimizations have been implemented to allow its deployment on specific hardware architectures while decreasing memory and processing time requirements: (1) reduction of color channel information and resolution for input images; (2) low-level software optimizations such as parallel computation, replacement of function calls or loop unrolling; (3) reduction of redundant data structures and internal data representation. The feasibility of a stereovision system on a low-cost platform is evaluated using standard datasets and images taken from Infra-Red (IR) cameras. Analysis of the resulting disparity map accuracy with respect to a full-size dataset is performed, as well as testing of suboptimal solutions.
Abstract:
In this paper, we present a novel discrete cosine transform (DCT) architecture that allows aggressive voltage scaling for low-power dissipation, even under process parameter variations with minimal overhead as opposed to existing techniques. Under a scaled supply voltage and/or variations in process parameters, any possible delay errors appear only from the long paths that are designed to be less contributive to output quality. The proposed architecture allows a graceful degradation in the peak SNR (PSNR) under aggressive voltage scaling as well as extreme process variations. Results show that even under large process variations (±3σ around mean threshold voltage) and aggressive supply voltage scaling (at 0.88 V, while the nominal voltage is 1.2 V for a 90-nm technology), there is a gradual degradation of image quality with considerable power savings (71% at PSNR of 23.4 dB) for the proposed architecture, when compared to existing implementations in a 90-nm process technology. © 2006 IEEE.
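The quality metric quoted in this and the following abstracts, peak SNR (PSNR), is defined from the mean squared error between the original and degraded images relative to the peak pixel value. A generic reference implementation (not tied to the proposed architecture):

```python
import math

def psnr(original, degraded, peak=255):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak ** 2 / mse)

# Toy 4-pixel example: squared errors 4 + 0 + 1 + 16, so MSE = 21 / 4 = 5.25.
print(round(psnr([52, 55, 61, 66], [50, 55, 60, 70]), 2))
```

On this scale, the paper's 23.4 dB operating point is a visibly degraded but recognisable image, traded off against a 71% power saving.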
Abstract:
In this paper, we present a unified approach to the energy-efficient, variation-tolerant design of the Discrete Wavelet Transform (DWT) in the context of image processing applications. It is to be noted that it is not necessary to produce exactly correct numerical outputs in most image processing applications. We exploit this important feature and propose a design methodology for the DWT which exposes energy-quality tradeoffs at each level of the design hierarchy, from the algorithm level down to the architecture and circuit levels, by taking advantage of the limited perceptual ability of the Human Visual System. A unique feature of this design methodology is that it guarantees robustness under process variability and facilitates aggressive voltage over-scaling. Simulation results show significant energy savings (74%-83%) with minor degradation in output image quality, while averting catastrophic failures under process variations, compared to a conventional design. © 2010 IEEE.
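The premise that image transforms tolerate approximate computation can be seen even in a one-level Haar transform, the simplest DWT: zeroing the detail coefficients still reconstructs a recognisable, pairwise-averaged signal. A generic sketch, unrelated to the paper's specific architecture:

```python
def haar_forward(x):
    """One level of the Haar DWT: pairwise averages (approximation) and differences (detail)."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def haar_inverse(avg, det):
    """Exact reconstruction from approximation and detail coefficients."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

row = [9, 7, 3, 5]
avg, det = haar_forward(row)
print(haar_inverse(avg, det))             # exact reconstruction
print(haar_inverse(avg, [0] * len(det)))  # detail dropped: smoothed approximation
```

Errors confined to the detail path cost a little sharpness rather than producing a catastrophically wrong output, which is the intuition behind the energy-quality tradeoff described above.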
Abstract:
The 2-D Discrete Cosine Transform (DCT) is widely used as the core of digital image and video compression. In this paper, we present a novel DCT architecture that allows aggressive voltage scaling by exploiting the fact that not all intermediate computations are equally important in a DCT system for obtaining "good" image quality with a Peak Signal to Noise Ratio (PSNR) > 30 dB. This observation has led us to propose a DCT architecture in which the signal paths that are less contributive to PSNR improvement are designed to be longer than the paths that are more contributive. It should also be noted that robustness with respect to parameter variations and low-power operation typically impose contradictory requirements on architecture design. However, the proposed architecture lends itself to aggressive voltage scaling for low-power dissipation even under process parameter variations. Under a scaled supply voltage and/or variations in process parameters, any possible delay errors would only appear in the long paths that are less contributive towards PSNR improvement, providing a large improvement in power dissipation with small PSNR degradation. Results show that even under large process variation and supply voltage scaling (0.8 V), there is a gradual degradation of image quality with considerable power savings (62.8%) for the proposed architecture when compared to existing implementations in a 70 nm process technology.
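The 2-D DCT underlying these architectures is separable into row and column 1-D transforms. A plain reference sketch of the orthonormal DCT-II (purely illustrative software, not the proposed hardware datapath):

```python
import math

def dct_1d(x):
    """Orthonormal 1-D DCT-II of a sequence."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def dct_2d(block):
    """Separable 2-D DCT: transform each row, then each column."""
    rows = [dct_1d(r) for r in block]
    transposed = [list(c) for c in zip(*rows)]
    cols = [dct_1d(c) for c in transposed]
    return [list(r) for r in zip(*cols)]

# A flat 4x4 block concentrates all its energy in the single DC coefficient;
# the AC paths contribute nothing here, illustrating their unequal importance.
flat = [[10.0] * 4 for _ in range(4)]
coeffs = dct_2d(flat)
print(round(coeffs[0][0], 6))
```

For natural image blocks, most energy similarly lands in a few low-frequency coefficients, which is why delay errors on the remaining paths cost little PSNR.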