921 results for Objective Image Quality
Abstract:
The tear film, cornea and lens dictate the refractive power of the eye, and retinal image quality is principally defined by diffraction, whole-eye wavefront error, scatter, and chromatic aberration. Diffraction and wave aberration are fundamentally dependent on pupil diameter; scatter, however, can be induced by refractive surgery and becomes an increasingly important determinant of retinal image quality in the normally ageing eye. The component of visual quality most affected by the tear film, refractive surgery, and multifocal contact and intraocular lenses is the wave aberration of the eye. This body of work demonstrates the effects of each of these anomalies on the visual quality of the eye. When assessing normal or borderline self-diagnosed dry-eye subjects using aberrometry, combining lubricating eye drops and spray offers no benefit over the individual products, although subjects perceive a difference in comfort for all interventions after one hour. Total higher-order aberrations increase after laser-assisted sub-epithelial keratectomy performed on myopes using a solid-state laser, but this causes no significant decrease in contrast sensitivity or increase in glare disability. Mean sensitivity and reliability indices for perimetry were comparable to pre-surgery results. Multifocal contact lenses and intraocular lenses are designed to maximise vision when the patient is binocular, so any evaluation of the eyes individually is confounded by reduced monocular visual acuity and visual quality. Different designs of aspheric multifocal contact lenses do not provide the same level of visual quality. Multifocal contact lenses adversely affect mean deviation values for perimetry, and this should be considered when screening individuals wearing multifocal contact or intraocular lenses. Photographic image quality obtained through a multifocal contact or intraocular lens appears to be unchanged. Future work should evaluate the effect of these anomalies in combination, with the aim of providing the best visual quality possible and supplying normative data for screening purposes.
Abstract:
JPEG2000 is an emerging image standard. In this paper we analyze the performance of the error resilience tools in JPEG2000 and present an analytical model to estimate the quality of a JPEG2000-encoded image transmitted over wireless channels. The effectiveness of the analytical model is validated by simulation results. Furthermore, the analytical model is used by the base station to design efficient unequal error protection schemes for JPEG2000 transmission. In the design, a utility function is defined to trade off image quality against the cost of transmitting the image over the wireless channel. © 2002 IEEE.
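The abstract does not give the form of its utility function, but the kind of trade-off it describes can be sketched as follows: for each quality layer of the codestream, choose a channel-code rate so that expected quality minus a weighted transmission cost is maximised. All functional forms and numbers below are illustrative assumptions, not the paper's analytical model.

```python
# Minimal sketch of a utility-driven unequal error protection (UEP) choice for a
# layered JPEG2000 stream. Quality model, cost model and parameters are assumed.
from itertools import product

LAYER_PSNR_GAIN = [20.0, 6.0, 3.0, 1.5]   # assumed PSNR contribution per quality layer (dB)
CODE_RATES = [1/2, 2/3, 3/4, 1.0]          # candidate channel-code rates per layer
LOSS_PROB = {1/2: 0.001, 2/3: 0.01, 3/4: 0.05, 1.0: 0.20}  # assumed residual loss per rate
LAMBDA = 2.0                                # weight on transmission cost

def expected_quality(rates):
    """Expected PSNR: a layer contributes only if it and all earlier layers survive."""
    q, p_all_ok = 0.0, 1.0
    for gain, r in zip(LAYER_PSNR_GAIN, rates):
        p_all_ok *= 1.0 - LOSS_PROB[r]
        q += gain * p_all_ok
    return q

def cost(rates):
    """Transmission cost grows as the code rate drops (more parity bits sent)."""
    return sum(1.0 / r for r in rates)

best = max(product(CODE_RATES, repeat=len(LAYER_PSNR_GAIN)),
           key=lambda rates: expected_quality(rates) - LAMBDA * cost(rates))
print("chosen code rates per layer:", best)
```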
Abstract:
Purpose: To assess the validity and repeatability of objective compared to subjective contact lens fit analysis. Methods: Thirty-five subjects (aged 22.0 ± 3.0 years) wore two different soft contact lens designs. Four lens fit variables (centration, horizontal lag, post-blink movement in up-gaze and push-up recovery speed) were assessed subjectively (four observers) and objectively from images and video captured by slit-lamp biomicroscopy. The analysis was repeated a week later. Results: The average of the four experienced observers was compared to the objective measures; centration, movement on blink, lag and push-up recovery speed all varied significantly between observers (p < 0.001). Horizontal lens centration was on average close to central as assessed both objectively and subjectively (p > 0.05). The 95% confidence interval of subjective repeatability was better than that of objective assessment (±0.128 mm versus ±0.168 mm, p = 0.417), but utilised only 78% of the objective range. Vertical centration assessed objectively showed a slight inferior decentration (0.371 ± 0.381 mm) with good inter- and intra-session repeatability (p > 0.05). Movement-on-blink was estimated lower subjectively than measured objectively (0.269 ± 0.179 mm versus 0.352 ± 0.355 mm; p = 0.035), but had better repeatability (±0.124 mm versus ±0.314 mm 95% confidence interval) unless corrected for the smaller range utilised (47%). Horizontal lag was estimated lower subjectively (0.562 ± 0.259 mm) than measured objectively (0.708 ± 0.374 mm, p < 0.001), had poorer repeatability (±0.132 mm versus ±0.089 mm 95% confidence interval) and covered a smaller range (63%). Subjective categorisation of push-up recovery speed showed reasonable differentiation relative to objective measurement (p < 0.001). Conclusions: Objective image analysis allows an accurate, reliable and repeatable assessment of soft contact lens fit characteristics, making it a useful tool for research and for optimisation of lens fit in clinical practice.
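The ±mm repeatability figures quoted above are 95% limits of the kind obtained from repeated sessions in the usual Bland-Altman fashion (roughly 1.96 × the SD of inter-session differences). A minimal sketch with synthetic data (the study's raw values are not reproduced here):

```python
# Sketch of the repeatability statistic: 95% limit ~ 1.96 x SD of the
# differences between two sessions (Bland-Altman). Data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
session1 = rng.normal(0.37, 0.38, size=35)             # e.g. vertical decentration (mm), visit 1
session2 = session1 + rng.normal(0.0, 0.08, size=35)   # visit 2 with measurement noise

diff = session2 - session1
repeatability_95 = 1.96 * diff.std(ddof=1)
print(f"bias = {diff.mean():+.3f} mm, 95% repeatability = \u00b1{repeatability_95:.3f} mm")
```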
Abstract:
When visual sensor networks are composed of cameras that can adjust the zoom factor of their own lens, one must determine the optimal zoom levels of the cameras for a given task. This gives rise to an important trade-off between the overlap of the different cameras' fields of view, which provides redundancy, and image quality. In an object-tracking task, having multiple cameras observe the same area allows for quicker recovery when a camera fails. In contrast, narrow zooms allow a higher pixel count on regions of interest, leading to increased tracking confidence. In this paper we propose an approach for the self-organisation of redundancy in a distributed visual sensor network, based on decentralised multi-objective online learning that uses only local information to approximate the global state. We explore the impact of different zoom levels on these trade-offs when tasking omnidirectional cameras, each with a perfect 360-degree view, with keeping track of a varying number of moving objects. We further show how decentralised reinforcement learning enables zoom configurations to be achieved dynamically at runtime according to an operator's preference for maximising the proportion of objects tracked, the confidence associated with tracking, or redundancy in expectation of camera failure. We show that explicitly taking account of the level of overlap, even based only on local knowledge, improves resilience when cameras fail. Our results illustrate the trade-off between maintaining high confidence and object coverage, and maintaining redundancy in anticipation of future failure. Our approach provides a fully tunable, decentralised method for the self-organisation of redundancy in a changing environment, according to an operator's preferences.
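A minimal single-camera sketch of the kind of decentralised online learning described: each camera keeps its own value table over zoom actions and updates it from a locally computed, operator-weighted reward. The reward terms and weights are illustrative assumptions, not the paper's formulation.

```python
# One camera's decentralised zoom learning: a value table over zoom levels,
# updated online from a local multi-objective reward (bandit-style Q-update).
import random

ZOOMS = [1, 2, 4]                          # candidate zoom factors
ALPHA, EPS = 0.1, 0.1                      # learning rate, exploration rate
W_TRACK, W_CONF, W_RED = 0.4, 0.3, 0.3     # operator preference weights (illustrative)

Q = {z: 0.0 for z in ZOOMS}

def local_reward(zoom):
    """Stand-in for locally observed outcomes: wide zoom -> more overlap
    (redundancy) and coverage; narrow zoom -> more pixels per object (confidence)."""
    coverage   = 1.0 / zoom + random.uniform(-0.05, 0.05)
    confidence = zoom / max(ZOOMS) + random.uniform(-0.05, 0.05)
    redundancy = 1.0 / zoom
    return W_TRACK * coverage + W_CONF * confidence + W_RED * redundancy

for step in range(1000):
    z = random.choice(ZOOMS) if random.random() < EPS else max(Q, key=Q.get)
    Q[z] += ALPHA * (local_reward(z) - Q[z])   # stateless online update

print("learned zoom preference:", max(Q, key=Q.get), Q)
```

Changing the three weights reproduces the tunability claimed above: raising W_RED pushes the learned policy toward wider zooms and more overlap, raising W_CONF toward narrower zooms.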
Abstract:
Our modular approach to data hiding is an innovative concept in the data-hiding research field. It enables the creation of modular digital watermarking methods with extendable features, designed for use in web applications. The methods consist of two types of modules – a basic module and an application-specific module. The basic module mainly provides features connected with the specific image format. As JPEG is a preferred image format on the Internet, we have focused on achieving robust and error-free embedding and retrieval of the embedded data in JPEG images. The application-specific modules are adaptable to user requirements in the concrete web application. The experimental results of the modular data watermarking are very promising: they indicate excellent image quality, a satisfactory size of the embedded data and perfect robustness against JPEG transformations with prespecified compression ratios. ACM Computing Classification System (1998): C.2.0.
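The modules themselves are not specified in this abstract; as a generic illustration of robust JPEG-domain embedding (not the authors' method), one classic approach hides one bit per 8×8 block in the parity of a quantised mid-frequency DCT coefficient, which survives re-saving at the same quantisation step. The step size and coefficient position below are assumed.

```python
# Generic JPEG-domain embedding illustration: one bit per 8x8 block encoded in
# the parity of a quantised mid-frequency DCT coefficient.
import numpy as np
from scipy.fftpack import dct, idct

Q_STEP = 16          # assumed quantisation step for the chosen coefficient
POS = (3, 2)         # mid-frequency coefficient position (illustrative)

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bit(block, bit):
    c = dct2(block.astype(float))
    q = int(round(c[POS] / Q_STEP))
    if q % 2 != bit:                 # force the coefficient's parity to match the bit
        q += 1 if q >= 0 else -1
    c[POS] = q * Q_STEP
    return idct2(c)

def extract_bit(block):
    c = dct2(block.astype(float))
    return int(round(c[POS] / Q_STEP)) % 2

block = np.full((8, 8), 128.0)       # flat test block
assert extract_bit(embed_bit(block, 1)) == 1
```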
Abstract:
Differential evolution is an optimisation technique that has been successfully employed in various applications. In this paper, we apply differential evolution to the problem of extracting the optimal colours of a colour map for quantised images. The choice of entries in the colour map is crucial for the resulting image quality, as it forms a look-up table that is used for all pixels in the image. We show that differential evolution can be effectively employed as a method for deriving the entries in the map. In order to optimise the image quality, our differential evolution approach is combined with a local search method that is guaranteed to find the locally optimal colour map. This hybrid approach is shown to outperform various commonly used colour quantisation algorithms on a set of standard images. Copyright © 2010 Inderscience Enterprises Ltd.
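A minimal sketch of such a hybrid: a standard DE/rand/1/bin loop over candidate colour maps, with a single k-means-style reassignment step as the local search. Population size, parameters and the local-search schedule are simplified relative to the paper's method.

```python
# Differential evolution over colour maps with a k-means step as local search.
import numpy as np

rng = np.random.default_rng(1)
pixels = rng.integers(0, 256, size=(2000, 3)).astype(float)  # stand-in image pixels
K, NP, F, CR, GENS = 16, 20, 0.5, 0.9, 30                    # map size, DE parameters

def mse(cmap):
    d = ((pixels[:, None, :] - cmap[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean()

def local_search(cmap):
    """One k-means step: move each map entry to the mean of its assigned pixels."""
    d = ((pixels[:, None, :] - cmap[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    out = cmap.copy()
    for k in range(K):
        if (labels == k).any():
            out[k] = pixels[labels == k].mean(axis=0)
    return out

pop = rng.uniform(0, 255, size=(NP, K, 3))
for gen in range(GENS):
    for i in range(NP):
        a, b, c = pop[rng.choice(NP, 3, replace=False)]   # DE/rand/1 mutation
        mutant = np.clip(a + F * (b - c), 0, 255)
        cross = rng.random((K, 3)) < CR                   # binomial crossover
        trial = local_search(np.where(cross, mutant, pop[i]))
        if mse(trial) < mse(pop[i]):                      # greedy selection
            pop[i] = trial

best = min(pop, key=mse)
print("final quantisation MSE:", round(mse(best), 2))
```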
Abstract:
Purpose: To determine whether the 'through-focus' aberrations of patients implanted with multifocal or accommodating intraocular lenses (IOLs) can be used to provide rapid and reliable measures of their subjective range of clear vision. Methods: Eyes that had been implanted for over a year with a concentric (n = 8), segmented (n = 10) or accommodating (n = 6) IOL (mean age 62.9 ± 8.9 years; range 46-79 years) underwent simultaneous monocular subjective (electronic logMAR test chart at 4 m with letters randomised between presentations) and objective (Aston open-field aberrometer) defocus-curve testing for levels of defocus from +1.50 to -5.00 DS in -0.50 DS steps, in randomised order. Pupil size and ocular aberration (a combination of the patient's aberrations and those of the defocus-inducing lens) at each level of blur were measured by the aberrometer. Visual acuity was measured subjectively at each level of defocus to determine the traditional defocus curve; objective acuity was predicted using image quality metrics. Results: The range of clear focus differed between the three IOL types (F = 15.506, p = 0.001) as well as between subjective and objective defocus curves (F = 6.685, p = 0.049). There was no statistically significant difference between subjective and objective defocus curves in the segmented or concentric-ring MIOL groups (p > 0.05); however, a difference was found between the two measures in the accommodating IOL group (p < 0.001). Mean delta logMAR (predicted minus measured logMAR) across all target vergences was -0.06 ± 0.19 logMAR. Predicted logMAR defocus curves for the multifocal IOLs did not show a near-vision addition peak, unlike the subjective measurement of visual acuity. However, there was a strong positive correlation between measured and predicted logMAR for all three IOLs (Pearson's correlation: p < 0.001). Conclusions: Current subjective procedures are lengthy and do not enable important additional measures, such as defocus curves under different luminance or contrast levels, to be assessed, which may limit our understanding of MIOL performance in real-world conditions. In general, objective aberrometry measures correlated well with the subjective assessment, indicating the relative robustness of this technique in evaluating post-operative success with segmented and concentric-ring MIOLs.
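The specific image quality metrics used for prediction are not named in this abstract; a minimal sketch of one common objective metric of this family computes the point-spread function from a measured wavefront via Fourier optics and takes the Strehl ratio. Pupil sampling, wavelength and the pure-defocus wavefront below are illustrative assumptions.

```python
# Pupil-plane wavefront -> PSF -> Strehl ratio, for pure Zernike defocus Z(2,0).
import numpy as np

N, wavelength_um = 256, 0.555
x = np.linspace(-1, 1, N)                    # pupil coordinates (unit radius)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= 1.0).astype(float)

def strehl(defocus_um_rms):
    """Strehl ratio for a wavefront containing only Zernike defocus Z(2,0)."""
    wavefront_um = defocus_um_rms * np.sqrt(3) * (2 * R2 - 1)   # Zernike Z(2,0)
    field = pupil * np.exp(2j * np.pi * wavefront_um / wavelength_um)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    psf_ideal = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return psf.max() / psf_ideal.max()

for d in (0.0, 0.05, 0.10, 0.20):            # RMS defocus in micrometres
    print(f"{d:.2f} um RMS defocus -> Strehl {strehl(d):.3f}")
```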
Abstract:
Objective. To critically evaluate the current literature in an effort to establish the current role of radiologic imaging, advances in computed tomography (CT), and standard film radiography in the diagnosis and characterization of urinary tract calculi. Conclusion. CT has a valuable role when utilized prudently during surveillance of patients following endourological therapy. In this paper, we outline the basic principles relating to the effects of exposure to ionizing radiation as a result of CT scanning. We discuss current developments in low-dose CT technology, which have resulted in significant reductions in CT radiation doses (to approximately one-third of what they were a decade ago) while preserving image quality. Finally, we discuss an important recent development now commercially available on the latest generation of CT scanners, namely dual-energy imaging, which is showing promise in urinary tract imaging as a means of characterizing the composition of urinary tract calculi.
Abstract:
Prior work by our research group, which quantified the alarming levels of radiation dose to patients with Crohn's disease from medical imaging and the notable shift towards CT imaging that makes these patients an at-risk group, provided the context for this work. CT delivers some of the highest doses of ionising radiation in diagnostic radiology. Once a medical imaging examination is deemed justified, there is an onus on the imaging team to endeavour to produce diagnostic-quality CT images at the lowest possible radiation dose to that patient. The fundamental limitation of conventional CT raw-data reconstruction was the inherent coupling of administered radiation dose with observed image noise: the lower the radiation dose, the noisier the image. The renaissance, rediscovery and refinement of iterative reconstruction removes this limitation, allowing either an improvement in image quality without increasing radiation dose or maintenance of image quality at a lower radiation dose compared with traditional image reconstruction. This thesis is fundamentally an exercise in optimisation of clinical CT practice, with the objectives of assessing iterative reconstruction as a method for improving image quality in CT, exploring the associated potential for radiation dose reduction, and developing a new split-dose CT protocol with the aim of achieving and validating diagnostic-quality submillisievert CT imaging in patients with Crohn's disease. In this study, we investigated the interplay of user-selected parameters on radiation dose and image quality in phantoms and cadavers, comparing traditional filtered back projection (FBP) with iterative reconstruction algorithms. This resulted in the development of an optimised, refined and appropriate split-dose protocol for CT of the abdomen and pelvis in clinical patients with Crohn's disease, allowing contemporaneous acquisition of both modified-dose and conventional-dose CT studies. This novel protocol was then applied to 50 patients with a suspected acute complication of known Crohn's disease, and the raw data were reconstructed with FBP, adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR). Conventional-dose CT images with FBP reconstruction were used as the reference standard against which the modified-dose CT images were compared in terms of radiation dose, diagnostic findings and image quality indices. As there are multiple possible user-selected strengths of ASiR available, these were compared in terms of image quality to determine the optimal strength for this modified-dose CT protocol. Modified-dose CT images with MBIR were also compared with contemporaneous abdominal radiographs, where performed, in terms of diagnostic yield and radiation dose. Finally, attenuation measurements in organs and tissues with each reconstruction algorithm were compared to assess for preservation of tissue-characterisation capabilities. In the phantom and cadaveric models, both forms of iterative reconstruction examined (ASiR and MBIR) were superior to FBP across a wide variety of imaging protocols, with MBIR superior to ASiR in all areas other than reconstruction speed. We established that ASiR appears to work to a target percentage noise reduction, whilst MBIR works to a target residual level of absolute noise in the image.
Modified-dose CT images reconstructed with both ASiR and MBIR were non-inferior to conventional-dose CT with FBP in terms of diagnostic findings, despite reduced subjective and objective indices of image quality. Mean dose reductions of 72.9-73.5% were achieved with the modified-dose protocol, with a mean effective dose of 1.26 mSv. MBIR was again demonstrated to be superior to ASiR in terms of image quality. The overall optimal ASiR strength for the modified-dose protocol used in this work is ASiR 80%, as this provides the most favourable balance of peak subjective image quality indices with less objective image noise than the corresponding conventional-dose CT images reconstructed with FBP. Despite guidelines to the contrary, abdominal radiographs are still often used in the initial imaging of patients with a suspected complication of Crohn's disease. We confirmed the superiority of modified-dose CT with MBIR over abdominal radiographs at comparable doses in the detection of Crohn's disease and non-Crohn's-disease-related findings. Finally, we demonstrated (in phantoms, cadavers and in vivo) that attenuation values do not change significantly across reconstruction algorithms, meaning tissue-characterisation capabilities are preserved with iterative reconstruction. Both adaptive statistical and model-based iterative reconstruction algorithms represent feasible methods of facilitating the acquisition of diagnostic-quality CT images of the abdomen and pelvis in patients with Crohn's disease at markedly reduced radiation doses. Our modified-dose CT protocol allows dose savings of up to 73.5% compared with conventional-dose CT, meaning submillisievert imaging is possible in many of these patients.
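The distinction drawn above between the two algorithms' noise behaviour can be expressed as two simple models: ASiR removes a roughly fixed fraction of the FBP noise, while MBIR drives noise toward a roughly fixed absolute residual level. The numbers below are illustrative, not fitted to the study's data.

```python
# Sketch of the two noise behaviours: fractional reduction (ASiR) versus
# absolute residual target (MBIR). All values are illustrative (HU).
fbp_noise_hu = [15.0, 25.0, 40.0]   # FBP image noise as dose falls
ASIR_REDUCTION = 0.40               # assumed fractional noise reduction at one ASiR strength
MBIR_TARGET_HU = 10.0               # assumed absolute residual noise level for MBIR

for n in fbp_noise_hu:
    asir = (1.0 - ASIR_REDUCTION) * n   # scales with the FBP noise
    mbir = min(n, MBIR_TARGET_HU)       # roughly constant once above the target
    print(f"FBP {n:5.1f} HU -> ASiR {asir:5.1f} HU -> MBIR ~{mbir:5.1f} HU")
```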
Abstract:
The police use both subjective (i.e. police staff) and automated (e.g. face recognition systems) methods for the completion of visual tasks (e.g. person identification). Image quality for police tasks has been defined as image usefulness: the suitability of the visual material to satisfy a visual task. It is not necessarily affected by artefacts that reduce perceived visual image quality (i.e. decrease fidelity), as long as those artefacts do not affect the information relevant and useful for the task. The capture of useful information is affected by the unconstrained conditions commonly encountered by CCTV systems, such as variations in illumination and high compression levels. The main aim of this thesis is to investigate aspects of image quality and video compression that may affect the completion of police visual tasks/applications with respect to CCTV imagery. This is accomplished by investigating three specific police areas/tasks utilising: 1) the human visual system (HVS) for a face recognition task, 2) automated face recognition systems, and 3) automated human detection systems. These systems (HVS and automated) were assessed with defined scene-content properties and video compression, i.e. H.264/MPEG-4 AVC. The performance of imaging systems and processes (e.g. subjective investigations, performance of compression algorithms) is affected by scene-content properties; no other investigation has been identified that takes scene-content properties into consideration to the same extent. Results have shown that the HVS is more sensitive to compression effects than the automated systems. For automated face recognition systems, 'mixed lightness' scenes were the most affected and 'low lightness' scenes the least affected by compression. In contrast, for the HVS face recognition task, 'low lightness' scenes were the most affected and 'medium lightness' scenes the least affected. For the automated human detection systems, 'close distance' and 'run approach' were among the most commonly affected scenes. These findings have the potential to broaden the methods used for testing imaging systems for security applications.
Abstract:
Over recent decades, work on infrared sensor applications has advanced considerably worldwide. A difficulty remains, however: objects are not always clear, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing and non-destructive testing, among other technologies. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the non-subsampled contourlet transform (NSCT). Image fusion can be regarded as a continuation of single infrared image enhancement, since it combines infrared and visible images into a single image that represents and enhances all the useful information and features of the source images; a single image cannot contain all the relevant or available information, owing to the restrictions of any single imaging sensor. We review the development of infrared image enhancement techniques, then focus on single infrared image enhancement and propose a hybrid-domain enhancement scheme with an improved threshold fuzzy-evaluation method, which achieves higher image quality and improves human visual perception. The infrared and visible image fusion techniques are built on an accurate registration of the source images acquired by different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, which yields very accurately registered images and increases the benefit to the fusion processing. For the fusion of infrared and visible images, a series of advanced and efficient approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the fusion approaches that follow. A joint fusion approach involving the adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, which yields fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) to sparsely sample the coefficients and accurately reconstruct the fused coefficients is proposed, which obtains much better fusion results through pre-enhancement of the infrared image and by reducing redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, which leads to better results more quickly and efficiently.
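The NSCT-based methods above are beyond a short sketch, but the generic transform-domain fusion rule underlying them can be illustrated with a plain wavelet transform: average the approximation bands and keep the larger-magnitude detail coefficients. This uses PyWavelets in place of the NSCT, and assumes the source images are already registered (e.g. by SURF-RANSAC).

```python
# Transform-domain image fusion sketch with a plain wavelet transform:
# approximation bands are averaged, detail coefficients fused by max-abs.
import numpy as np
import pywt

def fuse(ir, vis, wavelet="db2", levels=3):
    c_ir = pywt.wavedec2(ir.astype(float), wavelet, level=levels)
    c_vis = pywt.wavedec2(vis.astype(float), wavelet, level=levels)
    fused = [(c_ir[0] + c_vis[0]) / 2.0]                 # approximation: average
    for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):         # details: max-abs per coefficient
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d_ir, d_vis)))
    return pywt.waverec2(fused, wavelet)

ir = np.random.rand(128, 128)     # stand-ins for registered infrared / visible images
vis = np.random.rand(128, 128)
print(fuse(ir, vis).shape)
```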
Abstract:
The print substrate influences the print result in dry toner electrophotography, which is a widely used digital printing method. The influence of the substrate can be seen more easily in color printing, as that is a more complex process than monochrome printing; however, print quality is also affected by the print substrate in grayscale printing. It is thus in the interests of both substrate producers and printing equipment manufacturers to understand in more detail the substrate properties that influence the quality of printed images. In dry toner electrophotography, the image is printed by transferring charged toner particles to the print substrate in the toner transfer nip, utilizing an electric field in addition to the forces linked to the contact between toner particles and substrate in the nip. The toner transfer and the resulting image quality are thus influenced by the surface texture and the electrical and dielectric properties of the print substrate. In the investigation of the electrical and dielectric properties of the papers and the effects of substrate roughness, controlled sample sets were made on pilot paper machines and coating machines, in addition to commercial papers, to exclude uncontrolled variables from the experiments. The electrical and dielectric properties investigated were electrical resistivity and conductivity, charge acceptance, charge decay, and the dielectric permittivity and losses at different frequencies, including the effect of temperature. The objective was to gain an understanding of how the electrical and dielectric properties are affected by normal variables in papermaking, including basis weight, material density, filler content, ion and moisture contents, and coating. In addition, the dependency of substrate resistivity on the applied electric field was investigated. Local discharging did not inhibit transfer at the paper roughness levels that are normal in electrophotographic color printing. The potential decay of paper revealed that charge decay cannot be accurately described with a single exponential function, since charge decay involves overlapping mechanisms of conduction and depolarization of the paper. The resistivity of the paper depends on the NaCl content and exponentially on the moisture content, although it is also strongly dependent on the applied electric field; this dependency is influenced by the thickness, density, and filler content of the paper. Furthermore, the Poole-Frenkel model can be applied to the resistivity of uncoated paper. The real part of the dielectric constant ε' increases with NaCl content and relative humidity, but when these materials cannot polarize freely, the increase cannot be explained by summing the effects of their dielectric constants. Dependencies between the dielectric constant and dielectric loss factor and NaCl content, temperature, and frequency show that, in the presence of a sufficient amount of moisture and NaCl, new structures with a relaxation time of the order of 10⁻³ s are formed in paper. The ε' of coated papers is influenced by the addition of pigments and other coating additives with polarizable groups, and by the increase in density. The charging potential decreases, and the electrical conductivity, potential decay rate, and dielectric constant of paper increase, with increasing temperature. The dependencies are exponential, and the temperature dependencies and their activation energies are altered by the ion content.
The results have been utilized in manufacturing substrates for electrophotographic color printing.
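For reference, the Poole-Frenkel model invoked above describes field-assisted thermal emission of trapped carriers; in its standard textbook form the conductivity grows exponentially with the square root of the applied field, consistent with the field dependence of resistivity reported for uncoated paper. The notation below is the generic one; no fitted values from the thesis are implied.

```latex
% Standard Poole-Frenkel field dependence of conductivity:
\sigma(E) \;=\; \sigma_0 \exp\!\left(\frac{\beta_{\mathrm{PF}}\sqrt{E}}{k_{B}T}\right),
\qquad
\beta_{\mathrm{PF}} \;=\; \sqrt{\frac{e^{3}}{\pi\,\varepsilon_{0}\varepsilon_{r}}}
```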
Abstract:
Background - Image blurring in full-field digital mammography (FFDM) is reported to be a problem within many UK breast screening units, resulting in a significant proportion of technical repeats/recalls. Our study investigates monitors of differing pixel resolution and whether there is a difference in blurring detection between a 2.3 MP technical review monitor and a 5 MP standard reporting monitor. Methods - Simulation software was created to induce different magnitudes of blur on 20 artifact-free FFDM screening images. 120 blurred and non-blurred images were randomised and displayed on the 2.3 MP and 5 MP monitors; they were reviewed by 28 trained observers. Monitors were calibrated to the DICOM Grayscale Standard Display Function. A t-test was used to determine whether significant differences exist in blurring detection between the monitors. Results - The blurring detection rate on the 2.3 MP monitor for 0.2, 0.4, 0.6, 0.8 and 1 mm blur was 46, 59, 66, 77 and 78% respectively, and on the 5 MP monitor 44, 70, 83, 96 and 98%. All the non-motion images were identified correctly. A statistically significant difference (p < 0.01) in the blurring detection rate between the two monitors was demonstrated. Conclusions - Given the results of this study, and knowing that monitors as low as 1 MP are used in clinical practice, we speculate that technical recall/repeat rates due to blurring could be reduced if higher-resolution monitors were used for technical review at the time of imaging. Further work is needed to determine the minimum monitor specification for visual blurring detection.
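A minimal sketch of the two stages of this design: simulate blur of increasing magnitude on an artifact-free image, then compare per-observer detection rates between the two monitors with a t-test. The Gaussian blur model and the synthetic observer scores below are illustrative; the study's simulation software and raw data are not reproduced here.

```python
# Stage 1: simulated blur stimuli; stage 2: paired t-test across observers.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import ttest_rel

# Stage 1: blur an artifact-free image at increasing magnitudes (stand-in for
# the study's motion-blur simulation software).
image = np.random.rand(512, 512)
stimuli = [gaussian_filter(image, sigma=s) for s in (0.5, 1.0, 1.5, 2.0, 2.5)]
print(f"{len(stimuli)} blur levels generated")

# Stage 2: synthetic per-observer detection rates on the two monitors,
# compared with a paired t-test (28 observers viewed both monitors).
rng = np.random.default_rng(2)
det_2p3mp = np.clip(rng.normal(0.65, 0.08, size=28), 0, 1)
det_5mp   = np.clip(det_2p3mp + rng.normal(0.12, 0.05, size=28), 0, 1)

t, p = ttest_rel(det_5mp, det_2p3mp)
print(f"t = {t:.2f}, p = {p:.4f}")
```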
Abstract:
Mammography equipment must be evaluated to ensure that images will be of acceptable diagnostic quality at the lowest radiation dose. Quality Assurance (QA) aims to provide systematic and constant improvement through a feedback mechanism addressing technical, clinical and training aspects. Quality Control (QC), in relation to mammography equipment, comprises a series of tests to determine equipment performance characteristics. The introduction of digital technologies prompted changes in QC tests and protocols, and some tests are specific to each manufacturer. Within each country, specific QC tests should be compliant with regulatory requirements and guidance. Ideally, one mammography practitioner should take overarching responsibility for QC within a service, with all practitioners having responsibility for the actual QC testing. All QC results must be documented to facilitate troubleshooting, internal audit and external assessment. Generally speaking, the practitioner's role includes performing, interpreting and recording the QC tests, as well as reporting any results outside action limits to their service lead. They must undertake additional continuing professional development to maintain their QC competencies. They are usually supported by technicians and medical physicists; in some countries the latter are mandatory. Technicians and/or medical physicists often perform many of the tests indicated within this chapter. It is important to recognise that this chapter is an attempt to encompass the main tests performed within European countries; the specific tests related to the service in which you work must be familiarised with and adhered to.
Abstract:
PURPOSE: The goal of the present study was to use a three-dimensional (3D) gradient echo volume in combination with a fat-selective excitation as a 3D motion navigator (3D FatNav) for retrospective correction of microscopic head motion during high-resolution 3D structural scans of extended duration. The fat excitation leads to a 3D image that is itself sparse, allowing high parallel imaging acceleration factors - with the additional advantage of a minimal disturbance of the water signal used for the host sequence. METHODS: A 3D FatNav was inserted into two structural protocols: an inversion-prepared gradient echo at 0.33 × 0.33 × 1.00 mm resolution and a turbo spin echo at 600 μm isotropic resolution. RESULTS: Motion estimation was possible with high precision, allowing retrospective motion correction to yield clear improvements in image quality, especially in the conspicuity of very small blood vessels. CONCLUSION: The highly accelerated 3D FatNav allowed motion correction with noticeable improvements in image quality, even for head motion which was small compared with the voxel dimensions of the host sequence. Magn Reson Med 75:1030-1039, 2016. © 2015 Wiley Periodicals, Inc.