20 results for "Image processing technique"


Relevance:

90.00%

Publisher:

Abstract:

Purpose: To compare graticule and image capture assessment of the lower tear film meniscus height (TMH). Methods: Lower tear film meniscus height measures were taken in the right eyes of 55 healthy subjects at two study visits separated by 6 months. Two images of the TMH were captured in each subject with a digital camera attached to a slit-lamp biomicroscope and stored on a computer for future analysis. Using the better of the two images, the TMH was quantified by manually drawing a line across the tear meniscus profile, following which the TMH was measured in pixels and converted into millimetres, where one pixel corresponded to 0.0018 mm. Additionally, graticule measures were carried out by direct observation using a calibrated graticule inserted into the same slit-lamp eyepiece. The graticule was calibrated so that actual readings, in 0.03 mm increments, could be made with a 40× ocular. Results: Smaller values of TMH were found in this study compared to previous studies. TMH, as measured with the image capture technique (0.13 ± 0.04 mm), was significantly greater (by approximately 0.01 ± 0.05 mm, p = 0.03) than that measured with the graticule technique (0.12 ± 0.05 mm). No bias was found across the range sampled. Repeatability of the TMH measurements taken at two study visits showed that graticule measures were significantly different (0.02 ± 0.05 mm, p = 0.01) and highly correlated (r = 0.52, p < 0.0001), whereas image capture measures were similar (0.01 ± 0.03 mm, p = 0.16), and also highly correlated (r = 0.56, p < 0.0001). Conclusions: Although graticule and image analysis techniques showed similar mean values for TMH, the image capture technique was more repeatable than the graticule technique, and this can be attributed to the higher measurement resolution of the image capture (i.e. 0.0018 mm) compared to the graticule technique (i.e. 0.03 mm). © 2006 British Contact Lens Association.
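The pixel-to-millimetre conversion described above is a simple scaling by the stated calibration factor of 0.0018 mm per pixel. A minimal sketch, assuming that calibration; the function name and the 72-pixel example value are illustrative, not taken from the paper:

```python
# Calibration factor stated in the abstract: one pixel = 0.0018 mm.
MM_PER_PIXEL = 0.0018

def tmh_mm(meniscus_height_px: float) -> float:
    """Convert a tear meniscus height measured in pixels to millimetres."""
    return meniscus_height_px * MM_PER_PIXEL

# Illustration: a 72-pixel meniscus profile corresponds to about 0.13 mm,
# in line with the mean TMH reported for the image capture technique.
print(round(tmh_mm(72), 4))
```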

Relevance:

90.00%

Publisher:

Abstract:

A sizeable amount of the testing in eye care requires either the identification of targets such as letters to assess functional vision, or the subjective evaluation of imagery by an examiner. Computers can render a variety of different targets on their monitors and can be used to store and analyse ophthalmic images. However, existing computing hardware tends to be large, screen resolutions are often too low, and objective assessments of ophthalmic images are unreliable. Recent advances in mobile computing hardware and computer-vision systems can be used to enhance clinical testing in optometry. High resolution touch screens embedded in mobile devices can render targets at a wide variety of distances and can be used to record and respond to patient responses, automating testing methods. This has opened up new opportunities in computerised near vision testing. Equally, new image processing techniques can be used to increase the validity and reliability of objective computer vision systems. Three novel apps for assessing reading speed, contrast sensitivity and amplitude of accommodation were created by the author to demonstrate the potential of mobile computing to enhance clinical measurement. The reading speed app could present sentences effectively, control illumination and automate the testing procedure for reading speed assessment. Meanwhile the contrast sensitivity app made use of a bit stealing technique and a swept frequency target to rapidly assess a patient's full contrast sensitivity function at both near and far distances. Finally, customised electronic hardware was created and interfaced to an app on a smartphone device to allow free space amplitude of accommodation measurement. A new geometrical model of the tear film and a ray tracing simulation of a Placido disc topographer were produced to provide insights on the effect of tear film breakdown on ophthalmic images.
Furthermore, a new computer vision system, which used a novel eyelash segmentation technique, was created to demonstrate the potential of computer vision systems for the clinical assessment of tear stability. Studies undertaken by the author to assess the validity and repeatability of the novel apps found that their repeatability was comparable to, or better than, existing clinical methods for reading speed and contrast sensitivity assessment. Furthermore, the apps offered reduced examination times in comparison to their paper based equivalents. The reading speed and amplitude of accommodation apps correlated highly with existing methods of assessment, supporting their validity. There remain questions over the validity of using a swept frequency sine-wave target to assess patients' contrast sensitivity functions, as no clinical test provides the same range of spatial frequencies and contrasts, nor equivalent assessment at distance and near. A validation study of the new computer vision system found that the author's tear metric correlated better with existing subjective measures of tear film stability than those of a competing computer-vision system. However, repeatability was poor in comparison to the subjective measures due to eyelash interference. The new mobile apps, computer vision system, and studies outlined in this thesis provide further insight into the potential of applying mobile and image processing technology to enhance clinical testing by eye care professionals.
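The swept frequency target mentioned above can be illustrated with a grating whose spatial frequency sweeps along one axis while contrast falls along the other, so that a single image spans a range of the contrast sensitivity function. This is a generic NumPy sketch, not the thesis app's implementation; the frequency and contrast ranges and the log spacing are all assumptions:

```python
import numpy as np

def sweep_grating(width=512, height=256, f_min=0.5, f_max=32.0,
                  c_min=0.01, c_max=1.0):
    """Sine-wave grating whose spatial frequency sweeps log-linearly
    across x and whose contrast falls log-linearly down y."""
    x = np.linspace(0.0, 1.0, width)
    y = np.linspace(0.0, 1.0, height)
    # Instantaneous frequency: log sweep from f_min to f_max cycles/image.
    freq = f_min * (f_max / f_min) ** x
    # Phase is the integral of frequency; a cumulative sum approximates it.
    phase = 2 * np.pi * np.cumsum(freq) / width
    # Contrast decreases log-linearly from c_max (top) to c_min (bottom).
    contrast = c_max * (c_min / c_max) ** y
    # Modulate about mid-grey; values stay in [0, 1].
    return 0.5 + 0.5 * contrast[:, None] * np.sin(phase)[None, :]
```

Displayed on a linearised screen, the height at which the grating fades from view traces an approximate contrast sensitivity curve across spatial frequency.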

Relevance:

90.00%

Publisher:

Abstract:

Aim: To examine the use of image analysis to quantify changes in ocular physiology. Method: A purpose designed computer program was written to objectively quantify bulbar hyperaemia, tarsal redness, corneal staining and tarsal staining. Thresholding, colour extraction and edge detection paradigms were investigated. The repeatability (stability) of each technique to changes in image luminance was assessed. A clinical pictorial grading scale was analysed to examine the repeatability and validity of the chosen image analysis technique. Results: Edge detection using a 3 × 3 kernel was found to be the most stable to changes in image luminance (2.6% over a +60 to -90% luminance range) and correlated well with the CCLRU scale images of bulbar hyperaemia (r = 0.96), corneal staining (r = 0.85) and the staining of palpebral roughness (r = 0.96). Extraction of the red colour plane demonstrated the best correlation-sensitivity combination for palpebral hyperaemia (r = 0.96). Repeatability variability was <0.5%. Conclusions: Digital imaging, in conjunction with computerised image analysis, allows objective, clinically valid and repeatable quantification of ocular features. It offers the possibility of improved diagnosis and monitoring of changes in ocular physiology in clinical practice. © 2003 British Contact Lens Association. Published by Elsevier Science Ltd. All rights reserved.
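A 3 × 3 edge-detection kernel applied across the image and summarised as a mean gradient magnitude yields the kind of single objective score described above. The sketch below uses a Sobel pair as the 3 × 3 kernel; the abstract does not specify which kernel was used, so that choice, and the function names, are assumptions:

```python
import numpy as np

# A common 3x3 edge kernel (Sobel, horizontal gradient); its transpose
# gives the vertical gradient. Assumed here; the paper's kernel may differ.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def filter3x3(img, k):
    """Valid-mode 3x3 correlation via shifted slices (no external deps).
    Kernel flipping is omitted: sign is irrelevant for edge magnitude."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:h - 2 + i, j:w - 2 + j]
    return out

def vessel_edge_score(gray):
    """Mean gradient magnitude over the region of interest; more visible
    vessel edges give a higher score, a proxy for bulbar hyperaemia."""
    gx = filter3x3(gray, SOBEL_X)
    gy = filter3x3(gray, SOBEL_X.T)
    return float(np.hypot(gx, gy).mean())
```

Because the kernel's rows sum to zero, a uniform change in image luminance leaves the score unchanged, which is consistent with the luminance stability reported for the edge-detection paradigm.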

Relevance:

90.00%

Publisher:

Abstract:

The nonlinear inverse synthesis (NIS) method, in which information is encoded directly onto the continuous part of the nonlinear signal spectrum, has been proposed recently as a promising digital signal processing technique for combating fiber nonlinearity impairments. However, because the NIS method is based on the integrability property of the lossless nonlinear Schrödinger equation, the original approach can only be applied directly to optical links with ideal distributed Raman amplification. In this paper, we propose and assess a modified scheme of the NIS method, which can be used effectively in standard optical links with lumped amplifiers, such as erbium-doped fiber amplifiers (EDFAs). The proposed scheme takes into account the average effect of the fiber loss to obtain an integrable model (lossless path-averaged model) to which the NIS technique is applicable. We found that the error between the lossless path-averaged and lossy models increases linearly with transmission distance and input power (measured in dB). We numerically demonstrate the feasibility of the proposed NIS scheme in a burst-mode orthogonal frequency division multiplexing (OFDM) transmission scheme with advanced modulation formats (e.g., QPSK, 16QAM, and 64QAM), showing a performance improvement of up to 3.5 dB; these results are comparable to those achievable with multi-step per span digital backpropagation.
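A lossless path-averaged model of the kind mentioned above is commonly obtained by replacing the fibre's nonlinearity coefficient with its average over one amplifier span, gamma_eff = gamma * (1 - exp(-alpha*L)) / (alpha*L). The sketch below computes that factor under this standard assumption; the paper's exact normalisation may differ, and the example parameter values are illustrative:

```python
import math

def path_averaged_gamma(gamma, alpha_db_per_km, span_km):
    """Effective nonlinearity coefficient of a lossless path-averaged
    model: gamma_eff = gamma * (1 - exp(-alpha*L)) / (alpha*L),
    where alpha is the power attenuation in 1/km and L the span length."""
    alpha = alpha_db_per_km * math.log(10) / 10.0  # dB/km -> 1/km (power)
    aL = alpha * span_km
    return gamma * (1.0 - math.exp(-aL)) / aL

# Illustrative EDFA-link parameters (assumed, not from the paper):
# gamma = 1.3 /(W km), 0.2 dB/km loss, 80 km spans.
g_eff = path_averaged_gamma(1.3, 0.2, 80.0)
```

The NIS transform is then applied to the integrable equation with gamma_eff in place of gamma, which is what makes the technique usable on EDFA links despite the per-span loss.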

Relevance:

90.00%

Publisher:

Abstract:

Efficient and effective approaches for dealing with the vast amount of visual information available nowadays are highly sought after. This is particularly the case for image collections, both personal and commercial. Due to the magnitude of these ever expanding image repositories, annotation of all images is infeasible, and search in such an image collection therefore becomes inherently difficult. Although content-based image retrieval techniques have shown much potential, such approaches also suffer from various problems, making them difficult to adopt in practice. In this paper, we follow a different approach, namely that of browsing image databases for image retrieval. In our Honeycomb Image Browser, large image databases are visualised on a hexagonal lattice with image thumbnails occupying hexagons. Arranged in a space-filling manner, visually similar images are located close together, enabling large image datasets to be navigated in a hierarchical manner. Various browsing tools are incorporated to allow for interactive exploration of the database. Experimental results confirm that our approach affords efficient image retrieval. © 2010 IEEE.
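Hexagonal-lattice placement of thumbnails can be sketched as an offset grid in which odd rows are shifted by half a cell width, so the cells tile the plane without gaps. This is a generic honeycomb layout, not the Honeycomb Image Browser's actual code; the function and its parameters are illustrative:

```python
import math

def hex_centre(col, row, radius=1.0):
    """Centre of a pointy-top hexagon in an offset grid: odd rows are
    shifted half a cell width, giving the space-filling honeycomb used
    to place image thumbnails side by side."""
    w = math.sqrt(3) * radius            # horizontal spacing between centres
    x = col * w + (row % 2) * (w / 2)    # odd rows offset by half a width
    y = row * 1.5 * radius               # vertical spacing between rows
    return (x, y)
```

Placing each thumbnail at `hex_centre(col, row)` gives every image up to six equidistant neighbours, which is what lets visually similar images cluster tightly around one another.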