69 results for Image texture analysis
Abstract:
In this paper we propose a novel automated glaucoma detection framework for mass screening that operates on inexpensive retinal cameras. The proposed methodology is based on the assumption that discriminative features for glaucoma diagnosis can be extracted from the optic nerve head structures, such as the cup-to-disc ratio or the neuro-retinal rim variation. After automatically segmenting the cup and optic disc, these features are fed into a machine learning classifier. Experiments were performed on two different datasets, and the obtained results show that the proposed technique provides better performance than appearance-based approaches. A key advantage of our approach is that it requires only a few training samples to provide high accuracy across several different glaucoma stages.
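As context for the pipeline summarized above, the sketch below shows how a cup-to-disc ratio computed from binary cup and disc segmentations might be combined with a second feature and a classifier. It is a minimal illustration only: the feature pair, training data, and SVM choice are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def cup_to_disc_ratio(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary segmentation masks (illustrative)."""
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))
    cup_height = cup_rows.max() - cup_rows.min() + 1 if cup_rows.size else 0
    disc_height = disc_rows.max() - disc_rows.min() + 1
    return cup_height / disc_height

# Hypothetical training data: one feature vector per image, e.g.
# (cup-to-disc ratio, rim-area ratio), with glaucoma (1) / healthy (0) labels.
X_train = np.array([[0.35, 0.80], [0.72, 0.41], [0.40, 0.75], [0.68, 0.38]])
y_train = np.array([0, 1, 0, 1])

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
# For a new image: segment cup and disc, compute the features, then classify:
# prob_glaucoma = clf.predict_proba([[cdr, rim_ratio]])[0, 1]
```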
Abstract:
Data registration refers to a series of techniques for matching or bringing similar objects or datasets into alignment. These techniques enjoy widespread use in a wide variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis, and structure from motion. Registration methods are as numerous as their manifold uses, ranging from pixel-level and block- or feature-based methods to Fourier-domain methods.
This book is focused on providing algorithms and image and video techniques for registration and quality performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.
Key features:
- Provides a state-of-the-art review of image and video registration techniques, allowing readers to develop an understanding of how well the techniques perform by using specific quality assessment criteria
- Addresses a range of applications from familiar image and video processing domains to satellite and medical imaging among others, enabling readers to discover novel methodologies with utility in their own research
- Discusses quality evaluation metrics for each application domain with an interdisciplinary approach from different research perspectives
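Among the Fourier-domain registration methods the abstract alludes to, phase correlation is a classic example. The sketch below (not taken from the book) estimates the integer translation between two images from the normalized cross-power spectrum.

```python
import numpy as np

def phase_correlation(ref, moving):
    """Estimate the integer (row, col) shift d such that moving ~ np.roll(ref, d),
    using the normalized cross-power spectrum (classic phase correlation)."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moving)
    cross_power = F_mov * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12      # keep phase information only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks beyond half the image size back to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Usage: shift an image by a known amount and recover the offset.
img = np.random.rand(128, 128)
moved = np.roll(np.roll(img, 5, axis=0), -8, axis=1)
print(phase_correlation(img, moved))                 # expected: (5, -8)
```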
Abstract:
AIMS: To assess quantitatively variations in the extent of capillary basement membrane (BM) thickening between different retinal layers and within arterial and venous environments during diabetes.
METHODS: One year after induction of experimental (streptozotocin) diabetes in rats, six diabetic animals together with six age-matched control animals were sacrificed and the retinas fixed for transmission electron microscopy (TEM). Blocks of retina straddling the major arteries and veins in the central retina were dissected out, embedded in resin, and sectioned. Capillaries in close proximity to arteries or veins were designated as residing in either an arterial (AE) or a venous (VE) environment, respectively, and the retinal layer in which each capillary was located was also noted. The thickness of the BM was then measured using an image-analyser-based, two-dimensional morphometric analysis system.
RESULTS: In both diabetics and controls the AE capillaries had consistently thicker BMs than the VE capillaries. The BMs of both AE and VE capillaries from diabetics were thicker than those of capillaries in the corresponding retinal layer from the normal rats (p ≤ 0.005). Also, in normal AE and VE capillaries and diabetic AE capillaries the BM in the nerve fibre layer (NFL) was thicker than that in either the inner (IPL) or outer (OPL) plexiform layers (p ≤ 0.001). However, in diabetic VE capillaries the BMs of capillaries in the NFL were thicker than those of capillaries in the IPL (p ≤ 0.05) which, in turn, had thicker BMs than capillaries in the OPL (p ≤ 0.005).
CONCLUSIONS: The variation in the extent of capillary BM thickening between different retinal layers within AE and VE environments may be related to differences in levels of oxygen tension and oxidative stress in the retina around arteries compared with that around veins.
Abstract:
The discovery and clinical application of molecular biomarkers in solid tumors increasingly rely on nucleic acid extraction from FFPE tissue sections and subsequent molecular profiling. This in turn requires the pathological review of haematoxylin and eosin (H&E) stained slides to ensure sample quality, to assess tumor DNA sufficiency by visually estimating the percentage of tumor nuclei, and to annotate the tumor for manual macrodissection. In this study on NSCLC, we demonstrate considerable variation in tumor nuclei percentage between pathologists, potentially undermining the precision of NSCLC molecular evaluation and emphasising the need for quantitative tumor evaluation. We subsequently describe the development and validation of a system called TissueMark for automated tumor annotation and percentage tumor nuclei measurement in NSCLC using computerized image analysis. Evaluation of 245 NSCLC slides showed precise automated tumor annotation using TissueMark, strong concordance with manually drawn boundaries, and identical EGFR mutational status following manual macrodissection from the image-analysis-generated tumor boundaries. Automated analysis of cell counts for percentage tumor measurements by TissueMark showed reduced variability and significant correlation (p < 0.001) with benchmark tumor cell counts. This study demonstrates a robust image analysis technology that can facilitate the automated quantitative analysis of tissue samples for molecular profiling in discovery and diagnostics.
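As a rough illustration of the kind of quantitative comparison described above, the sketch below computes a percentage of tumor nuclei from hypothetical automated counts and correlates it with pathologist benchmark values; the counts and the use of a Pearson correlation are assumptions, not TissueMark's internals.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-slide counts: automated tumor and total nuclei counts within
# the annotated region, and the pathologist's benchmark % tumor nuclei estimate.
auto_tumor = np.array([4200, 1800, 3100, 2600])
auto_total = np.array([7000, 5200, 4800, 6100])
bench_pct  = np.array([58.0, 36.0, 66.0, 41.0])

auto_pct = 100.0 * auto_tumor / auto_total       # automated % tumor nuclei
r, p_value = pearsonr(auto_pct, bench_pct)       # concordance with benchmark
print(f"automated %: {np.round(auto_pct, 1)}, r = {r:.2f}, p = {p_value:.3f}")
```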
Abstract:
Taking in recent advances in neuroscience and digital technology, Gander and Garland assess the state of the inter-arts in America and the Western world, exploring and questioning the primacy of affect in an increasingly hypertextual everyday environment. In this analysis they signal a move beyond W. J. T. Mitchell’s coinage of the ‘imagetext’ to an approach that centres the reader-viewer in a recognition, after John Dewey, of ‘art as experience’. New thinking in cognitive and computer sciences about the relationship between the body and the mind challenges any established definitions of ‘embodiment’, ‘materiality’, ‘virtuality’ and even ‘intelligence’, they argue, whilst ‘Extended Mind Theory’, they note, marries our cognitive processes with the material forms with which we engage, confirming and complicating Marshall McLuhan’s insight, decades ago, that ‘all media are “extensions of man”’. In this chapter, Gander and Garland open paths and suggest directions into understandings and critical interpretations of new and emerging imagetext worlds and experiences.
Abstract:
PatchCity is a new approach to the procedural generation of city models. The algorithm uses texture synthesis to create a city layout in the visual style of one or more input examples. Data is provided in vector graphic form from either real or synthetic city definitions. The paper describes the PatchCity algorithm, illustrates its use, and identifies its strengths and limitations. The technique provides a greater range of features and styles of city layout than existing generative methods, thereby achieving results that are more realistic. An open source implementation of the algorithm is available.
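For readers unfamiliar with example-based synthesis, the sketch below tiles an output canvas with patches sampled from an exemplar on a raster grid. It only illustrates the general patch-sampling idea; PatchCity itself operates on vector city data and includes matching steps that this toy version omits.

```python
import numpy as np

def synthesize_by_patches(exemplar: np.ndarray, out_shape, patch=16, seed=0):
    """Tile an output canvas with patches sampled at random from an exemplar.
    A deliberately minimal, raster-grid illustration of example-based
    synthesis; overlap matching and blending are intentionally omitted."""
    rng = np.random.default_rng(seed)
    out = np.zeros(out_shape, dtype=exemplar.dtype)
    max_r = exemplar.shape[0] - patch
    max_c = exemplar.shape[1] - patch
    for r in range(0, out_shape[0], patch):
        for c in range(0, out_shape[1], patch):
            sr, sc = rng.integers(0, max_r), rng.integers(0, max_c)
            h = min(patch, out_shape[0] - r)   # clip at the canvas border
            w = min(patch, out_shape[1] - c)
            out[r:r + h, c:c + w] = exemplar[sr:sr + h, sc:sc + w]
    return out

# Usage: synthesize a 256x256 raster from a 128x128 exemplar.
layout = synthesize_by_patches(np.random.rand(128, 128), (256, 256))
```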
Abstract:
We analyze the performance of amplify-and-forward dual-hop relaying systems in the presence of in-phase and quadrature-phase imbalance (IQI) at the relay node. In particular, an exact analytical expression for and tight lower bounds on the outage probability are derived over independent, non-identically distributed Nakagami-m fading channels. Moreover, tractable upper and lower bounds on the ergodic capacity are presented at arbitrary signal-to-noise ratios (SNRs). Some special cases of practical interest (e.g., Rayleigh and Nakagami-0.5 fading) are also studied. An asymptotic analysis is performed in the high SNR regime, where we observe that IQI results in a ceiling effect on the signal-to-interference-plus-noise ratio (SINR), which depends only on the level of I/Q impairments, i.e., the joint image rejection ratio. Finally, the optimal I/Q amplitude and phase mismatch parameters are provided for maximizing the SINR ceiling, thus improving the system performance. An interesting observation is that, under a fixed total phase mismatch constraint, it is optimal to have the same level of transmitter (TX) and receiver (RX) phase mismatch at the relay node, while the optimal values for the TX and RX amplitude mismatch should be inversely proportional to each other.
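For background on the ceiling effect, a widely used baseband model of IQ imbalance (sign conventions for the amplitude mismatch g and phase mismatch phi vary between papers, and this is not necessarily the exact model used in this work) writes the impaired signal and the image rejection ratio (IRR) as:

```latex
% Common IQ-imbalance baseband model: an ideal signal x is replaced by a
% weighted sum of x and its image x*, parameterized by an amplitude mismatch g
% and a phase mismatch \phi (sign conventions vary between papers).
x_{\mathrm{IQI}} = \mu\, x + \nu\, x^{*},
\qquad
\mu = \tfrac{1}{2}\bigl(1 + g\, e^{j\phi}\bigr),
\quad
\nu = \tfrac{1}{2}\bigl(1 - g\, e^{j\phi}\bigr),
\qquad
\mathrm{IRR} = \frac{|\mu|^{2}}{|\nu|^{2}} .
```

In this model the desired term scales with the squared magnitude of mu while the image interference scales with that of nu, so at high SNR the SINR saturates at a level governed by the IRR rather than growing with transmit power, which is the ceiling behaviour the abstract describes.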
Abstract:
PURPOSE: To provide a tool to enable gamma analysis software algorithms to be included in a quality assurance (QA) program.
METHODS: Four image sets were created, comprising two geometric images to independently test the distance-to-agreement (DTA) and dose-difference (DD) elements of the gamma algorithm, a clinical step-and-shoot IMRT field, and a clinical VMAT arc. The images were analysed using global and local gamma analysis with two in-house and eight commercially available software packages, encompassing 15 software versions. The effect of image resolution on gamma pass rates was also investigated.
RESULTS: All but one software package accurately calculated the gamma passing rate for the geometric images. Variations in global gamma passing rates of 1% at 3%/3 mm and over 2% at 1%/1 mm were measured between software packages and software versions when analysing appropriately sampled images.
CONCLUSION: This study provides a suite of test images and the gamma pass rates achieved for a selection of commercially available software. This image suite will enable validation of gamma analysis software within a QA program and provide a frame of reference by which to compare results reported in the literature from various manufacturers and software versions.
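For reference, the gamma metric being validated combines a dose-difference and a distance-to-agreement criterion. The sketch below is a deliberately brute-force, one-dimensional global gamma calculation in the style of Low et al.; clinical software adds interpolation, 2-D/3-D search and local normalization, so this is illustrative rather than a substitute for the test suite.

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, coords, dd_pct=3.0, dta_mm=3.0,
                    cutoff_pct=10.0):
    """Brute-force 1-D global gamma analysis: for each reference point above a
    low-dose cutoff, search all evaluated points for the minimum combined
    dose-difference / distance-to-agreement metric; report % with gamma <= 1."""
    dd_abs = dd_pct / 100.0 * ref_dose.max()          # global dose criterion
    mask = ref_dose >= cutoff_pct / 100.0 * ref_dose.max()
    gammas = []
    for i in np.flatnonzero(mask):
        dist2 = ((coords - coords[i]) / dta_mm) ** 2
        dose2 = ((eval_dose - ref_dose[i]) / dd_abs) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

# Usage on synthetic profiles sampled every 1 mm:
x = np.arange(0.0, 100.0, 1.0)
ref = np.exp(-((x - 50) / 20) ** 2) * 100.0
ev = np.exp(-((x - 51) / 20) ** 2) * 101.0            # 1 mm shift, 1% scaling
print(gamma_pass_rate(ref, ev, x))                    # expect a high pass rate
```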
Abstract:
Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for the creation of vessel morphometric data for epidemiological studies. It comprises an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement and at low cost.
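As a rough illustration of the classification step described above, the sketch below derives three features from a binary vessel map and trains a support vector machine to flag inadequate images. The specific features and training data are placeholders; the paper's actual 3-dimensional feature set is not detailed in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from scipy import ndimage

def vessel_map_features(vessel_mask: np.ndarray) -> np.ndarray:
    """Three illustrative features from a binary vessel segmentation:
    vessel area fraction, number of connected vessel segments, and the
    fraction of image quadrants containing any vessel pixels. These are
    placeholders, not the QUARTZ feature set."""
    area_fraction = vessel_mask.mean()
    _, n_segments = ndimage.label(vessel_mask)
    h, w = vessel_mask.shape
    quads = [vessel_mask[:h // 2, :w // 2], vessel_mask[:h // 2, w // 2:],
             vessel_mask[h // 2:, :w // 2], vessel_mask[h // 2:, w // 2:]]
    quad_coverage = np.mean([q.any() for q in quads])
    return np.array([area_fraction, n_segments, quad_coverage])

# Hypothetical training data: feature vectors for adequate (1) / inadequate (0)
# images, then an SVM as in the abstract's classification step.
X = np.array([[0.08, 120, 1.00], [0.01, 15, 0.50],
              [0.07, 110, 1.00], [0.02, 20, 0.25]])
y = np.array([1, 0, 1, 0])
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
# For a new image: quality_label = clf.predict([vessel_map_features(mask)])[0]
```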