23 results for Pixels
Abstract:
Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at ×40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (non-neoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. The classification rates established here demonstrate the validity of the methodology on small scenes; a logical extension was to apply it to whole-slide images acquired with scanning technology, which the machine vision system is capable of classifying. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
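As a rough illustration of the tile-based texture analysis described above, the sketch below divides a grey-level scene into 100 × 100-pixel subregions and scores each with a grey-level co-occurrence (Haralick-type) statistic using scikit-image. Which statistic corresponds to "Haralick feature 4" in the study, and the decision threshold separating stroma from PCa, are not specified here, so `correlation` and `THRESHOLD` are placeholders.

```python
# Illustrative sketch only: per-tile GLCM texture scoring of a grey-level scene.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

TILE = 100          # subregion size used in the abstract
THRESHOLD = 0.5     # hypothetical cut-off between stroma and PCa

def texture_map(scene: np.ndarray) -> np.ndarray:
    """Return a per-tile texture score for an 8-bit grey-level image."""
    rows, cols = scene.shape[0] // TILE, scene.shape[1] // TILE
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            tile = scene[r*TILE:(r+1)*TILE, c*TILE:(c+1)*TILE]
            glcm = graycomatrix(tile, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            scores[r, c] = graycoprops(glcm, 'correlation')[0, 0]  # stand-in for "feature 4"
    return scores

# Example: label tiles as stroma or carcinoma by thresholding the score.
scene = np.random.randint(0, 256, (400, 400), dtype=np.uint8)  # stand-in image
labels = np.where(texture_map(scene) > THRESHOLD, 'stroma', 'PCa')
```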
Abstract:
Accelerated soil erosion is an aspect of dryland degradation that is affected by repeated intense drought events and land management activities such as commercial livestock grazing. A soil stability index (SSI) that detects the erosion status and susceptibility of a landscape at the pixel level, i.e., stable, erosional, or depositional pixels, was derived from the spectral properties of an archived time series (from 1972 to 1997) of Landsat satellite data of a commercial ranch in northeastern Utah. The SSI was retrospectively validated with contemporary field measures of soil organic matter and with the erosion status surveyed by US federal land management agencies. Catastrophe theory provided the conceptual framework for retrospective assessment of the impact of commercial grazing and soil water availability on the SSI. The overall SSI trend was from an eroding landscape in the drier early 1970s towards stable conditions in the wetter mid-1980s and late 1990s. The landscape catastrophically shifted towards an extreme eroding state coincident with the “Great North American Drought of 1988”. Periods of landscape stability and trajectories toward stability were coincident with extremely wet El Niño events. Commercial grazing had less correlation with soil stability than drought conditions. However, the landscape became more susceptible to erosion events under multiple droughts and grazing. Land managers now have nearly a year's warning of El Niño and La Niña events and can adjust their management decisions according to predicted landscape erosion conditions.
Abstract:
In a typical shoeprint classification and retrieval system, the first step is to segment meaningful basic shapes and patterns in a noisy shoeprint image. This step has significant influence on shape descriptors and shoeprint indexing in the later stages. In this paper, we extend a recently developed denoising technique proposed by Buades, called non-local means filtering, to give a more general model. In this model, the expected result of an operation on a pixel can be estimated by performing the same operation on all of its reference pixels in the same image. A working pixel’s reference pixels are those pixels whose neighbourhoods are similar to the working pixel’s neighbourhood. Similarity is based on the correlation between the local neighbourhoods of the working pixel and the reference pixel. We incorporate a special instance of this general case into thresholding a very noisy shoeprint image. Visual and quantitative comparisons with two benchmark techniques, by Otsu and Kittler, are conducted in the last section, giving evidence of the effectiveness of our method for thresholding noisy shoeprint images.
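A minimal sketch of the general model described above, assuming a Gaussian-weighted sum-of-squared-differences similarity in place of the correlation measure: the thresholding result at a working pixel is estimated from the same operation applied to reference pixels whose neighbourhoods resemble its own. The window sizes, kernel width `H` and threshold `T` are illustrative, and the search is restricted to a local window for speed.

```python
import numpy as np

def nonlocal_threshold(img, T=128, patch=3, search=7, H=10.0):
    """Estimate thresholding at each pixel from similar-neighbourhood reference pixels."""
    pad, half = patch // 2, search // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = np.zeros(img.shape)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            p = padded[y:y + patch, x:x + patch]        # working pixel's neighbourhood
            num = den = 0.0
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < img.shape[0] and 0 <= xx < img.shape[1]:
                        q = padded[yy:yy + patch, xx:xx + patch]
                        w = np.exp(-np.sum((p - q) ** 2) / (H * H))  # neighbourhood similarity
                        num += w * float(img[yy, xx] > T)            # operation on a reference pixel
                        den += w
            out[y, x] = num / den
    return out > 0.5   # weighted vote of thresholded reference pixels

binary = nonlocal_threshold(np.random.randint(0, 256, (32, 32)))
```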
Abstract:
The use of image processing techniques to assess the performance of airport landing lighting, using images collected from an aircraft-mounted camera, is documented. In order to assess the performance of the lighting, it is necessary to uniquely identify each luminaire within an image and then track the luminaires through the entire sequence, storing the relevant information for each luminaire, that is, the total number of pixels that each luminaire covers and the total grey level of these pixels. This pixel grey level can then be used for performance assessment. The authors propose a robust model-based (MB) feature-matching technique by which the performance is assessed. The development of this matching technique is the key to the automated performance assessment of airport lighting. The MB matching technique utilises projective geometry in addition to an accurate template of the 3D model of a landing-lighting system. The template is projected onto the image data and an optimum match is found using nonlinear least-squares optimisation. The MB matching software is compared with standard feature extraction and tracking techniques known within the community, namely the Kanade–Lucas–Tomasi (KLT) and scale-invariant feature transform (SIFT) techniques. The new MB matching technique compares favourably with the SIFT and KLT feature-tracking alternatives. As such, it provides a solid foundation to achieve the central aim of this research, which is to automatically assess the performance of airport lighting.
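A hedged sketch of the model-based matching idea, not the authors' implementation: a known 3-D template of luminaire positions is projected through a simple pinhole camera and the pose is refined by nonlinear least squares against detected image positions. The camera model, pose parameterisation and the data below are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, model_pts, focal=1000.0, cx=320.0, cy=240.0):
    """Project 3-D model points with pose params = (rx, ry, rz, tx, ty, tz)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    cam = model_pts @ R.T + params[3:]                     # world -> camera frame
    return np.column_stack((focal * cam[:, 0] / cam[:, 2] + cx,
                            focal * cam[:, 1] / cam[:, 2] + cy))

def residuals(params, model_pts, detections):
    return (project(params, model_pts) - detections).ravel()

# model_pts: surveyed luminaire layout; detections: matched image measurements (synthetic here).
model_pts = np.array([[0., 0., 50.], [3., 0., 50.], [0., 3., 50.],
                      [3., 3., 50.], [6., 0., 51.], [0., 6., 51.]])
detections = project(np.array([0.01, -0.02, 0.0, 0.5, -0.3, 2.0]), model_pts)
fit = least_squares(residuals, x0=np.zeros(6), args=(model_pts, detections))
print(fit.x)   # recovered pose of the lighting template
```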
Abstract:
In this paper, the compression of multispectral images is addressed. Such 3-D data are characterized by a high correlation across the spectral components. The efficiency of the state-of-the-art wavelet-based coder 3-D SPIHT is considered. Although the 3-D SPIHT algorithm provides the obvious way to process a multispectral image as a volumetric block and, consequently, maintain the attractive properties exhibited in 2-D (excellent performance, low complexity, and embeddedness of the bit-stream), its 3-D tree structure is shown to be inadequately suited to 3-D wavelet-transformed (DWT) multispectral images. The fact that each parent has eight children in the 3-D structure considerably increases the list of insignificant sets (LIS) and the list of insignificant pixels (LIP), since the partitioning of any set produces eight subsets which will be processed similarly during the sorting pass. Thus, a significant portion of the overall bit budget is wasted sorting insignificant information. Through an analysis of the results, we demonstrate that a straightforward 2-D SPIHT technique, when suitably adjusted to maintain rate scalability and carried out in the 3-D DWT domain, overcomes this weakness. In addition, a new SPIHT-based scalable multispectral image compression algorithm exploits, in its initial iterations, the redundancies within each group of two consecutive spectral bands. Numerical experiments on a number of multispectral images have shown that the proposed scheme provides significant improvements over related works.
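The SPIHT coder itself is too long to reproduce here, but the sketch below illustrates the 3-D DWT domain in which the comparison above is made, using PyWavelets; the wavelet, decomposition level and random cube are placeholder choices, and the significance count is only a crude proxy for the sorting-pass workload.

```python
import numpy as np
import pywt

cube = np.random.rand(16, 128, 128)                      # stand-in cube (bands, rows, cols)
coeffs = pywt.wavedecn(cube, wavelet='haar', level=2)    # 3-D DWT of the volume

# Crude proxy for the SPIHT sorting-pass workload: count coefficients that are
# significant against the first bit-plane threshold.
flat, _ = pywt.coeffs_to_array(coeffs)
threshold = 2 ** np.floor(np.log2(np.abs(flat).max()))
print("significant at the first bit-plane:", int(np.count_nonzero(np.abs(flat) >= threshold)))
```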
Abstract:
Magnetic bright points (MBPs) in the internetwork are among the smallest objects in the solar photosphere and appear bright against the ambient environment. An algorithm is presented that can be used for the automated detection of MBPs in the spatial and temporal domains. The algorithm works by mapping the lanes through intensity thresholding. A compass search, combined with a study of the intensity gradient across the detected objects, allows the disentanglement of MBPs from bright pixels within the granules. Object growing is implemented to account for any pixels that might have been removed when mapping the lanes. The images are stabilized by locating long-lived objects that may have been missed due to variable light levels and seeing quality. Tests of the algorithm, employing data taken with the Swedish Solar Telescope, reveal that approximately 90 per cent of MBPs within a 75 × 75 arcsec² field of view are detected.
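A simplified sketch of the detection idea, assuming illustrative threshold levels and size limits: dark lanes are mapped by intensity thresholding, bright pixels adjacent to the lanes are labelled as candidate objects, and small detections are kept. The compass search, gradient test, object growing and temporal stabilisation of the full algorithm are omitted.

```python
import numpy as np
from scipy import ndimage

def detect_mbp_candidates(frame, lane_frac=0.9, bright_frac=1.05, max_area=150):
    mean = frame.mean()
    lanes = frame < lane_frac * mean                 # dark lanes between granules
    bright = frame > bright_frac * mean              # locally bright pixels
    candidates = bright & ndimage.binary_dilation(lanes, iterations=2)
    labels, n = ndimage.label(candidates)
    sizes = ndimage.sum(candidates, labels, index=range(1, n + 1))
    keep = {i + 1 for i, s in enumerate(sizes) if s <= max_area}   # reject large granular features
    return np.isin(labels, list(keep))               # boolean map of MBP candidates

frame = np.random.rand(256, 256)                     # stand-in photospheric frame
mask = detect_mbp_candidates(frame)
```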
Abstract:
A novel image segmentation method based on a constraint satisfaction neural network (CSNN) is presented. The new method uses CSNN-based relaxation but with a modified scanning scheme of the image. In the first level of the algorithm, the pixels are visited at more distant intervals and with wider neighborhoods. The intervals between pixels and their neighborhoods are reduced in the following stages of the algorithm. This method contributes to the rapid and consistent formation of more regular segments. A cluster validity index to determine the number of segments is also added, making the proposed method a fully automatic unsupervised segmentation scheme. The results are compared quantitatively by means of a novel segmentation evaluation criterion and are promising.
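A minimal sketch of the coarse-to-fine visiting scheme, not the CSNN itself: labels are relaxed first on a sparse pixel grid with a wide neighbourhood and then on progressively denser grids with smaller neighbourhoods. The update rule (a data term plus a neighbourhood-agreement term weighted by `beta`) and the schedule values are simplified stand-ins.

```python
import numpy as np

def coarse_to_fine_segment(img, centers, schedule=((8, 9), (4, 5), (1, 3)), beta=0.5):
    labels = np.abs(img[..., None] - centers).argmin(-1)      # initial data-term labels
    for stride, nbhd in schedule:                              # coarse -> fine passes
        half = nbhd // 2
        for y in range(0, img.shape[0], stride):
            for x in range(0, img.shape[1], stride):
                window = labels[max(0, y-half):y+half+1, max(0, x-half):x+half+1]
                data_cost = (img[y, x] - centers) ** 2
                agree = np.array([(window == k).mean() for k in range(len(centers))])
                labels[y, x] = np.argmin(data_cost - beta * agree)   # constraint-style update
    return labels

img = np.random.rand(64, 64)
segmentation = coarse_to_fine_segment(img, centers=np.array([0.2, 0.5, 0.8]))
```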
Abstract:
Apparatus for scanning a moving object includes a visible-waveband sensor oriented to collect a series of images of the object as it passes through a field of view. An image processor uses the series of images to form a composite image. The image processor stores image pixel data for a current image and its predecessor in the series. It uses information in the current image and its predecessor to analyse images and derive likelihood measures indicating probabilities that current image pixels correspond to parts of the object. The image processor estimates motion between the current image and its predecessor from likelihood-weighted pixels. It generates the composite image from frames positioned according to respective estimates of object image motion. Image motion may alternatively be detected by a speed sensor, such as a Doppler radar, sensing object motion directly and providing image timing signals.
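One way the motion-estimation step might be sketched in software, under the assumption that a crude brightness-based likelihood and phase correlation stand in for the apparatus's likelihood measures and motion estimator:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def weighted_motion(prev_frame, frame, background_level=0.2):
    """Estimate inter-frame shift from likelihood-weighted pixels."""
    w_prev = prev_frame * np.clip(prev_frame - background_level, 0, None)   # crude object likelihood
    w_curr = frame * np.clip(frame - background_level, 0, None)
    shift, error, _ = phase_cross_correlation(w_prev, w_curr)
    return shift            # (row, col) displacement of the object between frames

# Frames positioned by the cumulative shift can then be accumulated into a
# composite image of the moving object.
```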
Abstract:
A new front-end image processing chip is presented for real-time small-object detection. It has been implemented using a 0.6 µm, 3.3 V CMOS technology and operates on 10-bit input data at 54 megasamples per second. It occupies an area of 12.9 mm × 13.6 mm (including pads), dissipates 1.5 W, has 92 I/O pins and is to be housed in a 160-pin ceramic quad flat-pack. It performs both one- and two-dimensional FIR filtering and a multilayer perceptron (MLP) neural network function using a reconfigurable array of 21 multiplication-accumulation cells, which corresponds to a window size of 7×3. The chip can cope with images of 2047 pixels per line and can be cascaded to cope with larger window sizes. The chip performs two billion fixed-point multiplications and additions per second.
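A brief software model of the chip's filtering datapath under placeholder assumptions (coefficient values, output rescaling): a 7×3 FIR window applied to 10-bit samples across 2047-pixel lines; the MLP mode and device cascading are not modelled.

```python
import numpy as np
from scipy.signal import convolve2d

image = np.random.randint(0, 1024, (480, 2047))        # 10-bit samples, 2047 pixels per line
coeffs = np.ones((3, 7), dtype=int)                     # 21 MAC cells -> 3-line x 7-pixel window
filtered = convolve2d(image, coeffs, mode='same', boundary='symm') // 16   # crude fixed-point rescale
```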
Abstract:
The POINT-AGAPE (Pixel-lensing Observations with the Isaac Newton Telescope-Andromeda Galaxy Amplified Pixels Experiment) survey is an optical search for gravitational microlensing events towards the Andromeda galaxy (M31). As well as microlensing, the survey is sensitive to many different classes of variable stars and transients. In our first paper of this series, we reported the detection of 20 classical novae (CNe) observed in Sloan r' and i' passbands.
Abstract:
The POINT-AGAPE (Pixel-lensing Observations with the Isaac Newton Telescope-Andromeda Galaxy Amplified Pixels Experiment) survey is an optical search for gravitational microlensing events towards the Andromeda galaxy (M31). As well as microlensing, the survey is sensitive to many different classes of variable stars and transients. Here we describe the automated detection and selection pipeline used to identify M31 classical novae (CNe), and we present the resulting catalogue of 20 CN candidates observed over three seasons. CNe are observed both in the bulge region and over a wide area of the M31 disc. Nine of the CNe are caught during the final rise phase, and all are well sampled in at least two colours. The excellent light-curve coverage has allowed us to detect and classify CNe over a wide range of speed class, from very fast to very slow. Among the light curves is a moderately fast CN exhibiting entry into a deep transition minimum, followed by its final decline. We have also observed in detail a very slow CN which faded by only 0.01 mag d⁻¹ over a 150-d period. We detect other interesting variable objects, including one of the longest-period and most luminous Mira variables. The CN catalogue constitutes a uniquely well-sampled and objectively selected data set with which to study the statistical properties of CNe in M31, such as the global nova rate, the reliability of novae as standard-candle distance indicators and the dependence of the nova population on stellar environment. The findings of this statistical study will be reported in a follow-up paper.
Abstract:
In this paper, we introduce an efficient method for particle selection when tracking objects in complex scenes. First, we improve the proposal distribution of the tracking algorithm by including the current observation, reducing the cost of evaluating particles with very low likelihood. In addition, we use a partitioned sampling approach to decompose the dynamic state into several stages, which makes it possible to handle high-dimensional states without excessive computational cost. To represent the color distribution, the appearance of the tracked object is modelled by sampled pixels. Based on this representation, the probability of any observation is estimated using non-parametric techniques in color space. As a result, we obtain a probability color density image (PDI) in which each pixel indicates its membership of the target color model. In this way, the evaluation of all particles is accelerated by computing the likelihood p(z|x) using the integral image of the PDI.
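The integral-image acceleration mentioned above can be sketched as follows; the PDI here is a stand-in array and the particle hypotheses are arbitrary rectangles, with the color model and dynamics omitted.

```python
import numpy as np

def integral_image(pdi):
    """Zero-padded cumulative sum so rectangle sums need only four look-ups."""
    return np.pad(pdi.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def rect_sum(ii, top, left, height, width):
    """Sum of the PDI over a rectangle, from the padded integral image."""
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

pdi = np.random.rand(240, 320)                       # per-pixel membership of the target color model
ii = integral_image(pdi)
particles = [(60, 80, 40, 30), (100, 150, 40, 30)]   # (top, left, height, width) hypotheses
likelihoods = [rect_sum(ii, *p) / (p[2] * p[3]) for p in particles]
```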
Abstract:
An approach to spatialization is described in which the pixels of an image determine both spatial and other attributes of individual elements in a multi-channel musical texture. The application of this technique in the author’s composition Spaced Images with Noise and Lines is discussed in detail. The relationship of this technique to existing image-to-sound mappings is discussed. The particular advantage of modifying spatial properties with image filters is considered.
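A hedged sketch of one possible pixel-to-spatialization mapping in the spirit described above: each pixel's horizontal position sets a pan across a multi-channel array and its brightness sets an amplitude. The channel count and mapping functions are illustrative assumptions, not the scheme used in Spaced Images with Noise and Lines.

```python
import numpy as np

def pixel_elements(image, channels=8):
    """Yield (channel_gains, amplitude) for each pixel of an 8-bit grey-scale image."""
    rows, cols = image.shape
    for y in range(rows):
        for x in range(cols):
            pan = (x / max(cols - 1, 1)) * (channels - 1)   # horizontal position -> pan
            gains = np.zeros(channels)
            lo = int(np.floor(pan))
            frac = pan - lo
            gains[lo] = 1 - frac                            # simple pairwise panning
            if lo + 1 < channels:
                gains[lo + 1] = frac
            yield gains, image[y, x] / 255.0                # brightness -> amplitude

elements = list(pixel_elements(np.random.randint(0, 256, (8, 8))))
```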
Abstract:
Aim - To describe a new method of evaluating the topographic distribution of fundus autofluorescence in eyes with retinal disease. Methods - Images of fundus autofluorescence were obtained in five patients and 34 normal volunteers using a confocal scanning laser ophthalmoscope (cSLO). To evaluate the topographic distribution of fundus autofluorescence throughout the posterior pole, a rectangular box of 10 × 750 pixels was used as the area of analysis. The box was placed horizontally across the macular region. The intensity of fundus autofluorescence of each pixel within the rectangular box was plotted against its degree of eccentricity. Profiles of fundus autofluorescence from patients were compared with those obtained from the age-matched control group and with cSLO images. Results - Profiles of fundus autofluorescence appeared to represent the topographic distribution of fundus autofluorescence throughout the posterior pole seen in the cSLO images, and allowed rapid identification and quantification of areas of increased or decreased fundus autofluorescence. Conclusions - Fundus autofluorescence profiles appear to be useful for studying the spatial distribution of fundus autofluorescence in eyes with retinal disease.
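A sketch of the profile analysis with placeholder geometry (box position, fovea column, pixel-to-degree scale); intensities are averaged per pixel column here, a simplification of plotting every pixel against its eccentricity.

```python
import numpy as np
import matplotlib.pyplot as plt

def autofluorescence_profile(image, row, col, fovea_col, deg_per_px=0.03):
    box = image[row:row + 10, col:col + 750]             # 10 x 750-pixel analysis box
    profile = box.mean(axis=0)                           # intensity per pixel column
    eccentricity = (np.arange(col, col + 750) - fovea_col) * deg_per_px
    return eccentricity, profile

image = np.random.rand(768, 1024)                        # stand-in cSLO frame
ecc, prof = autofluorescence_profile(image, row=380, col=130, fovea_col=512)
plt.plot(ecc, prof)
plt.xlabel('eccentricity (deg)')
plt.ylabel('autofluorescence intensity')
```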
Abstract:
In recent years, gradient vector flow (GVF) based algorithms have been successfully used to segment a variety of 2-D and 3-D imagery. However, due to the compromise of internal and external energy forces within the resulting partial differential equations, these methods may lead to biased segmentation results. In this paper, we propose MSGVF, a mean shift based GVF segmentation algorithm that can successfully locate the correct borders. MSGVF is developed so that when the contour reaches equilibrium, the various forces resulting from the different energy terms are balanced. In addition, the smoothness constraint of image pixels is kept so that over- or under-segmentation can be reduced. Experimental results on publicly accessible datasets of dermoscopic and optic disc images demonstrate that the proposed method effectively detects the borders of the objects of interest.
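For context, a brief sketch of the classic gradient vector flow field (the Xu-Prince iteration) that MSGVF builds on; the mean shift refinement and the energy balancing of MSGVF are not reproduced, and `mu` and the iteration count are illustrative.

```python
import numpy as np

def gvf(edge_map, mu=0.2, iterations=200):
    """Diffuse the gradient of a normalized edge map into homogeneous regions."""
    fy, fx = np.gradient(edge_map.astype(float))
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2
    for _ in range(iterations):
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        u += mu * lap_u - (u - fx) * mag2     # smooth while staying close to fx near edges
        v += mu * lap_v - (v - fy) * mag2
    return u, v

edge = np.zeros((64, 64)); edge[20:44, 20:44] = 1.0      # toy normalized edge map
u, v = gvf(edge)
```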