502 results for Pixels


Relevance: 10.00%

Abstract:

This paper addresses the challenges of flood mapping using multispectral images. Quantitative flood mapping is critical for flood damage assessment and management. Remote sensing images obtained from various satellite or airborne sensors provide valuable data for this application, from which the extent of the flood can be extracted. However, the great challenge in interpreting these data is to achieve more reliable flood extent mapping that includes both the fully inundated areas and the 'wet' areas where trees and houses are partly covered by water. This is a typical combined pure-pixel and mixed-pixel problem. In this paper, a recently developed extended Support Vector Machines method for spectral unmixing is applied to generate an integrated map showing both pure pixels (fully inundated areas) and mixed pixels (trees and houses partly covered by water). The outputs were compared with the conventional mean-based linear spectral mixture model, and better performance was demonstrated on a subset of Landsat ETM+ data recorded over the Daly River Basin, NT, Australia, on 3 March 2008, after a flood event.
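
The following is a minimal sketch of the conventional linear spectral mixture baseline the paper compares against (not the authors' extended SVM method): each pixel spectrum is modelled as a non-negative combination of endmember spectra, and the recovered water fraction distinguishes fully from partly inundated pixels. The endmember values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: illustrative endmember spectra (water, vegetation, soil)
# over the six Landsat ETM+ reflective bands.
E = np.array([
    [0.06, 0.05, 0.09],
    [0.05, 0.07, 0.12],
    [0.04, 0.06, 0.15],
    [0.02, 0.40, 0.22],
    [0.01, 0.22, 0.30],
    [0.01, 0.10, 0.28],
])

def unmix(pixel):
    """Non-negative least-squares abundances, normalised to sum to 1."""
    a, _ = nnls(E, pixel)
    return a / a.sum() if a.sum() > 0 else a

mixed_pixel = 0.6 * E[:, 0] + 0.4 * E[:, 1]   # 60% water, 40% vegetation
print(unmix(mixed_pixel))  # water fraction ~0.6 -> a partly covered 'wet' pixel
```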

Relevance: 10.00%

Abstract:

The most difficult operation in flood inundation mapping using optical flood images is separating fully inundated areas from the 'wet' areas where trees and houses are partly covered by water. This is a typical instance of the mixed-pixel problem. A number of automatic image classification algorithms have been developed over the years for flood mapping using optical remote sensing images. Most classification algorithms assign each pixel to the class label with the greatest likelihood. However, these hard classification methods often fail to generate a reliable flood inundation map because of the presence of mixed pixels in the images. To solve the mixed-pixel problem, advanced image processing techniques are adopted; linear spectral unmixing is one of the most popular soft classification techniques used for mixed-pixel analysis. The good performance of linear spectral unmixing depends on two important issues: the method of selecting endmembers and the method of modelling the endmembers for unmixing. This paper presents an improvement in the adaptive selection of an endmember subset for each pixel in spectral unmixing for reliable flood mapping. Using a fixed set of endmembers to unmix all pixels in an entire image can cause overestimation of the endmember spectra residing in a mixed pixel and hence reduce the performance of spectral unmixing. In contrast, applying an adaptively estimated subset of endmembers for each pixel can decrease the residual error in the unmixing results and provide a reliable output. The paper also shows that the proposed method improves the accuracy of conventional linear unmixing methods and is easy to apply. Three different linear spectral unmixing methods were applied to test the improvement in unmixing results. Experiments were conducted on three different sets of Landsat-5 TM images covering three different flood events in Australia, to examine the method under different flooding conditions, and satisfactory flood mapping outcomes were achieved.
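
One simple way to realise a per-pixel adaptive endmember subset, sketched below under the assumption that the best subset is the one minimising the reconstruction residual (a plausible reading of the idea, not necessarily the paper's exact selection rule):

```python
from itertools import combinations

import numpy as np
from scipy.optimize import nnls

def adaptive_unmix(pixel, E, max_size=3):
    """Return (best_subset, abundances) minimising the unmixing residual.

    pixel: spectrum of one pixel; E: (n_bands, n_endmembers) library.
    """
    n = E.shape[1]
    best_subset, best_a, best_resid = None, None, np.inf
    for k in range(1, max_size + 1):
        for subset in combinations(range(n), k):
            a, resid = nnls(E[:, list(subset)], pixel)
            if resid < best_resid:
                best_subset, best_a, best_resid = subset, a, resid
    return best_subset, best_a
```

Recording the chosen subset for every pixel yields the kind of per-pixel endmember index that the adaptive scheme relies on, at the cost of unmixing each pixel several times.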

Relevance: 10.00%

Abstract:

The most difficult operation in flood inundation mapping using optical flood images is mapping the 'wet' areas where trees and houses are partly covered by water. This is a typical instance of the mixed-pixel problem. A number of automatic image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, most of which label each pixel as a single class. However, they often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, the two most important issues in spectral unmixing are investigated: the method of selecting endmembers and the method of modelling the primary classes for unmixing. We conduct comparative studies of three typical spectral unmixing algorithms: Partial Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis, and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed by error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. The conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method provides a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to both pure and mixed pixels to provide integrated hard/soft classification results. Our research also identifies and explores a serious drawback of endmember selection in current spectral unmixing methods, which apply a fixed set of endmember classes or pure classes to the mixture analysis of every pixel in an entire image. Since it is not accurate to assume that every pixel in an image contains all endmember classes, these methods usually cause an overestimation of the fractional abundances in a particular pixel. In this thesis, a subset of adaptive endmembers is derived for every pixel using the proposed methods to form an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
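
As a minimal sketch of the root mean square error assessment mentioned above, the per-class RMSE between estimated and reference fraction maps can be computed as follows (array layout is an assumption for illustration):

```python
import numpy as np

def fraction_rmse(estimated, reference):
    """Per-class RMSE of fractional abundances.

    estimated, reference: arrays of shape (n_pixels, n_classes),
    each row holding the class fractions of one pixel.
    """
    return np.sqrt(np.mean((estimated - reference) ** 2, axis=0))

# e.g. fraction_rmse(unmixed, ground_truth) -> one error value per primary class
```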

Relevance: 10.00%

Abstract:

ALICE (A Large Ion Collider Experiment) is the LHC (Large Hadron Collider) experiment devoted to investigating the strongly interacting matter created in nucleus-nucleus collisions at LHC energies. The ALICE Inner Tracking System (ITS) consists of six cylindrical layers of silicon detectors using three different technologies; in the outward direction: two layers of pixel detectors, followed by two layers each of drift and strip detectors. About 13,000 parameters must be determined in the spatial alignment of the 2198 sensor modules of the ITS. The target alignment precision is well below 10 microns in some cases (pixels). The sources of alignment information include survey measurements and tracks reconstructed from cosmic rays and from proton-proton collisions. The main track-based alignment method uses the Millepede global approach. An iterative local method was developed and used as well. We present the results obtained for the ITS alignment using about 10^5 charged tracks from cosmic rays collected during summer 2008, with the ALICE solenoidal magnet switched off.
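
A toy illustration of an iterative local alignment method (not the Millepede global fit, and far simpler than the real six-parameter module alignment): straight-line tracks are fitted with the current alignment, and each detector layer is shifted by the mean residual of its hits until the corrections converge. All numbers are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_tracks = 6, 1000
z = np.arange(n_layers, dtype=float)          # layer positions along the track
true_offsets = rng.normal(0, 0.1, n_layers)   # unknown misalignments
est_offsets = np.zeros(n_layers)

# Simulated straight-line cosmic tracks: hit = a + b*z + layer offset + noise
a = rng.normal(0, 1, n_tracks)
b = rng.normal(0, 0.1, n_tracks)
hits = (a[:, None] + b[:, None] * z + true_offsets
        + rng.normal(0, 0.01, (n_tracks, n_layers)))

for _ in range(20):  # iterate: fit tracks, then shift layers by mean residual
    corrected = hits - est_offsets
    coeffs = np.polyfit(z, corrected.T, 1)            # slope/intercept per track
    fitted = coeffs[0][:, None] * z + coeffs[1][:, None]
    est_offsets += (corrected - fitted).mean(axis=0)

# Remaining differences lie along the unconstrained global shift/tilt
# ("weak") modes that track-based alignment cannot fix by itself.
print(np.round(est_offsets - true_offsets, 3))
```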

Relevance: 10.00%

Abstract:

Template matching is concerned with measuring the similarity between patterns of two objects. This paper proposes a memory-based reasoning approach for pattern recognition of binary images with a large template set. Memory-based reasoning intrinsically requires a large database, and some binary image recognition problems inherently need large template sets, such as the recognition of Chinese characters, which needs thousands of templates. The proposed algorithm is based on the Connection Machine, the most massively parallel machine of its day, and uses a multiresolution method to search for the matching template. The approach uses a pyramid data structure for the multiresolution representation of the templates and the input image pattern. For a given binary image, it scans the template pyramid searching for a match. A binary image of N × N pixels can be matched in O(log N) time by our algorithm, independent of the number of templates. Implementation of the proposed scheme is described in detail.
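
Below is a minimal, sequential sketch of the coarse-to-fine pyramid idea (the paper's scheme runs this massively in parallel on the Connection Machine; the pruning rule here is an illustrative stand-in):

```python
import numpy as np

def pyramid(img, levels):
    """Binary image pyramid: each level ORs 2x2 blocks of the level below."""
    pyr = [img]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape
        p = pyr[-1][:h - h % 2, :w - w % 2]
        pyr.append(p.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3)))
    return pyr

def best_match(image, templates, levels=3):
    """Prune templates at coarse levels, refine survivors at finer ones."""
    img_pyr = pyramid(image, levels)
    tmpl_pyrs = [pyramid(t, levels) for t in templates]
    candidates = list(range(len(templates)))
    for lvl in reversed(range(levels)):           # coarse -> fine
        scores = {i: np.mean(img_pyr[lvl] == tmpl_pyrs[i][lvl])
                  for i in candidates}
        ranked = sorted(scores, key=scores.get, reverse=True)
        candidates = ranked[:max(1, len(ranked) // 4)]  # keep top quarter
    return candidates[0]
```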

Relevance: 10.00%

Abstract:

Non-invasive characterization of the elastic properties of soft biological tissues has been a focus of active research in recent years. Light is highly scattered by biological tissues, and hence sophisticated reconstruction algorithms are required to achieve good imaging depth and reasonable resolution. Ultrasound (US), on the other hand, is scattered much less by soft tissues and has long been used for imaging in biomedical ultrasound systems. Combining the contrast sensitivity of light with the good localization of ultrasound provides a promising technique for non-invasively characterizing thicker tissues deep inside the body. The elasticity of the tissues is characterized by studying their response to mechanical excitation induced remotely by an acoustic radiation force, probed using an optical laser. The US-modulated optical signals that traverse the tissue are detected by a CCD camera used as a detector array, and the pixel map formed on the CCD is used to characterize the embedded inhomogeneities. The use of a CCD camera improves the signal-to-noise ratio (SNR) by averaging the signals from all of the CCD pixels.
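
A minimal numeric sketch of why averaging over CCD pixels improves SNR: uncorrelated noise averages down as 1/sqrt(N) while the modulated signal component is preserved. The signal and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, signal, noise_sigma = 10_000, 0.05, 1.0
samples = signal + rng.normal(0, noise_sigma, n_pixels)  # one sample per pixel

snr_single = signal / noise_sigma
snr_averaged = signal / (noise_sigma / np.sqrt(n_pixels))  # sqrt(N) gain
print(f"single-pixel SNR: {snr_single:.3f}, averaged SNR: {snr_averaged:.1f}")
print(f"empirical mean over pixels: {samples.mean():.4f} (true signal {signal})")
```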

Relevance: 10.00%

Abstract:

Fetal lung and liver tissues were examined by ultrasound in 240 subjects between 24 and 38 weeks of gestational age in order to investigate the feasibility of predicting lung maturity from the textural features of sonograms. A region of interest of 64 × 64 pixels was used for extracting textural features. Since the histological properties of the liver are claimed to remain constant with gestational age, features obtained from the lung region were compared with those from the liver. Though the mean values of some of the features show a specific trend with gestational age, the variance is too high to allow definite prediction of the gestational age. We therefore restricted our purview to investigating the feasibility of predicting fetal lung maturity using statistical textural features. Out of the 64 features extracted, those that correlate with gestational age and are less computationally intensive were selected. The results of our study show that the sonographic features hold some promise in determining whether the fetal lung is mature or immature.
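
In the spirit of the statistical textural features used here, a minimal sketch of extracting grey-level co-occurrence (GLCM) features from a 64 × 64 region of interest, assuming a recent scikit-image (where the functions are named graycomatrix/graycoprops); the exact feature set of the study is not specified:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def roi_texture_features(roi):
    """roi: 64x64 uint8 array cropped from the ultrasound image."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```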

Relevance: 10.00%

Abstract:

Over the last few decades, there has been significant land cover (LC) change across the globe due to the increasing demands of a burgeoning population and urban sprawl. In order to take account of this change, accurate and up-to-date LC maps are needed. Mapping and monitoring of LC in India is carried out at the national level using multi-temporal IRS AWiFS data. Multispectral data such as IKONOS, Landsat TM/ETM+, IRS-1C/1D LISS-III/IV, AWiFS and SPOT-5 have adequate spatial resolution (~1 m to 56 m) for LC mapping at 1:50,000 scale. However, for developing countries and those with large geographical extent, seasonal LC mapping is prohibitive with data from commercial sensors of limited spatial coverage. Superspectral data from the MODIS sensor are freely available and offer better temporal resolution (8-day composites) and spectral information. MODIS pixels typically contain a mixture of various LC types (owing to the coarse spatial resolution of 250, 500 and 1000 m), especially in more fragmented landscapes. In this context, linear spectral unmixing is useful for mapping patchy land covers, such as those that characterise much of the Indian subcontinent. This work evaluates the existing unmixing technique for LC mapping using MODIS data, with endmembers extracted through the Pixel Purity Index (PPI), scatter plots and N-dimensional visualisation. Abundance maps were generated for agriculture, built-up land, forest, plantations, wasteland/others and water bodies. Assessment of the results using ground truth and a LISS-III classified map shows 86% overall accuracy, suggesting the potential for broad-scale applicability of the technique with superspectral data for natural resource planning and inventory applications.
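
A minimal sketch of the Pixel Purity Index (PPI) idea used above for endmember extraction: project all pixel spectra onto many random unit vectors ("skewers") and count how often each pixel lands at an extreme of a projection; frequently extreme pixels are candidate endmembers.

```python
import numpy as np

def pixel_purity_index(spectra, n_skewers=10_000, seed=0):
    """spectra: (n_pixels, n_bands) array; returns a purity count per pixel."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(len(spectra), dtype=int)
    for _ in range(n_skewers):
        skewer = rng.normal(size=spectra.shape[1])   # random direction
        proj = spectra @ skewer
        counts[proj.argmin()] += 1                   # extremes score a hit
        counts[proj.argmax()] += 1
    return counts  # pixels with high counts are candidate endmembers
```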

Relevance: 10.00%

Abstract:

Land cover (LC) refers to what is actually present on the ground and provides insights into the underlying solutions for many issues, from water pollution to sustainable economic development. One of the greatest challenges in modeling LC changes using remotely sensed (RS) data is the scale-resolution mismatch: the spatial resolution is coarser than required, and the sub-pixel heterogeneity is important but not readily knowable. Many pixels therefore consist of a mixture of multiple classes. The solution to the mixed-pixel problem typically centres on soft classification techniques that estimate the proportion of each class within a pixel. However, the spatial distribution of these class components within the pixel remains unknown. This study investigates Orthogonal Subspace Projection, an unmixing technique, and uses the pixel-swapping algorithm to predict the spatial distribution of LC at sub-pixel resolution. Both algorithms are applied to simulated and actual satellite images for validation. The accuracy on the simulated images is ~100%, while IRS LISS-III and MODIS data show accuracies of 76.6% and 73.02%, respectively. This demonstrates the relevance of these techniques for applications such as urban/non-urban and forest/non-forest classification studies.
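
A compact, binary-class sketch of the pixel-swapping step: the class counts inside each coarse pixel stay fixed (honouring the unmixed fractions) while sub-pixel labels are swapped to increase spatial clustering. A simple neighbourhood mean stands in for the attractiveness function, so this is an illustration of the principle rather than the study's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pixel_swap(sub_map, scale, n_iters=50):
    """sub_map: initial 0/1 sub-pixel map consistent with the class
    fractions; scale: number of sub-pixels per coarse pixel side."""
    for _ in range(n_iters):
        attract = uniform_filter(sub_map.astype(float), size=3)
        for i in range(0, sub_map.shape[0], scale):
            for j in range(0, sub_map.shape[1], scale):
                blk = np.s_[i:i + scale, j:j + scale]
                a, m = attract[blk], sub_map[blk]
                ones, zeros = np.where(m == 1), np.where(m == 0)
                if ones[0].size and zeros[0].size:
                    worst = np.argmin(a[ones])   # least-attracted class pixel
                    best = np.argmax(a[zeros])   # most-attracted background
                    if a[zeros][best] > a[ones][worst]:  # swap only if it helps
                        m[ones[0][worst], ones[1][worst]] = 0
                        m[zeros[0][best], zeros[1][best]] = 1
    return sub_map
```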

Relevance: 10.00%

Abstract:

We propose the design and implementation of a hardware architecture for a spatial-prediction-based image compression scheme, which consists of a prediction phase and a quantization phase. In the prediction phase, a hierarchical tree structure obtained from the test image is used to predict every central pixel of the image from its four neighboring pixels. The prediction scheme generates an error image, to which a wavelet/sub-band coding algorithm can be applied to obtain efficient compression. The software model is tested for its performance in terms of entropy and standard deviation. Memory and silicon area constraints play a vital role in realizing the hardware for hand-held devices. The hardware architecture constructed for the proposed scheme exploits both instruction-level and data-level parallelism. The processor consists of pipelined functional units to obtain maximum throughput and a higher speed of operation. The hardware model is analyzed for performance in terms of throughput, speed and power. The results indicate that the proposed architecture is suitable for power-constrained implementations with higher data rates.
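
A minimal software sketch of the prediction phase (a plain four-neighbour mean predictor; the paper's hierarchical tree structure is not reproduced here): each interior pixel is predicted from its four neighbours, and the entropy of the resulting error image indicates the achievable compression.

```python
import numpy as np

def prediction_error(img):
    """img: 2-D grayscale array; returns the error image for interior pixels."""
    img = img.astype(float)
    pred = (img[:-2, 1:-1] + img[2:, 1:-1] +
            img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    return img[1:-1, 1:-1] - pred

def entropy(err):
    """Shannon entropy (bits) of the rounded prediction errors."""
    hist, _ = np.histogram(err.round(), bins=512)
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()
```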

Relevance: 10.00%

Abstract:

Natural hazards such as landslides are triggered by numerous factors such as ground movements, rock falls, slope failure, debris flows and slope instability. Changes in slope stability happen due to human intervention, anthropogenic activities, changes in soil structure, and loss or absence of vegetation (changes in land cover). Loss of vegetation occurs when forest is fragmented by anthropogenic activities. Hence, land cover mapping with forest fragmentation can provide vital information for visualising the regions that require immediate attention from a slope stability perspective. The main objective of this paper is to understand the rate of change in the forest landscape from 1973 to 2004 through multi-sensor remote sensing data analysis. The forest fragmentation index presented here is based on temporal land use information and a forest fragmentation model, in which forest pixels are classified as patch, transitional, edge, perforated or interior, giving a measure of forest continuity. The analysis, carried out for five prominent watersheds of Uttara Kannada district (Aganashini, Bedthi, Kali, Sharavathi and Venkatpura), revealed that interior forest is continuously decreasing while patch, transitional, edge and perforated forest show an increasing trend. The effect of forest fragmentation on landslide occurrence was visualised by overlaying the landslide occurrence points on the classified image and the forest fragmentation map. The increasing patch and transitional forest on hill slopes mark the areas prone to landslides, evident from field verification, indicating that deforestation is a major triggering factor for landslides. This emphasises the need for immediate conservation measures for sustainable management of the landscape. Quantifying and describing land use/land cover change and fragmentation is crucial for assessing the effect of land management policies and environmental protection decisions.
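
A simplified sketch of window-based forest fragmentation labelling (a proportion-only variant in the spirit of Riitters-style models; the thresholds and window size are illustrative, not the paper's): the proportion of forest in a moving window determines the category of each forest pixel.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fragmentation(forest, size=5):
    """forest: 0/1 mask; returns per-pixel labels for forest pixels."""
    pf = uniform_filter(forest.astype(float), size=size)  # forest proportion
    labels = np.full(forest.shape, '', dtype=object)
    f = forest == 1
    labels[f & (pf >= 0.999)] = 'interior'   # fully forested window
    labels[f & (pf < 0.999)] = 'perforated'  # later rules overwrite earlier
    labels[f & (pf < 0.8)] = 'edge'
    labels[f & (pf < 0.6)] = 'transitional'
    labels[f & (pf < 0.4)] = 'patch'
    return labels
```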

Relevance: 10.00%

Abstract:

This paper discusses an approach for river mapping and flood evaluation based on multi-temporal time-series analysis of satellite images, utilizing pixel spectral information for image classification and region-based segmentation for extracting water-covered regions. Analysis of MODIS satellite images is applied in three stages: before, during and after the flood. Water regions are extracted from the MODIS images using image classification (based on spectral information) and image segmentation (based on spatial information). Multi-temporal MODIS images from 'normal' (non-flood) and flood periods are processed in two steps. In the first step, image classifiers such as Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs) separate the image pixels into water and non-water groups based on their spectral features. The classified image is then segmented using the spatial features of the water pixels to remove misclassified water. From the results obtained, we evaluate the performance of the method and conclude that the combination of image classification (SVM and ANN) and region-based image segmentation is an accurate and reliable approach for extracting water-covered regions.
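
A minimal sketch of the two-step approach, assuming labelled training spectra are available: an SVM separates water from non-water pixels by their spectral features, then small disconnected water blobs are removed as likely misclassifications (the region-size threshold is illustrative).

```python
import numpy as np
from scipy.ndimage import label
from sklearn.svm import SVC

def map_water(bands, train_X, train_y, min_region=50):
    """bands: (H, W, n_bands) image; train_X/train_y: labelled spectra (0/1)."""
    clf = SVC(kernel='rbf').fit(train_X, train_y)          # spectral step
    water = clf.predict(bands.reshape(-1, bands.shape[-1]))
    water = water.reshape(bands.shape[:2]).astype(int)
    regions, n = label(water)                              # spatial step
    for r in range(1, n + 1):                              # drop tiny regions
        if (regions == r).sum() < min_region:
            water[regions == r] = 0
    return water
```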

Relevance: 10.00%

Abstract:

A CMOS gas sensor array platform with digital read-out, containing 27 sensor pixels and a reference pixel, is presented. The signal conditioning circuit at each pixel includes digitally programmable gain stages for sensor signal amplification, followed by a second-order continuous-time delta-sigma modulator for digitization. Each sensor pixel can be functionalized with a distinct sensing material that facilitates transduction based on impedance change. The impedance spectrum (up to 10 kHz) of the sensor is obtained off-chip by computing the fast Fourier transform of the sensor and reference pixel outputs. The reference pixel also compensates for the phase shift introduced by the signal processing circuits. The chip also contains a temperature sensor with digital readout for ambient temperature measurement. A sensor pixel is functionalized with a polycarbazole conducting polymer for sensing volatile organic gases, and measurement results are presented. The chip is fabricated in a 0.35 μm CMOS technology and requires a single post-processing step for functionalization. It consumes 57 mW from a 3.3 V supply.
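
A minimal sketch of the off-chip computation described: the relative impedance spectrum is estimated from the FFT of the digitized sensor output divided by that of the reference pixel, which also cancels the common phase shift of the read-out chain. Signal names and the sampling setup are illustrative assumptions.

```python
import numpy as np

def impedance_spectrum(sensor_out, reference_out, fs):
    """sensor_out, reference_out: 1-D arrays of pixel samples at rate fs."""
    S = np.fft.rfft(sensor_out)
    R = np.fft.rfft(reference_out)
    freqs = np.fft.rfftfreq(len(sensor_out), d=1 / fs)
    keep = freqs <= 10e3                      # spectrum up to 10 kHz
    return freqs[keep], (S / R)[keep]         # complex ratio ~ relative Z(f)
```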

Relevance: 10.00%

Abstract:

Adaptive Gaussian Mixture Models (GMMs) have been one of the most popular and successful approaches to foreground segmentation on multimodal background scenes. However, the good accuracy of the GMM algorithm comes at a high computational cost. Zivkovic proposed an improved GMM technique that reduces computational cost by adaptively minimizing the number of modes. In this paper, we propose a modification to his adaptive GMM algorithm that further reduces execution time by replacing expensive floating-point computations with low-cost integer operations. To maintain accuracy, we derive a heuristic that computes periodic floating-point updates of the GMM weight parameter using the value of an integer counter. Experiments show speedups in the range of 1.33 to 1.44 on standard video datasets in which a large fraction of pixels are multimodal.
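
Zivkovic's adaptive GMM, which this paper builds on, is available in OpenCV as the MOG2 background subtractor; a short usage sketch follows (the paper's integer-arithmetic modification is not part of OpenCV, and the input file name is illustrative).

```python
import cv2

cap = cv2.VideoCapture("video.mp4")           # illustrative input video
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                           detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)               # per-pixel foreground mask
cap.release()
```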

Relevance: 10.00%

Abstract:

We propose a set of metrics that evaluate the uniformity, sharpness, continuity, noise, stroke width variance, pulse width ratio, transient pixel density, entropy and variance of components to quantify the quality of a document image. The measures are intended to be used in any optical character recognition (OCR) engine to estimate, a priori, the expected performance of the OCR. The suggested measures have been evaluated on many document images in different scripts. The quality of each document image was manually annotated by users to create a ground truth. The idea is to correlate the values of the measures with the user-annotated data: if a calculated measure matches the annotated description, the metric is accepted; otherwise it is rejected. Of the metrics proposed, some were accepted and the rest rejected. We have defined metrics that are easy to estimate. The metrics proposed in this paper are based on feedback from home-grown OCR engines for Indic (Tamil and Kannada) languages. The metrics are independent of the script, and depend only on the quality and age of the paper and the printing. Experiments and results for each proposed metric are discussed. Actual recognition of the printed text is not performed to evaluate the proposed metrics. Sometimes a document image containing broken characters is rated as a good document image by the evaluated metrics; this remains an unsolved challenge. The proposed measures work on grayscale document images and fail to provide reliable information on binarized document images.
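
A minimal sketch of one of the listed measures, grayscale entropy, which can flag low-contrast or degraded pages before OCR is attempted (the acceptance threshold is an illustrative assumption, not the paper's):

```python
import numpy as np

def grayscale_entropy(img):
    """img: 2-D uint8 grayscale document image; returns entropy in bits."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()

# e.g. pages scoring below ~3 bits may be too flat for reliable OCR
```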