168 results for Landsat satellite image


Relevance: 20.00%

Abstract:

This paper presents image reconstruction using the fan-beam filtered backprojection (FBP) algorithm with no backprojection weight, applied to truncated projection data completed by windowed linear prediction (WLP). Image reconstruction from truncated projections aims to reconstruct the object accurately from the available limited projection data. Because the projection data are incomplete, the reconstructed image contains truncation artifacts that extend into the region of interest (ROI), making the image unsuitable for further use. Data-completion techniques have been shown to be effective in such situations. We use the windowed linear prediction technique for projection completion, then apply the fan-beam FBP algorithm with no backprojection weight for 2-D image reconstruction, and evaluate the quality of the images reconstructed after WLP completion.
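
A minimal Python sketch of the pipeline (not the authors' implementation): each truncated projection is extended by a crude linear extrapolation standing in for windowed linear prediction, and the completed sinogram is reconstructed with skimage's parallel-beam FBP as a stand-in for the fan-beam algorithm with no backprojection weight.

```python
# Sketch only: parallel-beam FBP stands in for fan-beam FBP, and linear
# extrapolation stands in for windowed linear prediction (WLP).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=theta)

# Simulate lateral truncation: the detector misses 40 bins on each side.
trunc = 40
truncated = sinogram[trunc:-trunc, :]

def complete(proj, n_missing, fit_len=10):
    """Extend one projection outward using the slope of its edge samples."""
    left_slope = np.polyfit(np.arange(fit_len), proj[:fit_len], 1)[0]
    right_slope = np.polyfit(np.arange(fit_len), proj[-fit_len:], 1)[0]
    left = np.clip(proj[0] - left_slope * np.arange(n_missing, 0, -1), 0, None)
    right = np.clip(proj[-1] + right_slope * np.arange(1, n_missing + 1), 0, None)
    return np.concatenate([left, proj, right])

completed = np.stack([complete(truncated[:, i], trunc)
                      for i in range(truncated.shape[1])], axis=1)
reconstruction = iradon(completed, theta=theta, filter_name='ramp')
```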

Relevance: 20.00%

Abstract:

Editors' note: Flexible, large-area display and sensor arrays are finding growing applications in multimedia and future smart homes. This article first analyzes and compares current flexible devices, then discusses the implementation, requirements, and testing of flexible sensor arrays. —Jiun-Lang Huang (National Taiwan University) and Kwang-Ting (Tim) Cheng (University of California, Santa Barbara)

Relevance: 20.00%

Abstract:

Fusion of multi-sensor imaging data enables a synergetic interpretation of complementary information obtained by sensors of different spectral ranges. Multi-sensor data of diverse spectral, spatial and temporal resolutions require advanced numerical techniques for analysis and interpretation. This paper reviews ten advanced pixel-based image fusion techniques: component substitution (COS), local mean and variance matching, modified IHS (Intensity-Hue-Saturation), Fast Fourier Transform-enhanced IHS, Laplacian pyramid, local regression, smoothing filter (SF), Sparkle, SVHC and synthetic variable ratio. These techniques were tested on IKONOS data (panchromatic band at 1 m spatial resolution and four multispectral bands at 4 m spatial resolution). Evaluation of the fused results through various accuracy measures revealed that the SF and COS methods produce images closest to what the corresponding multispectral sensor would observe at the highest resolution level (1 m).
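
As an illustration of the component-substitution idea common to several of these methods, here is a minimal Python sketch of IHS-style pan-sharpening (assumed inputs: a multispectral RGB stack already resampled to the panchromatic grid, and the panchromatic band; the simple mean intensity and moment-matching step are simplifying assumptions, not any one paper's recipe).

```python
import numpy as np

def ihs_pansharpen(ms_rgb, pan):
    """ms_rgb: (H, W, 3) multispectral resampled to pan resolution (float);
    pan: (H, W) panchromatic band. Returns the sharpened (H, W, 3) image."""
    intensity = ms_rgb.mean(axis=2)                  # crude I of IHS
    # Match pan to the intensity's mean/std so radiometry is preserved.
    pan_matched = ((pan - pan.mean()) * (intensity.std() / (pan.std() + 1e-12))
                   + intensity.mean())
    # Component substitution: inject the pan detail into every band.
    detail = (pan_matched - intensity)[..., None]
    return ms_rgb + detail
```

Most component-substitution methods share this detail-injection view; they differ mainly in how the intensity component is computed and matched.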

Relevance: 20.00%

Abstract:

Land cover (LC) and land use (LU) dynamics induced by human and natural processes play a major role in global as well as regional landscape patterns, influencing biodiversity, hydrology, ecology and climate. Changes in LC features resulting in forest fragmentation have posed direct threats to biodiversity, endangering the sustainability of ecological goods and services. Habitat fragmentation is of added concern, as the residual spatial patterns mitigate or exacerbate edge effects. LU dynamics are obtained by classifying temporal remotely sensed satellite imagery of different spatial and spectral resolutions. This paper reviews five image classification algorithms using spatio-temporal data of a temperate watershed in Himachal Pradesh, India. The Gaussian Maximum Likelihood classifier was found to be apt for analysing spatial patterns at the regional scale, based on accuracy assessment through error matrices and ROC (receiver operating characteristic) curves. The LU information thus derived was then used to assess spatial changes from temporal data using principal component analysis and correspondence analysis based image differencing. Forest area dynamics were further studied by analysing the different types of fragmentation through forest fragmentation models. The computed forest fragmentation and landscape metrics show a decline of interior intact forests with a substantial increase in patch forest during 1972-2007.
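
A minimal sketch of a Gaussian Maximum Likelihood classifier in Python (an illustration, not the paper's code; `X_train`, `y_train` and `pixels` are assumed arrays of training band values, class labels, and pixels to classify).

```python
import numpy as np
from scipy.stats import multivariate_normal

def gml_fit(X_train, y_train):
    """Fit one Gaussian (mean, covariance) per class from training pixels."""
    classes = np.unique(y_train)
    return {c: multivariate_normal(X_train[y_train == c].mean(axis=0),
                                   np.cov(X_train[y_train == c].T))
            for c in classes}

def gml_predict(models, pixels):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    ll = np.stack([m.logpdf(pixels) for m in models.values()], axis=1)
    labels = np.array(list(models.keys()))
    return labels[np.argmax(ll, axis=1)]
```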

Relevance: 20.00%

Abstract:

Urbanisation has evinced interest from a wide section of society, including experts, amateurs and novices. The multidisciplinary scope of the subject draws interest from ecologists, urban planners, civil engineers, sociologists, administrators, policy makers, students and, finally, the common man. With development and infrastructure initiatives concentrated mostly around urban centres, urbanisation and sprawl have direct impacts on the environment and natural resources. The wisdom lies in how effectively we plan urban growth without hampering the environment, excessively harnessing natural resources or disturbing the natural set-up. Research on these issues helps urban residents and policymakers make informed decisions and take action to restore these resources before they are lost. Ultimately, the power to balance urban ecosystems rests with regional awareness, policies, administrative practices, management and operations. This publication on urban systems is aimed at helping scientists, policy makers, engineers, urban planners and ultimately the common man to visualise how towns and cities grow over time, based on investigations in the regions around highways and cities. Two important highways in Karnataka, South India, viz. the Bangalore - Mysore highway and the Mangalore - Udupi highway, and the Tiruchirapalli - Tanjavore - Kumbakonam triangular road network in Tamil Nadu, South India, were considered in this investigation. Geographic Information System and remote sensing data were used to analyse the pattern of urbanisation. This was coupled with spatial and temporal data from Survey of India toposheets (for 1972), satellite imagery procured from the National Remote Sensing Agency (NRSA) (LANDSAT TM for 1987 and IRS LISS III for 1999), demographic details from the Census of India (1971, 1981, 1991 and 2001) and village maps from the Directorate of Survey Settlements and Land Records, Government of Karnataka. Together these data enabled quantification of the increase in built-up area over nearly three decades. With the intent of identifying potential sprawl zones, this growth could be modelled and projected for future decades. The study also quantified several metrics useful in the study of urban sprawl.
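
A minimal sketch of the kind of built-up area bookkeeping described above (all file names and the class code are hypothetical assumptions; `pixel_area_ha` depends on the sensor's ground resolution, here 30 m pixels = 0.09 ha).

```python
import numpy as np

BUILT_UP = 1  # assumed class code in the classified land-use rasters

def builtup_area_ha(raster, pixel_area_ha):
    """Total built-up area in hectares for one classified raster."""
    return np.count_nonzero(raster == BUILT_UP) * pixel_area_ha

# Hypothetical per-epoch classified rasters (e.g. from GML classification).
rasters = {1972: np.load("lu_1972.npy"),
           1987: np.load("lu_1987.npy"),
           1999: np.load("lu_1999.npy")}
for year, raster in sorted(rasters.items()):
    print(year, builtup_area_ha(raster, pixel_area_ha=0.09))
```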

Relevance: 20.00%

Abstract:

Image segmentation is formulated as a stochastic process whose invariant distribution is concentrated at points of the desired region. By choosing multiple seed points, different regions can be segmented. The algorithm is based on the theory of time-homogeneous Markov chains and has been largely motivated by the technique of simulated annealing. The method proposed here has been found to perform well on clean as well as noisy real-world images, while being computationally far less expensive than stochastic optimisation techniques.
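
A minimal Python sketch in the same spirit (a simplified stand-in, not the paper's Markov-chain construction): pixels adjacent to the growing region are accepted with a Boltzmann probability under a cooling schedule, as in simulated annealing.

```python
import numpy as np

def stochastic_region_grow(img, seed, t0=1.0, cooling=0.9995,
                           n_iter=20000, rng=None):
    """img: 2-D gray image; seed: (row, col) inside the target region."""
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    mean = float(img[seed])
    steps = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])
    temp = t0
    for _ in range(n_iter):
        ys, xs = np.nonzero(region)                 # current region pixels
        i = rng.integers(len(ys))
        dy, dx = steps[rng.integers(4)]
        y, x = ys[i] + dy, xs[i] + dx
        if 0 <= y < h and 0 <= x < w and not region[y, x]:
            cost = abs(float(img[y, x]) - mean)     # dissimilarity to region
            if rng.random() < np.exp(-cost / temp): # Boltzmann acceptance
                region[y, x] = True
                mean += (float(img[y, x]) - mean) / region.sum()
        temp *= cooling                             # annealing schedule
    return region
```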

Relevance: 20.00%

Abstract:

Conventional encryption techniques are usually applicable to text data and are often unsuited to encrypting multimedia objects, for two reasons. Firstly, the huge sizes associated with multimedia objects make conventional encryption computationally costly. Secondly, multimedia objects come with massive redundancies, which are useful in avoiding encryption of the objects in their entirety. Hence a class of encryption techniques devoted to encrypting multimedia objects such as images has been developed. These techniques make use of the fact that the data comprising multimedia objects like images can in general be segregated into two disjoint components, namely salient and non-salient. While the former component contributes to the perceptual quality of the object, the latter only adds minor details to it. In the context of images, the salient component is often much smaller in size than the non-salient component. Encryption effort is considerably reduced if only the salient component is encrypted while the other component is left unencrypted. A key challenge is to find means to achieve a desirable segregation, so that the unencrypted component does not reveal any information about the object itself. In this study, an image encryption approach that uses fractal structures known as space-filling curves in order to reduce the encryption overhead is presented. In addition, the approach also enables high-quality lossy compression of images.
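
A minimal Python sketch of the general idea (not the paper's scheme): pixels are reordered along a Hilbert space-filling curve, a leading fraction of the scanned signal stands in for the salient component, and only that part is XORed with a toy keystream. The split fraction, the salience criterion and the cipher are all illustrative assumptions.

```python
import numpy as np

def hilbert_d2xy(order, d):
    """Map 1-D index d to (x, y) on a 2^order x 2^order Hilbert curve."""
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def encrypt_salient(img, key, salient_frac=0.1):
    """img: square uint8 image with side a power of two (assumed)."""
    n = img.shape[0]
    order = int(np.log2(n))
    idx = np.array([hilbert_d2xy(order, d) for d in range(n * n)])
    scan = img[idx[:, 0], idx[:, 1]].copy()     # Hilbert-ordered pixels
    k = int(salient_frac * scan.size)
    rng = np.random.default_rng(key)            # toy keystream from the key
    scan[:k] ^= rng.integers(0, 256, k, dtype=scan.dtype)
    out = np.empty_like(img)
    out[idx[:, 0], idx[:, 1]] = scan            # scatter back to the grid
    return out
```

Decryption regenerates the same keystream from the key and XORs the same leading segment again.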

Relevance: 20.00%

Abstract:

Preferential accumulation and agglomeration kinetics of nanoparticles suspended in an acoustically levitated water droplet under radiative heating have been studied. Particle image velocimetry performed to map the internal flow field shows a single-cell recirculation whose strength increases as viscosity decreases. Infrared thermography and high-speed imaging show details of the heating process for various concentrations of nanosilica droplets. The initial stage of heating is marked by fast vaporization of liquid and a sharp temperature rise. Following this stage, aggregation of nanoparticles is seen, resulting in various structure formations. At low concentrations, a bowl structure of the droplet is dominant, maintained at a constant temperature. At high concentrations, the viscosity of the solution increases, leading to rotation about the levitator axis due to the dominance of centrifugal motion. Such complex fluid motion inside the droplet due to acoustic streaming eventually results in the formation of a ring structure. This horizontal ring eventually reorients itself due to an imbalance of acoustic forces on the ring, exposing a larger area to laser absorption and causing a subsequent sharp temperature rise.

Relevance: 20.00%

Abstract:

Existing approaches to digital halftoning of images are based primarily on thresholding. We propose a general framework for image halftoning where some function of the output halftone tracks another function of the input gray-tone. This approach is shown to unify most existing algorithms and to provide useful insights. Further, the new interpretation allows us to remedy problems in existing algorithms such as error diffusion, and subsequently to achieve halftones having superior quality. The very general nature of the proposed method is an advantage, since it offers a wide choice of the three filters and an update rule. An interesting product of this framework is that equally good, or better, halftones can be obtained by thresholding a noise process instead of the image itself.
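
For concreteness, a minimal Python sketch of Floyd-Steinberg error diffusion, the best-known member of the family this framework generalizes: the quantization error is diffused to unvisited neighbours so that a local average of the output halftone tracks a local average of the input gray-tone.

```python
import numpy as np

def floyd_steinberg(gray):
    """gray: 2-D float array in [0, 1]. Returns a binary halftone."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]          # quantization error
            # Diffuse the error with the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```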

Relevance: 20.00%

Abstract:

Diffuse optical tomography (DOT) is one of the ways to probe highly scattering media such as tissue, using low-energy near-infrared (NIR) light to reconstruct a map of the optical property distribution. The interaction of photons with biological tissue is a non-linear process, and photon transport through the tissue is modelled using diffusion theory. The inverse problem is often solved through iterative methods based on nonlinear optimization for the minimization of a data-model misfit function. The solution of the non-linear problem can be improved by modeling and optimizing the cost functional. The cost functional is f(x) = x^T A x - b^T x + c, and after minimization it reduces to the system Ax = b. The spatial distribution of the optical parameters can be obtained by solving this equation iteratively for x. As the problem is non-linear, ill-posed and ill-conditioned, there is an error or correction term for x at each iteration. A linearization strategy is proposed for the solution of the nonlinear ill-posed inverse problem by a linear combination of the system matrix and the error in the solution. By propagating the error information e (obtained from the previous iteration) into the minimization function f(x), we can rewrite it as f(x; e) = (x + e)^T A (x + e) - b^T (x + e) + c. The revised cost functional is f(x; e) = f(x) + e^T A e. The self-guided, spatially weighted prior term e^T A e (with e the error in estimating x) along the principal nodes facilitates a well-resolved dominant solution over the region of interest. The local minimization reduces the spreading of the inclusion and removes side lobes, thereby improving the contrast, localization and resolution of the reconstructed image, which has not been possible with conventional linear and regularization algorithms.
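
The step from the augmented functional to the compact form follows by expanding the quadratic; a short derivation (assuming A is symmetric and taking x at the stationary point of f, where the gradient 2Ax - b vanishes, so the linear cross term drops out):

```latex
\begin{aligned}
f(x;e) &= (x+e)^{T} A (x+e) - b^{T}(x+e) + c \\
       &= \bigl(x^{T}Ax - b^{T}x + c\bigr) + e^{T}Ae + (2Ax - b)^{T}e \\
       &= f(x) + e^{T}Ae \qquad \text{since } 2Ax - b = 0 \text{ at the stationary point.}
\end{aligned}
```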

Relevance: 20.00%

Abstract:

Real-time image reconstruction is essential for improving the temporal resolution of fluorescence microscopy. A number of unavoidable processes, such as optical aberration, noise and scattering, degrade image quality, thereby making image reconstruction an ill-posed problem. Maximum likelihood is an attractive technique for data reconstruction, especially when the problem is ill-posed, but the iterative nature of the maximum likelihood technique precludes real-time imaging. Here we propose and demonstrate a compute unified device architecture (CUDA) based fast computing engine for real-time 3D fluorescence imaging. A maximum performance boost of 210x is reported. The ready availability of powerful computing engines is a boon and may accelerate the realization of real-time 3D fluorescence imaging. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. http://dx.doi.org/10.1063/1.4754604
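
A minimal sketch of the maximum-likelihood iteration at the heart of such engines (illustrative, not the authors' CUDA code): the Richardson-Lucy update, the standard ML deconvolution for Poisson-noise fluorescence data, written with NumPy FFTs. The same array code runs on a GPU by substituting a CUDA-backed array library such as CuPy, which is the kind of speed-up the paper reports.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """observed: blurred, noisy image; psf: same-shape PSF with sum 1 (assumed)."""
    observed = observed.astype(float)
    otf = np.fft.rfftn(np.fft.ifftshift(psf))       # PSF centred at the origin
    conv = lambda x: np.fft.irfftn(np.fft.rfftn(x) * otf, s=x.shape)
    corr = lambda x: np.fft.irfftn(np.fft.rfftn(x) * np.conj(otf), s=x.shape)
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        ratio = observed / (conv(estimate) + eps)   # data / current model
        estimate *= corr(ratio)                     # multiplicative ML update
    return estimate
```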

Relevance: 20.00%

Abstract:

A novel approach that can more effectively use the structural information provided by traditional imaging modalities in multimodal diffuse optical tomographic imaging is introduced. This approach is based on a prior-image-constrained ℓ1-minimization scheme and has been motivated by recent progress in sparse image reconstruction techniques. It is shown that the proposed framework is more effective in terms of localizing the tumor region and recovering the optical property values, in both numerical and gelatin phantom cases, compared to traditional methods that use structural information. © 2012 Optical Society of America
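
A minimal Python sketch of one generic way to realize a prior-image-constrained ℓ1 scheme (a stand-in, not the paper's exact formulation): proximal-gradient (ISTA-style) minimization of (1/2)||Jx - y||² + λ||x - x_prior||₁, where J is an assumed sensitivity matrix, y the measurements, and x_prior the structural prior image.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prior_constrained_l1(J, y, x_prior, lam=1e-2, n_iter=200):
    step = 1.0 / np.linalg.norm(J, 2) ** 2   # 1 / Lipschitz constant of the data fit
    x = x_prior.copy()
    for _ in range(n_iter):
        grad = J.T @ (J @ x - y)             # gradient of (1/2)||Jx - y||^2
        z = x - step * grad
        # Proximal step: shrink the deviation from the prior image, so the
        # solution stays at the prior except where the data insist otherwise.
        x = x_prior + soft_threshold(z - x_prior, step * lam)
    return x
```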

Relevance: 20.00%

Abstract:

Traditional image reconstruction methods in rapid dynamic diffuse optical tomography employ ℓ2-norm-based regularization, which is known to remove the high-frequency components in the reconstructed images and make them appear smooth. The contrast recovery in these types of methods typically depends on the iterative nature of the method employed, where nonlinear iterative techniques are known to perform better than linear (noniterative) ones, with the caveat that nonlinear techniques are computationally complex. Assuming a linear dependency of the solution between successive frames results in a linear inverse problem. This new framework, combined with ℓ1-norm-based regularization, can provide better robustness to noise and better contrast recovery than conventional ℓ2-based techniques. Moreover, the proposed ℓ1-based technique is shown to be computationally efficient compared to its ℓ2-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame; any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
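
A minimal sketch of the frame-to-frame use of such a scheme (illustrative, not the paper's method; it reuses the `prior_constrained_l1` helper from the sketch above, with the previous frame's solution playing the role of the prior so that each frame is recovered as a sparse change):

```python
def reconstruct_sequence(J, frames, x_init, lam=1e-2):
    """frames: list of measurement vectors y_k; x_init: good initial-frame estimate."""
    solutions, x = [], x_init
    for y in frames:
        # Sparse frame-to-frame change around the previous solution.
        x = prior_constrained_l1(J, y, x_prior=x, lam=lam)
        solutions.append(x)
    return solutions
```

This makes explicit why the caveat in the abstract matters: a poor `x_init` is propagated as the prior for every subsequent frame.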