30 results for Multi-resolution segmentation

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Publisher:

Abstract:

Bangalore is experiencing unprecedented urbanisation in recent times due to concentrated developmental activities with an impetus on the IT (Information Technology) and BT (Biotechnology) sectors. These concentrated developmental activities have resulted in an increase in population and consequent pressure on infrastructure and natural resources, ultimately giving rise to a plethora of serious challenges such as urban flooding and climate change. One of the perceived impacts at the local level is the increase in sensible heat flux from the land surface to the atmosphere, also referred to as the heat island effect. In this communication, we report the changes in land surface temperature (LST) with respect to land cover changes from 1973 to 2007. A novel technique combining information from sub-pixel class proportions with information from a classified image (using signatures of the respective classes collected from the ground) has been used to achieve a more reliable classification. The analysis showed a positive correlation between the increase in paved surfaces and LST. A 466% increase in paved surfaces (buildings, roads, etc.) has led to an increase in LST of about 2 ºC over the last two decades, confirming the urban heat island phenomenon. LSTs were relatively lower (by ~4 to 7 ºC) over land uses such as vegetation (parks/forests) and water bodies, which act as heat sinks.
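
A minimal sketch of the kind of pixel-level correlation analysis reported above, assuming a co-registered paved-surface fraction map and an LST map are already available as NumPy arrays (the array names and synthetic data are illustrative, not the study's inputs):

```python
# Sketch: correlate paved-surface fraction with land surface temperature (LST).
# Assumes two co-registered rasters loaded as 2-D NumPy arrays; names are illustrative.
import numpy as np

def lst_vs_paved_correlation(paved_fraction, lst, valid_mask=None):
    """Return the Pearson correlation between paved-surface fraction and LST."""
    if valid_mask is None:
        valid_mask = np.isfinite(paved_fraction) & np.isfinite(lst)
    x = paved_fraction[valid_mask].ravel()
    y = lst[valid_mask].ravel()
    # np.corrcoef returns the 2x2 correlation matrix; take the off-diagonal term.
    return np.corrcoef(x, y)[0, 1]

# Synthetic data standing in for the classified paved-surface and LST layers.
rng = np.random.default_rng(0)
paved = rng.uniform(0, 1, (100, 100))
lst = 28.0 + 2.0 * paved + rng.normal(0, 0.5, (100, 100))   # warmer where more paved
print(f"Pearson r = {lst_vs_paved_correlation(paved, lst):.2f}")
```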

Relevance:

100.00%

Publisher:

Abstract:

Urban population is growing at around 2.3 per cent per annum in India. This is driving urbanisation and often fuelling dispersed development on the outskirts of urban and village centres, with impacts such as the loss of agricultural land, open space, and ecologically sensitive habitats. This type of growth is prevalent and persistent in most places and is often referred to as sprawl. The direct implications of such urban sprawl are changes in the land use and land cover of the region and a lack of basic amenities, since planners are unable to visualise these growth patterns. Such growth is normally left out of government surveys (even the national population census), as it cannot be grouped under either an urban or a rural centre. Investigating these growth patterns is crucial from a regional planning point of view in order to provide basic amenities in the region. The growth patterns of urban sprawl can be analysed and understood with the availability of temporal multi-sensor, multi-resolution spatial data. To make the best use of both spectral and spatial resolutions, image fusion techniques are required. Fusion integrates a lower spatial resolution multispectral (MSS) image (for example, IKONOS MSS bands of 4 m spatial resolution) with a higher spatial resolution panchromatic (PAN) image (the IKONOS PAN band of 1 m spatial resolution) based on a simple spectral preservation fusion technique, the Smoothing Filter-based Intensity Modulation (SFIM). Spatial detail is modulated onto a co-registered lower resolution MSS image without altering its spectral properties and contrast, by using the ratio between a higher resolution image and its low-pass filtered (smoothing filter) version. Visual evaluation and statistical analysis confirm that SFIM is a superior fusion technique for improving the spatial detail of MSS images while preserving their spectral properties.
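
The SFIM fusion described above has a simple closed form: the upsampled MSS band is multiplied by the ratio of the PAN image to its low-pass-filtered version. A minimal NumPy/SciPy sketch follows; the smoothing-window size and resampling choices are illustrative rather than the paper's exact settings:

```python
# Sketch of Smoothing Filter-based Intensity Modulation (SFIM) pan-sharpening:
# fused = MSS_upsampled * PAN / lowpass(PAN). The window size is an assumption.
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def sfim_fuse(mss_band, pan, ratio=4, window=7, eps=1e-6):
    """Fuse one low-resolution MSS band with a high-resolution PAN image.

    mss_band : 2-D array at low resolution (e.g. an IKONOS 4 m band)
    pan      : 2-D array at high resolution (e.g. the IKONOS 1 m band)
    ratio    : spatial resolution ratio between PAN and MSS (4 for IKONOS)
    window   : size of the smoothing (mean) filter applied to PAN
    """
    # Upsample the MSS band to the PAN grid (bilinear interpolation).
    mss_up = zoom(mss_band, ratio, order=1)
    mss_up = mss_up[:pan.shape[0], :pan.shape[1]]
    # Low-pass PAN so its local mean matches the MSS resolution.
    pan_smooth = uniform_filter(pan.astype(float), size=window)
    # Modulate spatial detail into the MSS band without changing its spectrum.
    return mss_up * pan / (pan_smooth + eps)
```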

Relevance:

90.00%

Publisher:

Abstract:

The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6″ angular resolution and 72 μJy beam⁻¹ rms noise. The images (centered at R.A. 00h35m00s, decl. −67°00′00″ and R.A. 00h59m17s, decl. −67°00′00″, J2000 epoch) cover 8.42 deg² of sky and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with a beam FWHM of 50″. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images to identify blended sources. Below 1 mJy the ATLBS counts are systematically lower than previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may depend on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source-finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists, as opposed to component lists, and of correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.
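
A schematic of how differential source counts with multiplicative corrections might be assembled; the bin edges, correction factors, and source list below are placeholders, not the ATLBS values (which come from the survey's own simulations):

```python
# Sketch: differential radio source counts with multiplicative corrections
# for noise bias, effective area and resolution bias (correction values are placeholders).
import numpy as np

def differential_counts(flux_mjy, area_deg2, bin_edges_mjy, corrections=None):
    """Return dN/dS per deg^2 per mJy in each flux-density bin."""
    counts, _ = np.histogram(flux_mjy, bins=bin_edges_mjy)
    widths = np.diff(bin_edges_mjy)
    dnds = counts / (area_deg2 * widths)
    if corrections is not None:            # one multiplicative factor per bin
        dnds = dnds * np.asarray(corrections)
    return dnds

# Illustrative use over the 0.4-8.7 mJy range quoted above.
edges = np.logspace(np.log10(0.4), np.log10(8.7), 8)
fluxes = np.random.default_rng(1).uniform(0.4, 8.7, 500)   # stand-in source list
print(differential_counts(fluxes, area_deg2=8.42, bin_edges_mjy=edges))
```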

Relevance:

80.00%

Publisher:

Abstract:

Non-stationary signal modeling is a well-addressed problem in the literature. Many methods have been proposed to model non-stationary signals, such as time-varying linear prediction and AM-FM modeling, the latter being more popular. Estimation techniques to determine the AM-FM components of a narrow-band signal, such as the Hilbert transform, DESA1, DESA2, the auditory processing approach, and the zero-crossing (ZC) approach, are prevalent, but their robustness to noise is not clearly addressed in the literature. This is critical for most practical applications, such as communications. We explore the robustness of different AM-FM estimators in the presence of white Gaussian noise. We also propose three new methods for instantaneous frequency (IF) estimation based on non-uniform samples of the signal and multi-resolution analysis. Experimental results show that ZC-based methods give better results than popular methods such as DESA in both clean and noisy conditions.
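
One of the standard estimators compared above, the analytic-signal (Hilbert transform) approach, can be sketched as follows; the sampling rate, noise level, and test chirp are illustrative choices, not the paper's experimental setup:

```python
# Sketch: AM and instantaneous-frequency (IF) estimation via the Hilbert transform,
# evaluated on a noisy AM-FM test signal (all parameters are illustrative).
import numpy as np
from scipy.signal import hilbert

fs = 8000.0
t = np.arange(0, 0.5, 1 / fs)
am = 1.0 + 0.3 * np.cos(2 * np.pi * 5 * t)                  # slow amplitude modulation
phase = 2 * np.pi * (500 * t + 100 * t ** 2)                # linear chirp, 500 -> 600 Hz
clean = am * np.cos(phase)
noisy = clean + 0.05 * np.random.default_rng(0).standard_normal(t.size)  # white Gaussian noise

analytic = hilbert(noisy)
am_est = np.abs(analytic)                                    # amplitude envelope estimate
if_est = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)  # IF estimate in Hz

print("mean estimated IF (Hz):", if_est.mean())              # ~550 Hz for this chirp
```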

Relevance:

80.00%

Publisher:

Abstract:

Ductility-based design of reinforced concrete structures implicitly assumes certain damage under the action of a design basis earthquake. The damage undergone by a structure needs to be quantified so as to assess the post-seismic reparability and functionality of the structure. This paper presents an analytical method for the quantification and location of seismic damage through system identification methods. It may be noted that buildings with a soft ground storey are the major casualties in any earthquake; hence the example structure is one with a soft or weak first storey, whose seismic response and temporal variation of damage are computed using a non-linear dynamic analysis program (IDARC) and compared with those of a normal structure. A time-period-based damage identification model is used and suitably calibrated against classic damage models. The regenerated stiffness of the three-degree-of-freedom model (for the three-storeyed frame) is used to locate the damage, both on-line and after the seismic event. Multi-resolution analysis using wavelets is also used for localized damage identification of the soft-storey columns.
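
A minimal sketch of the wavelet-based localization idea: an abrupt change in a storey response (standing in for damage) shows up as a spike in the fine-scale wavelet detail coefficients. The simulated signal, wavelet family, and decomposition level are illustrative; this is not the IDARC model or the paper's calibrated damage index:

```python
# Sketch: locating an abrupt frequency change in a storey response using
# wavelet detail coefficients (PyWavelets); signal and wavelet choice are illustrative.
import numpy as np
import pywt

fs = 100.0
t = np.arange(0, 10, 1 / fs)
# Simulated storey response whose frequency drops mid-record (stand-in for damage).
freq = np.where(t < 5.0, 2.0, 1.6)
response = np.sin(2 * np.pi * np.cumsum(freq) / fs)

# Single-level discrete wavelet decomposition: the detail coefficients spike
# around the instant where the signal's local behaviour changes.
_, detail = pywt.dwt(response, "db4")
detail = np.abs(detail)
detail[:8] = 0            # suppress boundary-extension artefacts
detail[-8:] = 0
change_idx = np.argmax(detail)
print(f"largest detail coefficient near t = {change_idx * 2 / fs:.1f} s")
```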

Relevance:

80.00%

Publisher:

Abstract:

Effective conservation and management of natural resources requires up-to-date information on land cover (LC) types and their dynamics. LC dynamics are being captured using multi-resolution remote sensing (RS) data with appropriate classification strategies. RS data combined with important environmental layers (either remotely acquired or derived from ground measurements) would, however, be more effective in addressing LC dynamics and the associated changes. These ancillary layers provide additional information for delineating the decision boundaries of LC classes compared with conventional classification techniques. This communication ascertains the possibility of improved classification accuracy of RS data with ancillary and derived geographical layers such as vegetation index, temperature, digital elevation model (DEM), aspect, slope, and texture. This has been implemented in three terrains of varying topography. The study would help in the selection of appropriate ancillary data, depending on the terrain, for better classification.
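
A schematic of stacking ancillary layers with spectral bands into a per-pixel feature matrix before supervised classification. The classifier (Random Forest), the layer names, and the synthetic data are illustrative choices, not necessarily those used in the study:

```python
# Sketch: stack spectral bands with ancillary layers (NDVI, DEM, slope, aspect, texture)
# into a per-pixel feature matrix and run a supervised classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_feature_stack(layers):
    """layers: list of co-registered 2-D arrays -> (n_pixels, n_features) matrix."""
    return np.stack([l.ravel() for l in layers], axis=1)

# Synthetic stand-ins for co-registered layers (rows x cols).
rng = np.random.default_rng(0)
shape = (60, 60)
red, nir = rng.random(shape), rng.random(shape)
ndvi = (nir - red) / (nir + red + 1e-6)
dem, slope, aspect, texture = (rng.random(shape) for _ in range(4))

X = build_feature_stack([red, nir, ndvi, dem, slope, aspect, texture])
train_idx = rng.choice(X.shape[0], 500, replace=False)        # labelled training pixels
y_train = rng.integers(0, 4, train_idx.size)                  # 4 hypothetical LC classes

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[train_idx], y_train)
lc_map = clf.predict(X).reshape(shape)                        # classified LC map
```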

Relevance:

40.00%

Publisher:

Abstract:

This paper discusses an approach for river mapping and flood evaluation based on multi-temporal time series analysis of satellite images, utilizing pixel spectral information for image classification and region-based segmentation for extracting water-covered regions. Analysis of MODIS satellite images is applied in three stages: before the flood, during the flood, and after the flood. Water regions are extracted from the MODIS images using image classification (based on spectral information) and image segmentation (based on spatial information). Multi-temporal MODIS images from "normal" (non-flood) and flood time periods are processed in two steps. In the first step, image classifiers such as Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs) separate the image pixels into water and non-water groups based on their spectral features. The classified image is then segmented using spatial features of the water pixels to remove misclassified water. From the results obtained, we evaluate the performance of the method and conclude that the use of image classification (SVM and ANN) together with region-based image segmentation is an accurate and reliable approach for the extraction of water-covered regions. (c) 2012 COSPAR. Published by Elsevier Ltd. All rights reserved.
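
A minimal sketch of the two-step flow described above: SVM pixel classification followed by a region-based cleanup of the resulting water mask. The synthetic reflectance bands, the toy labelling rule, and the minimum region size are illustrative assumptions:

```python
# Sketch: step 1 - an SVM separates water / non-water pixels from spectral features;
# step 2 - a region-based cleanup removes small, likely misclassified water patches.
import numpy as np
from sklearn.svm import SVC
from scipy.ndimage import label

rng = np.random.default_rng(0)
shape = (80, 80)
bands = rng.random(shape + (3,))                     # stand-in for MODIS reflectance bands
X = bands.reshape(-1, 3)

train_idx = rng.choice(X.shape[0], 300, replace=False)
y_train = (X[train_idx, 0] < 0.3).astype(int)        # toy labelling rule for "water"

water_mask = SVC(kernel="rbf").fit(X[train_idx], y_train).predict(X).reshape(shape)

# Region-based step: keep only connected water regions larger than a minimum size.
labeled, n_regions = label(water_mask)
sizes = np.bincount(labeled.ravel())
keep = sizes >= 25                                    # minimum region size (pixels), assumed
keep[0] = False                                       # label 0 is background
cleaned_water_mask = keep[labeled]
```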

Relevance:

40.00%

Publisher:

Abstract:

Super-resolution microscopy has tremendously advanced our understanding of cellular biophysics and biochemistry. Specifically, the 4pi fluorescence microscopy technique stands out because of its axial super-resolution capability. All types of 4pi microscopy work well in conjunction with deconvolution techniques to remove artifacts due to side-lobes. In this regard, we propose a technique based on a spatial filter in a 4pi type-C confocal setup to remove these artifacts. Using a special spatial filter, we have reduced the depth of focus. Interference of two similar depth-of-focus beams in a 4pi geometry results in a substantial reduction of the side-lobes. Studies show a reduction of side-lobes by 46% and 76% for the single- and two-photon variants, compared with the 4pi type-C confocal system. This is remarkable considering the resolving capability of the existing 4pi type-C confocal microscopy. Moreover, the main lobe is found to be 150 nm for the proposed spatial filtering technique, as compared to 690 nm for the state-of-the-art confocal system. Reconstruction of experimentally obtained 2PE-4pi data of a green fluorescent protein (GFP)-tagged mitochondrial network shows near elimination of artifacts arising from side-lobes. The proposed technique may find interesting applications in fluorescence microscopy, nano-lithography, and cell biology. (C) 2013 AIP Publishing LLC.

Relevance:

40.00%

Publisher:

Abstract:

This paper discusses an approach for river mapping and flood evaluation to aid multi-temporal time series analysis of satellite images, utilizing pixel spectral information for image classification and region-based segmentation to extract water-covered regions. Analysis of Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images is applied in two stages: before the flood and during the flood. For these images, the extraction of water regions utilizes spectral information for image classification and spatial information for image segmentation. Multi-temporal MODIS images from "normal" (non-flood) and flood time periods are processed in two steps. In the first step, image classifiers such as artificial neural networks and gene expression programming are used to separate the image pixels into water and non-water groups based on their spectral features. The classified image is then segmented using spatial features of the water pixels to remove misclassified water regions. From the results obtained, we evaluate the performance of the method and conclude that the use of image classification and region-based segmentation is an accurate and reliable approach for the extraction of water-covered regions.

Relevance:

40.00%

Publisher:

Abstract:

Up to now, high-resolution mapping of surface water extent from satellites has only been available for a few regions, over limited time periods. Extending the temporal and spatial coverage has been difficult due to the limitations of the remote sensing techniques (e.g., the interaction of the radiation with vegetation or clouds for visible observations, or the temporal sampling of synthetic aperture radar (SAR)). The advantages and limitations of the various satellite techniques are reviewed. The need for a global and consistent estimate of water surfaces over long time periods triggered the development of a multi-satellite methodology to obtain consistent surface water estimates over the whole globe, regardless of the environment. The Global Inundation Extent from Multi-Satellites (GIEMS) combines the complementary strengths of satellite observations from the visible to the microwave to produce a low-resolution monthly dataset of surface water extent and dynamics. Downscaling algorithms are now developed and applied to GIEMS, using high-spatial-resolution information from visible, near-infrared, and synthetic aperture radar (SAR) satellite images, or from digital elevation models. Preliminary products are available down to 500 m spatial resolution. This work bridges the gaps and prepares for the future NASA/CNES Surface Water and Ocean Topography (SWOT) mission to be launched in 2020. SWOT will deliver surface water extent estimates and their water storage with unprecedented spatial resolution and accuracy, thanks to a SAR operating in interferometric mode. When available, the SWOT data will be used to downscale GIEMS, to produce a long time series of water surfaces at the global scale, consistent with the SWOT observations.
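
A toy illustration of the downscaling idea, not the GIEMS algorithm itself: inside each coarse grid cell, the coarse inundated fraction is distributed to the fine pixels with the highest value of an auxiliary high-resolution index (for instance a topographic wetness index). All names and the block size are assumptions for illustration:

```python
# Toy sketch of downscaling a coarse inundation fraction with a high-resolution index.
# This illustrates the principle only; it is not the GIEMS downscaling algorithm.
import numpy as np

def downscale_fraction(coarse_fraction, fine_index, block=10):
    """coarse_fraction: (H, W) inundated fractions; fine_index: (H*block, W*block)."""
    fine_mask = np.zeros_like(fine_index, dtype=bool)
    H, W = coarse_fraction.shape
    for i in range(H):
        for j in range(W):
            idx = fine_index[i*block:(i+1)*block, j*block:(j+1)*block]
            n_wet = int(round(coarse_fraction[i, j] * idx.size))
            if n_wet == 0:
                continue
            # Flag the n_wet fine pixels with the highest auxiliary index values.
            thresh = np.sort(idx.ravel())[-n_wet]
            fine_mask[i*block:(i+1)*block, j*block:(j+1)*block] = idx >= thresh
    return fine_mask
```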

Relevance:

30.00%

Publisher:

Abstract:

Background: The number of available structures of large multi-protein assemblies is quite small. Such structures provide phenomenal insights into the organization, mechanism of formation, and functional properties of the assembly; hence, detailed analysis of such structures is highly rewarding. However, the common problem in such analyses is the low resolution of these structures. In recent times, a number of attempts that combine low-resolution cryo-EM data with higher resolution structures determined using X-ray analysis or NMR, or generated using comparative modeling, have been reported. Even in such attempts, the best result one arrives at is a very coarse idea of the assembly structure in terms of a trace of the Cα atoms, which are modeled with modest accuracy. Methodology/Principal Findings: In this paper we first present an objective approach to identify potentially solvent-exposed and buried residues solely from the positions of Cα atoms and the amino acid sequence, using residue-type-dependent thresholds for the accessible surface areas of Cα. We extend the method further to recognize potential protein-protein interface residues. Conclusion/Significance: Our approach to identifying buried and exposed residues solely from the positions of Cα atoms resulted in an accuracy of 84%, sensitivity of 83-89%, and specificity of 67-94%, while recognition of interfacial residues corresponded to an accuracy of 94%, sensitivity of 70-96%, and specificity of 58-94%. Interestingly, detailed analysis of cases of mismatch between the recognition of interface residues from Cα positions and from all-atom models suggested that recognition of interfacial residues using Cα atoms only corresponds better with the intuitive notion of what an interfacial residue is. Our method should be useful in the objective analysis of structures of protein assemblies when only Cα positions are available, as, for example, in cases of integration of cryo-EM data with high-resolution structures of the components of the assembly.
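
The idea of assigning exposure from Cα positions alone can be illustrated with a crude proxy: count neighbouring Cα atoms within a cutoff and compare against a residue-type-dependent threshold. The cutoff and threshold values below are placeholders, not the calibrated Cα accessible-surface-area thresholds of the paper:

```python
# Crude illustration of exposed/buried assignment from C-alpha coordinates only.
# Cutoff and per-residue thresholds are hypothetical placeholders.
import numpy as np

BURIAL_THRESHOLD = {"GLY": 14, "ALA": 15, "LEU": 16, "TRP": 18}   # hypothetical values

def classify_exposure(ca_coords, residue_names, cutoff=10.0):
    """ca_coords: (N, 3) C-alpha coordinates; returns a list of 'buried'/'exposed' labels."""
    ca_coords = np.asarray(ca_coords, dtype=float)
    # Pairwise distances between all C-alpha atoms.
    d = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    neighbour_counts = (d < cutoff).sum(axis=1) - 1      # exclude the atom itself
    labels = []
    for count, name in zip(neighbour_counts, residue_names):
        threshold = BURIAL_THRESHOLD.get(name, 15)       # default for unlisted residues
        labels.append("buried" if count >= threshold else "exposed")
    return labels
```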

Relevance:

30.00%

Publisher:

Abstract:

Run-time interoperability between different applications based on H.264/AVC is an emerging need in networked infotainment, where media delivery must match the desired resolution and quality of the end terminals. In this paper, we describe the architecture and design of a polymorphic ASIC to support this. The H.264 decoding flow is partitioned into modules such that the polymorphic ASIC meets the design goals of low power, low area, high flexibility, high throughput, and fast interoperability between different profiles and levels of H.264. We demonstrate the idea with a multi-mode decoder that can decode baseline, main, and high profile H.264 streams and can interoperate at run-time across these profiles. The decoder is capable of processing frame sizes of up to 1024 × 768 at 30 fps. The design, synthesized with UMC 0.13 μm technology, occupies 250 k gates and runs at 100 MHz.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an approach for automatic road extraction in an urban region using the structural, spectral, and geometric characteristics of roads. Roads are extracted in two stages: pre-processing and road extraction. Initially, the image is pre-processed to improve tolerance by reducing clutter (which mostly represents buildings, parking lots, vegetation regions, and other open spaces). The road segments are then extracted using Texture Progressive Analysis (TPA) and the Normalized cut algorithm. The TPA technique uses binary segmentation based on three levels of texture statistical evaluation to extract road segments, whereas the Normalized cut method for road extraction is a graph-based method that generates an optimal partition of road segments. The performance (quality measures) of road extraction using TPA and the Normalized cut method is compared. The experimental results show that the Normalized cut method is efficient in extracting road segments in urban regions from high-resolution satellite images.
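
A minimal sketch of graph-based segmentation with the Normalized cut criterion using scikit-image (in recent releases the region-adjacency-graph utilities live in skimage.graph; older versions expose them as skimage.future.graph). The built-in test image stands in for a high-resolution satellite scene, and the superpixel parameters are illustrative:

```python
# Sketch: over-segment an image into superpixels, build a region adjacency graph
# weighted by colour similarity, then partition it with the Normalized cut criterion.
import numpy as np
from skimage import data, segmentation, graph   # skimage.future.graph in older versions

image = data.astronaut()                         # placeholder for a satellite image

superpixels = segmentation.slic(image, n_segments=400, compactness=30, start_label=1)
rag = graph.rag_mean_color(image, superpixels, mode="similarity")
ncut_labels = graph.cut_normalized(superpixels, rag)

print("regions after Normalized cut:", np.unique(ncut_labels).size)
```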

Relevance:

30.00%

Publisher:

Abstract:

The function of a protein in a cell often involves coordinated interactions with one or several regulatory partners. It is thus imperative to characterize a protein both in isolation and in the context of its complex with an interacting partner. High-resolution structural information determined by X-ray crystallography and Nuclear Magnetic Resonance offers the best route to characterizing protein complexes. These techniques, however, require highly purified and homogeneous protein samples at high concentration. This requirement often presents a major hurdle for structural studies. Here we present a strategy based on co-expression and co-purification to obtain recombinant multi-protein complexes in the quantity and concentration range that can enable hitherto intractable structural projects. The feasibility of this strategy was examined using the sigma factor/anti-sigma factor protein complexes from Mycobacterium tuberculosis. The approach was successful across a wide range of sigma factors and their cognate interacting partners. It thus appears likely that the analysis of these complexes, based on variations in expression constructs and procedures for the purification and characterization of these recombinant protein samples, would be widely applicable to other multi-protein systems. (C) 2010 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

High-performance video standards use prediction techniques to achieve high picture quality at low bit rates. The type of prediction decides the bit rate and the image quality. Intra prediction achieves high video quality with a significant reduction in bit rate. This paper presents an area-optimized architecture for intra prediction for H.264 decoding at HDTV resolution, with a target of achieving 60 fps. The architecture was validated on a Virtex-5 FPGA based platform and achieves a frame rate of 64 fps. The architecture is based on a multi-level memory hierarchy to reduce latency and ensure optimum resource utilization. It removes redundancy by reusing the same functional blocks across different modes. The proposed architecture uses only 13% of the total LUTs available on the Xilinx FPGA XC5VLX50T.
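
For context on what the intra-prediction module computes, here is a sketch of two H.264 4x4 luma intra-prediction modes (vertical and DC) in plain Python. This illustrates the algorithm the hardware implements, not the FPGA architecture described above; the neighbour pixel values are made up:

```python
# Sketch: H.264 4x4 luma intra prediction, vertical (mode 0) and DC (mode 2).
import numpy as np

def intra4x4_vertical(top):
    """Mode 0: each column copies the reconstructed pixel directly above the block."""
    return np.tile(np.asarray(top, dtype=np.int32), (4, 1))

def intra4x4_dc(top, left):
    """Mode 2: every pixel takes the rounded mean of the top and left neighbours."""
    dc = (int(np.sum(top)) + int(np.sum(left)) + 4) >> 3
    return np.full((4, 4), dc, dtype=np.int32)

top = [100, 102, 104, 106]      # reconstructed pixels above the current 4x4 block
left = [98, 99, 101, 103]       # reconstructed pixels to its left
print(intra4x4_dc(top, left))   # predicted block, to which the decoded residual is added
```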