976 results for image set


Relevance: 20.00%

Abstract:

A new multi-sensor image registration technique is proposed based on detecting feature corner points using a modified Harris Corner Detector (HCD). These feature points are matched using multi-objective optimization (a distance condition and an angle criterion) based on Discrete Particle Swarm Optimization (DPSO). The optimization is efficient because it considers both the distance and angle criteria, incorporating multi-objective switching in the fitness function. It selects three corresponding corner points detected in the sensed and base images, from which an affine transformation is computed to align the sensed image with the base image. Further, the results show that the new approach offers a new dimension in solving multi-sensor image registration problems. From the obtained results, the performance of the registration is evaluated, and it is concluded that the proposed approach is efficient.
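
As a concrete illustration of the final alignment step only (not the authors' implementation), the sketch below estimates the affine transform from three matched corner pairs; all coordinates are hypothetical.

```python
import numpy as np

# Three matched corner points (hypothetical coordinates) in the
# base (reference) image and the sensed image.
base = np.array([[30.0, 40.0], [200.0, 55.0], [110.0, 180.0]])
sensed = np.array([[36.0, 33.0], [207.0, 61.0], [112.0, 188.0]])

def estimate_affine(src, dst):
    # An affine map [x', y'] = A @ [x, y] + t has 6 unknowns, so three
    # non-collinear correspondences determine it exactly.
    M = np.zeros((6, 6))
    b = np.zeros(6)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        M[2 * i]     = [x, y, 1, 0, 0, 0]
        M[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = u, v
    p = np.linalg.solve(M, b)
    return p[:3], p[3:]   # rows of [A | t]

# Map sensed-image coordinates into the base image frame.
row_x, row_y = estimate_affine(sensed, base)
print("x' = %.3f x + %.3f y + %.3f" % tuple(row_x))
print("y' = %.3f x + %.3f y + %.3f" % tuple(row_y))
```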

Relevance: 20.00%

Abstract:

The mode I fracture toughness of concrete can be determined experimentally using three-point bend beams in conjunction with digital image correlation (DIC). Beams of three geometrically similar sizes are cast for this study. To study the influence of fly ash and silica fume on the fracture toughness of self-compacting concrete (SCC), three SCC mixes are prepared with and without mineral additions. Scanning electron microscope (SEM) images are taken of the fractured surfaces to add information on the fracture process in SCC. From this study, it is concluded that the fracture toughness of SCC with mineral additions is higher than that of SCC without them.
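
For background, one standard way to convert the peak load of a single-edge-notched beam in three-point bending into a mode I toughness value is the ASTM E399-type geometry function for a span-to-depth ratio of 4; the sketch below uses illustrative dimensions, not the ones from this study.

```python
import math

def f_geom(alpha):
    # ASTM E399-type geometry function for a single-edge-notched beam
    # in three-point bending with span-to-depth ratio S/W = 4,
    # where alpha = a/W is the relative notch depth.
    num = 3 * math.sqrt(alpha) * (
        1.99 - alpha * (1 - alpha) * (2.15 - 3.93 * alpha + 2.7 * alpha**2))
    den = 2 * (1 + 2 * alpha) * (1 - alpha) ** 1.5
    return num / den

# Illustrative values (not from this study): peak load P in N; span S,
# breadth B, depth W and notch length a in mm.
P, S, B, W, a = 2500.0, 400.0, 100.0, 100.0, 33.0
K_IC = (P * S / (B * W ** 1.5)) * f_geom(a / W)   # in N / mm^(3/2)
print("K_IC = %.3f MPa*sqrt(m)" % (K_IC / math.sqrt(1000)))  # unit conversion
```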

Relevance: 20.00%

Abstract:

Medical image segmentation finds application in computer-aided diagnosis, computer-guided surgery, measuring tissue volumes, and locating tumors and pathologies. One approach to segmentation is to use active contours, or snakes. Active contours start from an initialization (often manually specified) and are guided by image-dependent forces to the object boundary. Snakes may also be guided by gradient vector fields associated with an image. The first main result in this direction is that of Xu and Prince, who proposed the notion of gradient vector flow (GVF), which is computed iteratively. We propose a new formalism to compute the vector flow based on bilateral filtering of the gradient field associated with the edge map; we refer to it as the bilateral vector flow (BVF). The range kernel we employ differs from the one in the standard Gaussian bilateral filter. The advantage of the BVF formalism is that smooth gradient vector flow fields with enhanced edge information can be computed noniteratively. The quality of image segmentation turned out to be on par with, and in some cases better than, that obtained using GVF.
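
A minimal sketch of the underlying idea, assuming standard Gaussian spatial and range kernels (the paper's range kernel differs): the gradient field of an edge map is smoothed noniteratively by a bilateral filter acting on the gradient vectors.

```python
import numpy as np

def bilateral_vector_flow(fx, fy, radius=3, sigma_s=2.0, sigma_r=0.2):
    """Noniteratively smooth a gradient field (fx, fy): spatial Gaussian
    weights times a Gaussian range kernel on vector differences."""
    H, W = fx.shape
    gx, gy = np.zeros_like(fx), np.zeros_like(fy)
    ax = np.arange(-radius, radius + 1)
    dy, dx = np.meshgrid(ax, ax, indexing="ij")
    w_spatial = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
    fxp = np.pad(fx, radius, mode="edge")
    fyp = np.pad(fy, radius, mode="edge")
    for i in range(H):
        for j in range(W):
            px = fxp[i:i + 2*radius + 1, j:j + 2*radius + 1]
            py = fyp[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Range weight from the difference of gradient vectors.
            d2 = (px - fx[i, j])**2 + (py - fy[i, j])**2
            w = w_spatial * np.exp(-d2 / (2 * sigma_r**2))
            w /= w.sum()
            gx[i, j], gy[i, j] = (w * px).sum(), (w * py).sum()
    return gx, gy

# Toy edge map: a bright disc on a dark background, plus noise.
yy, xx = np.mgrid[0:64, 0:64]
edge_map = ((xx - 32)**2 + (yy - 32)**2 < 15**2).astype(float)
fy, fx = np.gradient(edge_map + 0.05 * np.random.randn(64, 64))
gx, gy = bilateral_vector_flow(fx, fy)
```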

Relevance: 20.00%

Abstract:

In this paper, we describe a method for feature extraction and classification of characters manually isolated from scene or natural images. Characters in a scene image may be affected by low resolution, uneven illumination, or occlusion. We propose a novel method to binarize grayscale images by minimizing an energy functional. The Discrete Cosine Transform and the Angular Radial Transform are used to extract features from the characters after normalization for scale and translation. We have evaluated our method on the complete test set of the Chars74k dataset for the English and Kannada scripts, consisting of handwritten and synthesized characters as well as characters extracted from camera-captured images. We use only the synthesized and handwritten characters from this dataset as the training set. Nearest-neighbor classification is used in our experiments.
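
A minimal sketch of one part of this pipeline, low-frequency DCT features with a 1-nearest-neighbor classifier, on hypothetical data; the energy-based binarization, scale/translation normalization, and Angular Radial Transform features are omitted.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(img, k=8):
    """Top-left k x k block of the 2-D DCT of a (normalized) character
    image, flattened into a feature vector."""
    c = dctn(img, norm="ortho")
    return c[:k, :k].ravel()

def nn_classify(feat, train_feats, train_labels):
    # Plain 1-nearest-neighbor on Euclidean distance.
    d = np.linalg.norm(train_feats - feat, axis=1)
    return train_labels[int(np.argmin(d))]

# Hypothetical data standing in for 32x32 binarized character images.
rng = np.random.default_rng(0)
train_imgs = rng.random((100, 32, 32))
train_labels = np.array([chr(ord('A') + i % 26) for i in range(100)])
train_feats = np.stack([dct_features(im) for im in train_imgs])
print(nn_classify(dct_features(train_imgs[3]), train_feats, train_labels))
```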

Relevance: 20.00%

Abstract:

We have benchmarked the maximum obtainable recognition accuracy on five publicly available standard word-image data sets using semi-automated segmentation and a commercial OCR. These images have been cropped from camera-captured scene images, born-digital images (BDI), and street-view images. Using a MATLAB-based tool developed by us, we have annotated, at the pixel level, more than 3600 word images from the five data sets. The word images binarized by the tool, as well as by our own midline analysis and propagation of segmentation (MAPS) algorithm, are recognized using the trial version of the Nuance Omnipage OCR, and these two results are compared with the best reported in the literature. The benchmark word recognition rates obtained on the ICDAR 2003, Sign evaluation, Street view, Born-digital and ICDAR 2011 data sets are 83.9%, 89.3%, 79.6%, 88.5% and 86.7%, respectively. The rates obtained from MAPS-binarized word images without the use of any lexicon are 64.5% and 71.7% for ICDAR 2003 and 2011, respectively; these exceed the best values reported in the literature, 61.1% and 41.2%, respectively. The MAPS result of 82.8% on the BDI 2011 data set matches the performance of the state-of-the-art method based on the power-law transform.

Relevance: 20.00%

Abstract:

A new technique is proposed for multisensor image registration by matching features using discrete particle swarm optimization (DPSO). Feature points are first extracted from the reference and sensed images using an improved Harris corner detector from the literature. From the extracted corner points, DPSO finds three corresponding points in the sensed and reference images by multiobjective optimization of distance and angle conditions through an objective-switching technique. In this way, the globally best-matched points are obtained and used to estimate the affine transformation for the sensed image. The performance of the registration is evaluated, and it is concluded that the proposed approach is efficient.
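
The abstract does not spell out the fitness function; purely to illustrate what objective switching between a distance condition and an angle criterion can look like, here is a hypothetical fitness for one candidate triple of correspondences (the threshold is invented).

```python
import numpy as np

def triangle_sides_angles(pts):
    # Side lengths and interior angles of the triangle through 3 points.
    a = np.linalg.norm(pts[1] - pts[2])
    b = np.linalg.norm(pts[0] - pts[2])
    c = np.linalg.norm(pts[0] - pts[1])
    angles = np.arccos(np.clip([
        (b**2 + c**2 - a**2) / (2 * b * c),
        (a**2 + c**2 - b**2) / (2 * a * c),
        (a**2 + b**2 - c**2) / (2 * a * b)], -1, 1))
    return np.array([a, b, c]), angles

def switching_fitness(ref_triple, sensed_triple, tol=2.0):
    """Objective switching (illustrative): minimize side-length mismatch
    first; once it falls below `tol` pixels, switch to minimizing the
    mismatch of interior angles."""
    s_ref, a_ref = triangle_sides_angles(ref_triple)
    s_sen, a_sen = triangle_sides_angles(sensed_triple)
    dist_err = np.abs(s_ref - s_sen).sum()
    if dist_err > tol:
        return dist_err
    return np.abs(a_ref - a_sen).sum()

ref = np.array([[10.0, 10.0], [60.0, 15.0], [30.0, 50.0]])
sen = ref + np.array([2.0, -1.0])          # translated copy of the triple
print(switching_fitness(ref, sen))          # 0.0: a perfect match
```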

Relevance: 20.00%

Abstract:

Floods are among the most detrimental hydro-meteorological threats to mankind, which calls for efficient flood assessment models. In this paper, we propose remote-sensing-based flood assessment using Synthetic Aperture Radar (SAR) images because of their imperviousness to unfavourable weather conditions. SAR images, however, suffer from speckle noise, so they are processed in two stages: speckle-removal filtering, followed by image segmentation for flood mapping. The speckle noise is reduced with Lee, Frost and Gamma MAP filters, and a performance comparison of these filters is presented. From the results obtained, we deduce that the Gamma MAP filter is the most reliable. The Gamma MAP filtered image is then segmented using the Gray Level Co-occurrence Matrix (GLCM) and Mean Shift Segmentation (MSS). GLCM is a texture-analysis method that separates image pixels into water and non-water groups based on their spectral features, whereas MSS is a gradient-ascent method in which segmentation is carried out using both spectral and spatial information. The Kosi river flood is considered as a test case in our study. The segmentation results of both methods are comprehensively analysed, and it is concluded that MSS is the more effective of the two for flood mapping.
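
As a sketch of the first stage, here is one common formulation of the Lee filter (local statistics in a sliding window) applied to a toy speckled scene; the actual filter variants and parameters used in the paper may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, cu=0.25):
    """Basic Lee speckle filter (one common formulation): blend each pixel
    with its local mean according to how the speckle coefficient of
    variation `cu` compares with the local coefficient of variation."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    var = np.maximum(sq_mean - mean * mean, 0)
    ci2 = var / np.maximum(mean * mean, 1e-12)   # local CoV squared
    w = np.clip(1.0 - (cu * cu) / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return mean + w * (img - mean)

# Toy SAR-like scene: two reflectivity levels with multiplicative
# gamma-distributed speckle of unit mean (4-look model).
rng = np.random.default_rng(1)
scene = np.where(np.arange(128)[None, :] < 64, 1.0, 4.0) * np.ones((128, 128))
speckled = scene * rng.gamma(shape=4, scale=0.25, size=scene.shape)
filtered = lee_filter(speckled)
```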

Relevance: 20.00%

Abstract:

We present an experimental set-up, developed for the first time in India, for the determination of the mixing ratio and carbon isotopic ratio of air-CO2. The set-up includes traps for collection and extraction of CO2 from air samples using cryogenic procedures, followed by measurement of the CO2 mixing ratio using an MKS Baratron gauge and analysis of isotopic ratios using the dual inlet peripheral of a high-sensitivity isotope ratio mass spectrometer (IRMS), the MAT 253. The internal reproducibility (precision) for the delta C-13 measurement, established from repeat analyses of CO2, is +/- 0.03 parts per thousand. The set-up is calibrated with international carbonate and air-CO2 standards. An in-house air-CO2 mixture, `OASIS AIRMIX', is prepared by mixing CO2 from a high-purity cylinder with O-2 and N-2, and an aliquot of this mixture is routinely analyzed together with the air samples. The external reproducibilities for the measurement of the CO2 mixing ratio and the carbon isotopic ratio are +/- 7 mu mol.mol(-1) and +/- 0.05 parts per thousand (n = 169 in both cases), based on the mean difference between two aliquots of the reference air mixture analyzed during daily operation from November 2009 to December 2011. The correction due to the isobaric interference of N2O on air-CO2 samples is determined separately by analyzing mixtures of CO2 (of known isotopic composition) and N2O in varying proportions; a +0.2 parts per thousand correction in the delta C-13 value is determined for a N2O concentration of 329 ppb. As an application, we present results from an experiment conducted during the solar eclipse of 2010. The isotopic ratio and mixing ratio of CO2 in air samples collected during the event differ from those of neighbouring samples, suggesting the role of atmospheric inversion in trapping CO2 emitted from the urban atmosphere during the eclipse.

Relevance: 20.00%

Abstract:

In this paper, we explore fundamental limits on the number of tests required to identify a given number of ``healthy'' items from a large population containing a small number of ``defective'' items, in a nonadaptive group testing framework. Specifically, we derive mutual-information-based upper bounds on the number of tests required to identify the required number of healthy items. Our results show that an impressive reduction in the number of tests is achievable compared to the conventional approach of using classical group testing to first identify the defective items and then pick the required number of healthy items from the complement set. For example, to identify L healthy items out of a population of N items containing K defective items, when the tests are reliable, our results show that O(K(L - 1)/(N - K)) measurements are sufficient. In contrast, the conventional approach requires O(K log(N/K)) measurements. We derive our results in a general sparse-signal setup; hence, they are also applicable to other sparse-signal-based applications such as compressive sensing.
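
To see the gap numerically, the snippet below evaluates the two order-of-growth expressions for a hypothetical instance; constants and lower-order terms are dropped, so the values only indicate scaling.

```python
import math

# Hypothetical instance: N items total, K defective, L healthy items wanted.
N, K, L = 100_000, 100, 500

healthy_direct = K * (L - 1) / (N - K)   # O(K(L-1)/(N-K)): direct identification
via_classical = K * math.log(N / K)      # O(K log(N/K)): find defectives first

print(f"direct identification of healthy items: ~{healthy_direct:.1f}")
print(f"classical group testing, then complement: ~{via_classical:.1f}")
```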

Relevance: 20.00%

Abstract:

Typical image-guided diffuse optical tomographic image reconstruction procedures reduce the number of optical parameters to be reconstructed to the number of distinct regions identified in the structural information provided by the traditional imaging modality. This makes the image reconstruction problem less ill-posed than the traditional underdetermined case. Still, the methods deployed in this case are the same as those used for traditional diffuse optical image reconstruction, which involve a regularization term as well as computation of the Jacobian. A gradient-free Nelder-Mead simplex method is proposed here to perform the image reconstruction and is shown to provide solutions that closely match those obtained using established methods, even with highly noisy data. The proposed method also has the distinct advantage of being more efficient, as it is regularization-free and involves only repeated forward calculations. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
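
A schematic of the regularization-free idea, with a toy linear forward model standing in for the diffuse-optics physics: one optical parameter per segmented region is fitted by repeated forward runs inside scipy's Nelder-Mead simplex.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the forward model: it maps one optical parameter per
# segmented region to simulated boundary measurements (40 detectors,
# 3 regions). A real implementation would solve the diffusion equation.
A = np.random.default_rng(2).random((40, 3))
def forward(mu_per_region):
    return A @ mu_per_region

# Synthetic noisy data from hypothetical "true" region-wise parameters.
true_mu = np.array([0.01, 0.02, 0.005])
data = forward(true_mu) + 1e-4 * np.random.default_rng(3).standard_normal(40)

# Gradient-free, regularization-free fit: no Jacobian, no penalty term,
# only repeated forward calculations inside the simplex search.
res = minimize(lambda mu: np.sum((forward(mu) - data) ** 2),
               x0=np.full(3, 0.01), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12})
print(res.x)   # should be close to true_mu
```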

Relevance: 20.00%

Abstract:

The contour tree is a topological abstraction of a scalar field that captures the evolution of level-set connectivity. It is an effective representation for visual exploration and analysis of scientific data. We describe a work-efficient, output-sensitive, and scalable parallel algorithm for computing the contour tree of a scalar field defined on a domain represented by either an unstructured mesh or a structured grid. A hybrid implementation of the algorithm using the GPU and a multi-core CPU can compute the contour tree of an input containing 16 million vertices in less than ten seconds, with a speedup factor of up to 13. Experiments based on an implementation in a multi-core CPU environment show near-linear speedup for large data sets.
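
The parallel algorithm itself is more involved; as background, here is a minimal serial sketch of the closely related join tree (the contour tree combines it with the analogous split tree), computed with union-find over vertices swept in decreasing scalar order.

```python
def join_tree(values, edges):
    """Serial join-tree sketch: sweep vertices in decreasing scalar order
    and track superlevel-set components with union-find. parent[u] = v
    records a join-tree arc from u down to v."""
    order = sorted(range(len(values)), key=lambda v: -values[v])
    uf, lowest, parent = {}, {}, {}

    def find(v):
        while uf[v] != v:
            uf[v] = uf[uf[v]]            # path halving
            v = uf[v]
        return v

    for v in order:
        uf[v] = v
        for u in edges.get(v, []):
            if u in uf:                   # u has a higher value, already swept
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[lowest[ru]] = v   # component ru now drains into v
                    uf[ru] = rv
        lowest[find(v)] = v               # v is the lowest vertex of its component
    return parent

# 6-vertex graph with scalar values; the maxima at vertices 0 and 2 join at 1.
vals = [5, 3, 4, 1, 2, 0]
adj = {0: [1], 1: [0, 2, 3], 2: [1, 4], 3: [1, 5], 4: [2], 5: [3]}
print(join_tree(vals, adj))   # {0: 1, 2: 1, 1: 4, 4: 3, 3: 5}
```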

Relevance: 20.00%

Abstract:

We propose and experimentally demonstrate a three-dimensional (3D) image reconstruction methodology based on Taylor series approximation (TSA) in a Bayesian image reconstruction formulation. TSA incorporates the requirement of analyticity in the image domain and acts as a finite impulse response filter. The technique is validated on images obtained from widefield and confocal laser scanning fluorescence microscopy and from two-photon excited 4pi (2PE-4pi) fluorescence microscopy. Studies on simulated 3D objects, mitochondria-tagged yeast cells (labeled with MitoTracker Orange), and mitochondrial networks (tagged with green fluorescent protein) show a signal-to-background improvement of 40% and a resolution enhancement from 360 nm to 240 nm. The technique can easily be extended to other imaging modalities, such as single plane illumination microscopy (SPIM), individual-molecule-localization SPIM, and stimulated emission depletion microscopy and its variants.

Relevance: 20.00%

Abstract:

In this work, we synthesized bulk amorphous GeGaS glass by the conventional melt-quenching technique. The amorphous nature of the glass is confirmed by X-ray diffraction. We fabricated channel waveguides on this glass using the ultrafast laser inscription technique. The waveguides are written 100 mu m below the surface of the glass, with a separation of 50 mu m, by focusing the laser beam into the material with a 0.67 NA lens. The laser parameters are set to 350 fs pulse duration at a 100 kHz repetition rate. A range of writing energies at translation speeds of 1 mm/s, 2 mm/s, 3 mm/s and 4 mm/s was investigated. After fabrication, the waveguide facets were ground and polished to optical quality to remove any tapering of the waveguides close to the edges. We measured the losses by the butt-coupling method, and the mode-field images of the waveguides were captured for comparison with the mode-field images of fibers. We also compared the asymmetry in the waveguide shapes and examined the photo-structural changes using Raman spectra.

Relevance: 20.00%

Abstract:

Large software systems are developed by composing multiple programs. If the programs manipulate and exchange complex data, such as network packets or files, it is essential to establish that they follow compatible data formats. Most of the complexity of data formats is associated with the headers. In this paper, we address compatibility of programs operating over headers of network packets, files, images, etc. As format specifications are rarely available, we infer the format associated with headers by a program as a set of guarded layouts. In terms of these formats, we define and check compatibility of (a) producer-consumer programs and (b) different versions of producer (or consumer) programs. A compatible producer-consumer pair is free of type mismatches and logical incompatibilities such as the consumer rejecting valid outputs generated by the producer. A backward compatible producer (resp. consumer) is guaranteed to be compatible with consumers (resp. producers) that were compatible with its older version. With our prototype tool, we identified 5 known bugs and 1 potential bug in (a) sender-receiver modules of Linux network drivers of 3 vendors and (b) different versions of a TIFF image library.
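
The paper infers guarded layouts from program code; purely to illustrate the concept, a header format can be modeled as guard-layout pairs, and a producer-consumer incompatibility shows up as a header the producer can emit but the consumer rejects. All field names and versions below are hypothetical.

```python
import struct

# Guarded layouts (illustrative): a predicate over already-parsed fields
# selects the struct layout for the rest of the header.
PRODUCER_LAYOUTS = [
    (lambda h: h["version"] == 1, ">HH"),    # v1: length, checksum
    (lambda h: h["version"] == 2, ">HHI"),   # v2: adds a 32-bit flags word
]
CONSUMER_LAYOUTS = [
    (lambda h: h["version"] == 1, ">HH"),    # an older consumer: v1 only
]

def select_layout(data, layouts):
    """Return the layout whose guard accepts the header, or None if the
    header is rejected."""
    version, = struct.unpack_from(">B", data, 0)
    header = {"version": version}
    for guard, layout in layouts:
        if guard(header):
            return layout
    return None

# Compatibility check over a finite space of probe headers: every header
# the producer can emit must also be accepted by the consumer.
for version in (1, 2):
    probe = bytes([version]) + bytes(10)
    if select_layout(probe, PRODUCER_LAYOUTS) and \
            select_layout(probe, CONSUMER_LAYOUTS) is None:
        print(f"incompatible: consumer rejects producer version {version}")
```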

Relevance: 20.00%

Abstract:

Imaging thick specimens at a large penetration depth is a challenge in biophysics and material science. Refractive-index mismatch results in spherical aberration, which is responsible for streaking artifacts, while the Poissonian nature of photon emission and scattering introduces noise in the acquired three-dimensional image. To overcome these unwanted artifacts, we introduce a two-fold approach: first, point-spread function modeling with correction for spherical aberration, and second, a maximum-likelihood reconstruction technique to eliminate noise. Experimental results on fluorescent nano-beads and fluorescently coated yeast cells (encaged in agarose gel) show substantial minimization of the artifacts. The noise is substantially suppressed, while the side-lobes (generated by the streaking effect) drop by 48.6% compared to the raw data at a depth of 150 mu m. The proposed imaging technique can be integrated with sophisticated fluorescence imaging techniques to render high resolution beyond the 150 mu m mark. (C) 2013 AIP Publishing LLC.
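
A minimal 1-D sketch of the maximum-likelihood step, assuming the classic Richardson-Lucy update for Poisson noise and a toy Gaussian PSF in place of the aberration-corrected PSF model.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Maximum-likelihood (Richardson-Lucy) deconvolution for Poisson
    noise: estimate <- estimate * (psf_flipped * (observed / (psf * estimate))),
    where * denotes convolution."""
    psf_flipped = psf[::-1]
    est = np.full_like(observed, observed.mean())   # positive initial guess
    for _ in range(n_iter):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        est *= np.convolve(ratio, psf_flipped, mode="same")
    return est

# Toy 1-D object, a Gaussian stand-in for the aberration-corrected PSF,
# blurring, then Poisson photon noise.
rng = np.random.default_rng(4)
x = np.zeros(128); x[40] = 200.0; x[80:85] = 60.0
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2); psf /= psf.sum()
observed = rng.poisson(np.convolve(x, psf, mode="same")).astype(float)
restored = richardson_lucy(observed, psf)
```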