146 results for swd: Image segmentation


Relevance: 20.00%

Abstract:

A necessary step in the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several binarization algorithms are available in the literature, so which one produces the best result for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents. Manually choosing the best from a set of binarized documents is time-consuming. To automate the selection of the best segmented document, we either need the ground truth of the document or an evaluation metric. If ground truth is available, precision and recall can be used to choose the best binarized document. But what if ground truth is not available? Can we devise a metric that evaluates binarized documents without it? We propose such a metric, based on eigenvalue decomposition, and evaluate it on the DIBCO and H-DIBCO datasets. The proposed method chooses the binarized document that is closest to the ground truth of the document.
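
Where ground truth is available, the precision/recall baseline mentioned above is straightforward to compute. Below is a minimal sketch of that baseline and of F-measure-based selection; the eigenvalue-decomposition metric itself is not detailed in this summary, so it is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def precision_recall(binarized: np.ndarray, ground_truth: np.ndarray):
    """Precision/recall of a binarized document against its ground truth.

    Both inputs are boolean arrays where True marks foreground (ink) pixels.
    """
    tp = np.logical_and(binarized, ground_truth).sum()
    fp = np.logical_and(binarized, ~ground_truth).sum()
    fn = np.logical_and(~binarized, ground_truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def best_binarization(candidates, ground_truth):
    # Rank candidate binarizations by F-measure and keep the best one.
    def f_measure(img):
        p, r = precision_recall(img, ground_truth)
        return 2 * p * r / (p + r) if p + r else 0.0
    return max(candidates, key=f_measure)
```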

Relevance: 20.00%

Abstract:

The demand for energy-efficient, low-weight structures has boosted the use of composite structures assembled with increasing quantities of structural adhesives. Bonded structures may be subjected to severe working environments, such as high temperature and moisture, under which the adhesive degrades over time. This reduces the strength of a joint and leads to premature failure. Measuring strains in the adhesive bondline at any point during service is beneficial, as the integrity of a joint can be assessed and preventive action taken before failure. This paper presents an experimental approach to measuring peel and shear strains in the adhesive bondline of composite single-lap joints using digital image correlation. Different sets of composite adhesive joints with varied bond quality were prepared and subjected to tensile load, during which digital images were taken and processed with digital image correlation software. The measured peel strain at the joint edge increased rapidly from the initiation of a crack until failure of the joint. The measured strains were used to compute the corresponding stresses assuming a plane strain condition, and the results were compared with stresses predicted by theoretical models, namely linear and nonlinear adhesive beam models; a similar trend in stress distribution was observed. Peel and shear strains also exhibited similar trends for both healthy and degraded joints. The maximum peel stress failure criterion was used to predict the failure load of a composite adhesive joint, and predicted and actual failure loads were compared. The failure loads predicted by the theoretical models were found to be higher than the actual failure loads for all the joints.
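
The plane-strain conversion from measured strains to stresses is standard linear elasticity. The sketch below shows that step only, assuming known adhesive properties E and nu; it is not the linear or nonlinear adhesive beam model used for comparison in the paper.

```python
def peel_stress_plane_strain(eps_peel, eps_axial, E, nu):
    """Normal (peel) stress from strains under plane strain (Hooke's law).

    eps_peel  : normal strain across the bondline
    eps_axial : in-plane normal strain along the overlap
    E, nu     : Young's modulus and Poisson's ratio of the adhesive
    """
    c = E / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return c * ((1.0 - nu) * eps_peel + nu * eps_axial)

def shear_stress(gamma, E, nu):
    """Shear stress from engineering shear strain, with G = E / (2(1 + nu))."""
    return E / (2.0 * (1.0 + nu)) * gamma
```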

Relevance: 20.00%

Abstract:

Fluorescence microscopy has become an indispensable tool in cell biology research due to its exceptional specificity and its ability to visualize subcellular structures with high contrast. It has the highest impact when applied in 4D mode, i.e., when recording 3D image information as a function of time, since this allows the study of dynamic cellular processes in their native environment. The main issue in 4D fluorescence microscopy is that the phototoxic effect of fluorescence excitation accumulates during 4D image acquisition to the extent that normal cell functions are altered. To avoid altering normal cell functioning, the excitation dose used for the individual 2D images constituting a 4D image must be minimized. Consequently, the noise level becomes very high, degrading the resolution. With the current state of technology, there is a minimum excitation dose required to ensure a resolution adequate for biological investigations, and this minimum is sufficient to damage light-sensitive cells such as yeast if 4D imaging is performed for an extended period of time, for example a complete cell cycle. Nevertheless, our recently developed deconvolution method resolves this conflict, forming an enabling technology for visualizing dynamic processes in light-sensitive cells for longer durations than ever before without perturbing normal cell functioning. The main goal of this article is to emphasize that there is still room to enable new kinds of experiments in cell biology research involving even longer 4D imaging, by improving deconvolution methods alone, without any new optical technology.

Relevance: 20.00%

Abstract:

Sparse recovery methods use ℓp-norm based regularization in the estimation problem, with 0 ≤ p ≤ 1. These methods are particularly useful when the number of independent measurements is limited, which is the typical case in the diffuse optical tomographic image reconstruction problem. These sparse recovery methods, along with an approximation of the ℓ0-norm, were deployed for the reconstruction of diffuse optical images. Their performance was compared systematically, using both numerical and gelatin phantom cases, to show that these methods hold promise for improving reconstructed image quality.
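
As an illustration of the reconstruction step, the sketch below implements plain ISTA for the ℓ1 endpoint (p = 1); the solvers actually compared, for general ℓp and the ℓ0 approximation, need different thresholding rules. The matrix A stands for a linearized DOT sensitivity (Jacobian) matrix, which is an assumption made for the sketch.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Generic l1 sketch; lp (0 <= p < 1) and l0-approximation variants
    replace the soft threshold with other shrinkage rules.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```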

Relevance: 20.00%

Abstract:

Approximate nearest neighbour field (ANNF) maps are commonly used by the computer vision and graphics communities for problems such as image completion, retargeting, and denoising. In this paper, we extend the scope of ANNF maps to medical image analysis, more specifically to optic disk detection in retinal images. In retinal image analysis, optic disk detection plays an important role since it simplifies the segmentation of the optic disk and other retinal structures. The proposed approach uses FeatureMatch, an ANNF algorithm, to find the correspondence between a chosen optic disk reference image and a given query image. This correspondence provides a distribution of patches in the query image that are closest to patches in the reference image. The likelihood map obtained from this distribution of patches is used for optic disk detection. The proposed approach is evaluated on five publicly available databases, DIARETDB0, DIARETDB1, DRIVE, STARE and MESSIDOR, with a total of 1540 images. We show experimentally that it achieves an average detection accuracy of 99% with an average computation time of 0.2 s per image.
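
A rough sketch of the likelihood-map idea follows, with exact nearest-neighbour search standing in for FeatureMatch (which is an approximate, and much faster, matcher); the patch size, stride, and grayscale input are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.neighbors import NearestNeighbors

def likelihood_map(reference, query, patch=8, stride=4):
    """Map of how well each query patch matches the optic-disk reference.

    Output is a coarse grid of negative matching distances, so higher
    values mean the query patch looks more like the reference.
    """
    ref = extract_patches_2d(reference, (patch, patch), max_patches=2000)
    ref = ref.reshape(len(ref), -1).astype(float)
    nn = NearestNeighbors(n_neighbors=1).fit(ref)

    h = (query.shape[0] - patch) // stride + 1
    w = (query.shape[1] - patch) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            y, x = i * stride, j * stride
            p = query[y:y + patch, x:x + patch].reshape(1, -1).astype(float)
            out[i, j] = -nn.kneighbors(p)[0][0, 0]   # closer match => higher value
    return out
```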

Relevance: 20.00%

Abstract:

Simulated boundary potential data for electrical impedance tomography (EIT) are generated with a MATLAB-based EIT data generator, and the resistivity reconstruction is evaluated with the Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software (EIDORS). Circular domains containing subdomains as inhomogeneities are defined in the MATLAB-based EIT data generator, and the boundary data are calculated by a constant-current simulation with the opposite current injection (OCI) method. The resistivity images reconstructed from the different boundary data sets are analyzed with image parameters to evaluate the reconstruction.
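
As a small illustration of the OCI protocol only (not of the MATLAB generator or EIDORS themselves), the sketch below lists the electrode pairs for diametrically opposite current injection on an n-electrode ring; the electrode count is an assumption.

```python
def opposite_injection_pairs(n_electrodes=16):
    """Electrode pairs for the opposite current injection (OCI) protocol.

    Current is driven between each electrode and the one diametrically
    opposite it on the circular boundary; voltages are then read from the
    remaining electrode pairs (not shown here).
    """
    half = n_electrodes // 2
    return [(i, (i + half) % n_electrodes) for i in range(n_electrodes)]

# e.g. opposite_injection_pairs(8) -> [(0, 4), (1, 5), ..., (7, 3)]
```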

Relevance: 20.00%

Abstract:

Magnetic resonance imaging (MRI) has been widely used in cancer treatment planning, taking advantage of the high resolution and high contrast it provides. The raw data collected in MRI can also be used to obtain temperature maps and has been explored for MR thermometry. This review article describes the methods used in MR thermometry, with an emphasis on reconstruction methods that can deliver these temperature maps in real time over a large region of interest. The article also proposes a prior-image constrained reconstruction method for temperature reconstruction in MR thermometry, and a systematic comparison with a state-of-the-art reconstruction method using ex-vivo tissue experiments is presented.
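
For context, temperature maps in MR thermometry are commonly derived from the proton resonance frequency (PRF) shift; the sketch below shows that standard relation, not the proposed prior-image constrained reconstruction. The coefficient values are typical literature figures, assumed here.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.58e6   # proton gyromagnetic ratio, rad/s/T
ALPHA = -0.01e-6              # PRF thermal coefficient, ~ -0.01 ppm/degC

def prf_temperature_change(phase, phase_ref, b0, te):
    """Temperature change map from the proton resonance frequency shift.

    delta_T = delta_phi / (alpha * gamma * B0 * TE), the standard PRF
    relation; phase maps are in radians, b0 in tesla, te (echo time)
    in seconds.
    """
    return (phase - phase_ref) / (ALPHA * GAMMA * b0 * te)
```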

Relevance: 20.00%

Abstract:

In this work, we explore the prospect of segmenting crowd flow in H.264 compressed videos using only motion vectors. The motion vectors are extracted by partially decoding the video sequence in the H.264 compressed domain. The region of interest, i.e., the crowd flow region, is extracted; the motion vectors spanning this region are preprocessed, and a collective representation of the motion vectors for the entire video is obtained. These motion vectors are then clustered using the EM algorithm. Finally, clusters that converge to a single flow are merged based on the Bhattacharyya distance between the histograms of the orientations of the motion vectors at the cluster boundaries. We implemented the proposed approach on the complex crowd flow dataset provided by [1] and compared our results using the Jaccard measure. Since crowd flow segmentation is performed in the compressed domain using only motion vectors, the proposed approach runs much faster than its pixel-domain counterparts while retaining better accuracy.
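
The merge criterion can be made concrete with the Bhattacharyya distance between normalized orientation histograms, as in the minimal sketch below; the merge threshold is an illustrative assumption, not the paper's value.

```python
import numpy as np

def bhattacharyya_distance(h1, h2):
    """Bhattacharyya distance between two orientation histograms.

    Histograms are normalized to sum to 1; a small distance means the
    motion-vector orientations on either side of a cluster boundary agree,
    so the two clusters can be merged into a single flow.
    """
    p = h1 / h1.sum()
    q = h2 / h2.sum()
    bc = np.sum(np.sqrt(p * q))           # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))

def should_merge(h1, h2, threshold=0.2):
    # Threshold is illustrative; the paper's merge criterion may differ.
    return bhattacharyya_distance(h1, h2) < threshold
```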

Relevance: 20.00%

Abstract:

To perform super resolution (SR) of low-resolution images, state-of-the-art methods learn a pair of low-resolution and high-resolution dictionaries from multiple images. These trained dictionaries are used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper, we propose using a single common image as the dictionary, in conjunction with approximate nearest neighbour fields (ANNF), to perform SR. By using a common source image, we bypass the learning phase and reduce the dictionary from a collection of hundreds of images to a single image. By adapting recent developments in ANNF computation to super resolution, we perform much faster and more accurate SR than existing techniques. To establish this claim, we compare the proposed algorithm against various state-of-the-art algorithms and show that we achieve better and faster reconstruction without any training.
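
A bare-bones sketch of the single-source-image idea follows, with exact nearest-neighbour matching standing in for the ANNF computation and naive decimation producing the low-resolution twin of the source; the patch size and the lack of overlap blending are simplifications.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def single_image_sr(low_res, source_hr, scale=2, patch=4):
    """Sketch of super resolution with a single common source image.

    The HR source is decimated to make an LR twin; LR patches of the input
    are matched against LR source patches (exact NN here, ANNF in the
    paper) and replaced by the co-located HR source patches.
    """
    source_lr = source_hr[::scale, ::scale]          # crude LR twin of the source
    coords, feats = [], []
    for y in range(source_lr.shape[0] - patch):
        for x in range(source_lr.shape[1] - patch):
            coords.append((y, x))
            feats.append(source_lr[y:y + patch, x:x + patch].ravel())
    nn = NearestNeighbors(n_neighbors=1).fit(np.array(feats, float))

    out = np.zeros((low_res.shape[0] * scale, low_res.shape[1] * scale))
    hp = patch * scale
    for y in range(0, low_res.shape[0] - patch, patch):
        for x in range(0, low_res.shape[1] - patch, patch):
            q = low_res[y:y + patch, x:x + patch].ravel()[None].astype(float)
            sy, sx = coords[nn.kneighbors(q)[1][0, 0]]
            out[y * scale:y * scale + hp, x * scale:x * scale + hp] = \
                source_hr[sy * scale:sy * scale + hp, sx * scale:sx * scale + hp]
    return out
```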

Relevance: 20.00%

Abstract:

Representing images and videos as compact codes has emerged as an important research interest in the vision community, in the context of web-scale image and video search. The recently proposed Vector of Locally Aggregated Descriptors (VLAD) has been shown to outperform existing retrieval techniques while giving the desired compact representation. VLAD aggregates the local features of an image in the feature space. In this paper, we propose to represent the local features extracted from an image as sparse codes over an over-complete dictionary obtained by the K-SVD dictionary training algorithm. The proposed VLAD aggregates the residuals in the space of these sparse codes to obtain a compact representation of the image. Experiments are performed on the 'Holidays' database using SIFT features, and the performance of the proposed method is compared with the original VLAD. A 4% increase in mean average precision (mAP) indicates the better retrieval performance of the proposed sparse-coding-based VLAD.
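
The baseline VLAD aggregation is sketched below over raw descriptors; in the proposed variant, the descriptors would first be replaced by their sparse codes over the K-SVD dictionary before the same residual aggregation. The signed-square-root normalization is the common choice in the VLAD literature, assumed here.

```python
import numpy as np

def vlad(descriptors, codebook):
    """Vector of Locally Aggregated Descriptors.

    For each local descriptor, accumulate its residual from the nearest
    codeword; concatenate the per-codeword residual sums, then apply
    power and L2 normalization.
    """
    k, d = codebook.shape
    assign = np.argmin(
        ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
    v = np.zeros((k, d))
    for i, c in enumerate(assign):
        v[c] += descriptors[i] - codebook[c]
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))      # signed square-root normalization
    n = np.linalg.norm(v)
    return v / n if n else v
```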

Relevance: 20.00%

Abstract:

An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and they are the shortest polynomial splines. Incorporating the B-spline potential function in the maximum a posteriori (MAP) reconstruction technique results in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study with the state-of-the-art maximum likelihood (ML) technique and MAP with a quadratic potential function shows the superiority of the proposed technique. The B-spline MAP technique can find applications in several fluorescence microscopy modalities, such as selective plane illumination microscopy, localization microscopy and STED.
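
The quadratic-potential MAP baseline can be sketched with Green's one-step-late update, as below; this is an assumed formulation of the baseline, and substituting the gradient of the B-spline potential for the quadratic one would give the proposed variant.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def map_em_quadratic(observed, psf, beta=0.01, n_iter=50):
    """One-step-late MAP-EM deconvolution with a quadratic smoothness prior.

    A sketch of the quadratic-potential baseline compared against in the
    paper, not the proposed B-spline method.
    """
    psf = psf / psf.sum()
    psf_t = psf[::-1, ::-1]                       # adjoint of the blur
    x = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = convolve(x, psf, mode="reflect")
        ratio = convolve(observed / np.maximum(blurred, 1e-12), psf_t,
                         mode="reflect")
        prior_grad = x - uniform_filter(x, size=3)   # gradient of quadratic prior
        x = x * ratio / np.maximum(1.0 + beta * prior_grad, 1e-12)
    return x
```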

Relevance: 20.00%

Abstract:

In this paper, we propose a super resolution (SR) method for synthetic images using FeatureMatch. Existing state-of-the-art SR methods are learning based: a pair of low-resolution and high-resolution dictionaries is trained, and this trained pair is used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper, we show that by using approximate nearest neighbour fields (ANNF) and a common source image, we can bypass the learning phase and use a single image as the dictionary, reducing it from a collection built from hundreds of training images to a single image. We show that by modifying the latest developments in ANNF computation to suit super resolution, we can perform much faster and more accurate SR than existing techniques. To establish this claim, we compare our algorithm against various state-of-the-art algorithms and show that we achieve better and faster reconstruction without any training phase.

Relevance: 20.00%

Abstract:

Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging, and becomes a key factor when imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of small processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (using multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic-potential-based MAP algorithm effectively deconvolves the images while suppressing noise. The multi-node, multi-threaded GPU implementation with the Compute Unified Device Architecture (CUDA) executes the iterative image reconstruction algorithm approximately 200-fold faster (for large datasets) than existing CPU-based systems.
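
The parallelization idea can be conveyed in Python with CuPy as a drop-in GPU array library, as in the hedged sketch below; the actual system is implemented directly in CUDA, and the FFT-based multiplicative update shown here is a generic stand-in for the paper's quadratic-potential MAP iteration.

```python
import numpy as np
import cupy as cp   # drop-in GPU replacement for most of the NumPy API

def deconvolve_gpu(observed, psf, n_iter=100):
    """FFT-based multiplicative deconvolution iteration on the GPU.

    Illustrates the GPU-parallelization idea only; assumes the PSF is
    centered at the array origin, as is usual for FFT-based convolution.
    """
    y = cp.asarray(observed, dtype=cp.float32)
    h = cp.fft.rfft2(cp.asarray(psf / psf.sum(), dtype=cp.float32),
                     s=observed.shape)
    x = cp.full(y.shape, float(y.mean()), dtype=cp.float32)
    for _ in range(n_iter):
        blurred = cp.fft.irfft2(cp.fft.rfft2(x) * h, s=y.shape)
        ratio = cp.fft.irfft2(cp.fft.rfft2(y / cp.maximum(blurred, 1e-6))
                              * cp.conj(h), s=y.shape)
        x *= ratio                       # multiplicative (RL-style) update
    return cp.asnumpy(x)
```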

Relevance: 20.00%

Abstract:

In big data image and video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea is to partition the training dataset into smaller clusters and learn a local dictionary for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary, where merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its memory usage and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising and present a comparative analysis with the standard learning techniques that use the entire database at a time, in terms of training and denoising performance. We observe that the split-and-merge algorithm yields a remarkable reduction in training time without significantly affecting denoising performance.
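
Below is a compact sketch of the split-and-merge pipeline, using scikit-learn's k-means and mini-batch dictionary learner in place of whichever clustering and dictionary-learning algorithms the paper actually uses; all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

def split_and_merge_dictionary(data, n_clusters=10, local_atoms=64,
                               global_atoms=256):
    """Split-and-merge sketch: local dictionaries per cluster, then a
    global dictionary learned on the pooled local atoms.

    data: (n_samples, n_features) training patches.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(data)
    atoms = []
    for c in range(n_clusters):
        chunk = data[labels == c]
        learner = MiniBatchDictionaryLearning(n_components=local_atoms)
        learner.fit(chunk)                  # local dictionary for this cluster
        atoms.append(learner.components_)
    pooled = np.vstack(atoms)               # atoms of all local dictionaries
    merger = MiniBatchDictionaryLearning(n_components=global_atoms)
    merger.fit(pooled)                      # merge: dictionary learned on atoms
    return merger.components_
```

Only the merge step touches the pooled atoms, which are far fewer than the original training samples; that is where the memory and training-time savings come from.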

Relevance: 20.00%

Abstract:

Fringe tracking and fringe order assignment have become central topics of current research in digital photoelasticity. Isotropic points (IPs) appearing in low fringe order zones are often overlooked or missed entirely, in conventional as well as digital photoelasticity. We aim to highlight image processing for characterizing IPs in an isochromatic fringe field. By resorting to a global analytical solution for a circular disk, the sensitivity of IPs to small changes in the far-field loading on the disk is highlighted. A local theory supplements the global closed-form solutions for three-, four-, and six-point loading configurations of the circular disk. The local theoretical concepts developed in this paper are demonstrated through digital image analysis of isochromatics in circular disks subjected to three- and four-point loads.
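
For reference, the standard stress-optic law below connects the isochromatic fringe order to the in-plane principal stress difference, and makes precise why isotropic points sit in zero-order zones; it is stated under the usual thin-model (2D photoelasticity) assumptions, not taken from this paper.

```latex
% Stress-optic law relating isochromatic fringe order N to the in-plane
% principal stresses (h: model thickness, f_sigma: material fringe value).
% An isotropic point is where the principal stresses coincide, so the
% fringe order vanishes there at any load level.
\[
  \sigma_1 - \sigma_2 = \frac{N\, f_\sigma}{h},
  \qquad
  \text{isotropic point: } \sigma_1 = \sigma_2 \;\Rightarrow\; N = 0 .
\]
```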