957 results for swd: Image segmentation


Relevance: 20.00%

Abstract:

The assembly of aerospace and automotive structures in recent years is increasingly carried out using adhesives. Adhesive joints have the advantages of uniform stress distribution and less stress concentration in the bonded region. Nevertheless, they may suffer from defects in the bond line and at the interface, or from an improper curing process. While defects such as voids, cracks and delaminations present in the adhesive bond line may be detected using different NDE methods, interfacial defects in the form of kissing bonds may go undetected. Attempts to detect kissing bonds using advanced ultrasonic methods such as nonlinear ultrasound and guided wave inspection have met with limited success, stressing the need for alternative methods. This paper concerns preliminary studies on the detectability of dry contact kissing bonds in adhesive joints using the Digital Image Correlation (DIC) technique. Adhesive joint samples containing varied areas of kissing bond were prepared using glass fiber reinforced polymer (GFRP) composite as the substrates and epoxy resin as the adhesive layer joining them. The samples were also subjected to conventional and high-power ultrasonic inspection. These samples were then loaded until failure to determine the bond strength, during which digital images were recorded and analyzed using the DIC method. This noncontact method could indicate the existence of kissing bonds at less than 50% of the failure load. Finite element studies showed a similar trend. Results from these preliminary studies are encouraging, and further tests on a larger set of samples are needed to study the experimental uncertainties and scatter associated with the method. (C) 2013 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

Text segmentation and localization algorithms are proposed for the born-digital image dataset. Binarization and edge detection are carried out separately on the three colour planes of the image. Connected components (CCs) obtained from the binarized image are thresholded based on their area and aspect ratio. CCs which contain sufficient edge pixels are retained. A novel approach is presented, where the text components are represented as nodes of a graph, with nodes corresponding to the centroids of the individual CCs. Long edges are broken from the minimum spanning tree of the graph. Pairwise height ratio is also used to remove likely non-text components. A new minimum spanning tree is created from the remaining nodes. Horizontal grouping is performed on the CCs to generate bounding boxes of text strings. Overlapping bounding boxes are removed using an overlap area threshold. Non-overlapping and minimally overlapping bounding boxes are used for text segmentation. Vertical splitting is applied to generate bounding boxes at the word level. The proposed method is applied to all the images of the test dataset, and values of precision, recall and H-mean are obtained using different approaches.
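The MST edge-breaking step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the centroids, the median-based cut rule and the `edge_factor` parameter are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def group_components(centroids, edge_factor=2.0):
    """Group CC centroids by breaking over-long edges of their MST."""
    d = squareform(pdist(np.asarray(centroids, dtype=float)))
    mst = minimum_spanning_tree(d).toarray()
    cutoff = edge_factor * np.median(mst[mst > 0])  # assumed cut rule
    mst[mst > cutoff] = 0                           # break long edges
    _, labels = connected_components(mst, directed=False)
    return labels

# Two horizontal runs of centroids separated vertically -> two text groups.
pts = [(0, 0), (10, 0), (20, 0), (0, 100), (10, 100)]
labels = group_components(pts)
```

Each resulting label corresponds to one candidate text group, on which the horizontal grouping and bounding-box steps would then operate.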

Relevance: 20.00%

Abstract:

In this paper we present a segmentation algorithm to extract foreground object motion in a moving-camera scenario without any preprocessing step such as tracking selected features, video alignment, or foreground segmentation. Viewing it as a curve-fitting problem on advected particle trajectories, we use RANSAC to find the polynomial that best fits the camera motion and to identify all trajectories that correspond to the camera motion. The remaining trajectories are those due to the foreground motion. Using the superposition principle, we subtract the camera-induced motion from the foreground trajectories and obtain the true object-induced trajectories. We show that our method performs on par with the state-of-the-art technique, with an execution-time speed-up of 10x-40x. We compare the results on real-world datasets such as UCF-ARG, UCF Sports and Liris-HARL. We further show that the method can be used to perform video alignment.
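The RANSAC-plus-superposition idea can be sketched on 1D trajectories. This is a toy analogue under stated assumptions: the polynomial degree, residual threshold and synthetic trajectories below are illustrative, not the paper's settings.

```python
import numpy as np

def ransac_camera_fit(t, trajs, degree=2, thresh=0.5, iters=100, seed=0):
    """RANSAC-fit a polynomial to the shared (camera) motion; the boolean
    mask marks trajectories explained by it, the rest are foreground."""
    rng = np.random.default_rng(seed)
    best, best_coeffs = np.zeros(len(trajs), dtype=bool), None
    for _ in range(iters):
        pick = rng.choice(len(trajs), size=degree + 1, replace=False)
        coeffs = np.polyfit(t, trajs[pick].mean(axis=0), degree)
        resid = np.abs(trajs - np.polyval(coeffs, t)).mean(axis=1)
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best, best_coeffs = inliers, coeffs
    return best, best_coeffs

t = np.linspace(0.0, 1.0, 30)
camera = 4 * t ** 2 - t                        # shared camera-induced motion
trajs = np.vstack([camera + 0.05 * np.sin(i + 7 * t) for i in range(6)]
                  + [camera + 3.0 + 2 * t, camera - 4.0])  # 2 foreground
mask, coeffs = ransac_camera_fit(t, trajs)
# Superposition step: subtract the fitted camera motion from the outliers.
object_motion = trajs[~mask] - np.polyval(coeffs, t)
```

The inlier mask identifies camera-consistent trajectories, and the final subtraction yields the object-induced trajectories.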

Relevance: 20.00%

Abstract:

A new technique is proposed for multisensor image registration by matching features using discrete particle swarm optimization (DPSO). The feature points are first extracted from the reference and sensed images using an improved Harris corner detector from the literature. From the extracted corner points, DPSO finds the three corresponding points in the sensed and reference images using multiobjective optimization of distance and angle conditions through an objective-switching technique. This yields the globally best-matched points, which are used to evaluate the affine transformation for the sensed image. The performance of the image registration is evaluated, and the results show that the proposed approach is efficient.
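The final step, recovering the affine transformation from the three matched point pairs, is a standard linear solve. A minimal sketch (the point coordinates are made up for illustration):

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve for the 2x3 affine matrix M mapping three matched points
    src -> dst, i.e. dst_i = M @ [x_i, y_i, 1]."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((3, 1))])   # rows [x, y, 1]
    # A @ M.T = dst  ->  solve the 3x3 system with a 2-column RHS.
    M = np.linalg.solve(A, dst).T
    return M

# Three correspondences consistent with a pure translation by (2, 3).
M = affine_from_points([(0, 0), (1, 0), (0, 1)],
                       [(2, 3), (3, 3), (2, 4)])
```

Three non-collinear correspondences determine all six affine parameters exactly, which is why DPSO only needs to find the best triplet.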

Relevance: 20.00%

Abstract:

Latent variable methods, such as PLCA (Probabilistic Latent Component Analysis), have been successfully used for the analysis of non-negative signal representations. In this paper, we formulate PLCS (Probabilistic Latent Component Segmentation), which models each time frame of a spectrogram as a spectral distribution. Given the signal spectrogram, the segmentation boundaries are estimated using a maximum-likelihood approach. For an efficient solution, the algorithm imposes a hard constraint that each segment is modelled by a single latent component. The hard constraint facilitates the solution of ML boundary estimation using dynamic programming. Unlike earlier ML segmentation techniques, the PLCS framework does not impose a parametric assumption. PLCS can be naturally extended to model coarticulation between successive phones. Experiments on the TIMIT corpus show that the proposed technique is promising compared to state-of-the-art speech segmentation algorithms.
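The dynamic-programming boundary search can be sketched with a simplified analogue: replacing the latent-component likelihood with a one-mean-per-segment Gaussian model, ML boundary estimation reduces to minimizing total within-segment squared error. This is a toy stand-in for the PLCS cost, not its spectral model.

```python
import numpy as np

def dp_segment(x, k):
    """Split sequence x into k contiguous segments minimizing total
    within-segment squared error (ML boundaries for a one-mean-per-segment
    Gaussian model). Returns the k-1 interior boundary indices."""
    n = len(x)
    csum = np.concatenate([[0.0], np.cumsum(x)])
    csq = np.concatenate([[0.0], np.cumsum(np.square(x))])
    def sse(i, j):                       # SSE of x[i:j] about its mean
        s, q, m = csum[j] - csum[i], csq[j] - csq[i], j - i
        return q - s * s / m
    INF = float('inf')
    dp = np.full((k + 1, n + 1), INF)
    back = np.zeros((k + 1, n + 1), dtype=int)
    dp[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = dp[seg - 1][i] + sse(i, j)
                if c < dp[seg][j]:
                    dp[seg][j], back[seg][j] = c, i
    bounds, j = [], n                    # backtrack the optimal boundaries
    for seg in range(k, 0, -1):
        j = back[seg][j]
        bounds.append(j)
    return sorted(bounds)[1:]            # drop the leading 0

segments = dp_segment(np.array([0.0] * 10 + [5.0] * 10 + [9.0] * 10), 3)
```

The same backtracking structure carries over when the segment cost comes from a latent-component likelihood instead of an SSE.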

Relevance: 20.00%

Abstract:

Typical image-guided diffuse optical tomographic image reconstruction procedures reduce the number of optical parameters to be reconstructed to the number of distinct regions identified in the structural information provided by the traditional imaging modality. This makes the image reconstruction problem less ill-posed compared to traditional underdetermined cases. Still, the methods deployed in this case are the same as those used for traditional diffuse optical image reconstruction, which involve a regularization term as well as computation of the Jacobian. A gradient-free Nelder-Mead simplex method is proposed here to perform the image reconstruction procedure and is shown to provide solutions that closely match those obtained using established methods, even with highly noisy data. The proposed method also has the distinct advantage of being more efficient owing to being regularization-free, involving only repeated forward calculations. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
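The regularization-free, forward-calculation-only character of the approach can be sketched as follows. The forward model here is a made-up linear map (a real pipeline would run the diffusion forward solver); the region count and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy "forward model": measurements are a fixed linear map of the per-region
# optical parameters. A real pipeline would call the diffusion forward solver.
rng = np.random.default_rng(1)
A = rng.normal(size=(12, 3))             # 12 measurements, 3 tissue regions
mu_true = np.array([0.01, 0.02, 0.05])   # "true" region absorption values
data = A @ mu_true

def misfit(mu):
    # Data-model mismatch only: no regularization term, no Jacobian.
    return np.sum((A @ mu - data) ** 2)

res = minimize(misfit, x0=np.full(3, 0.03), method='Nelder-Mead',
               options={'xatol': 1e-10, 'fatol': 1e-14,
                        'maxiter': 5000, 'maxfev': 5000})
```

Because only a handful of region parameters are sought, the derivative-free simplex search stays tractable even though each evaluation requires a full forward calculation.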

Relevance: 20.00%

Abstract:

In order to reduce motion artifacts in DSA, non-rigid image registration is commonly used before subtracting the mask from the contrast image. Since DSA registration requires a set of spatially non-uniform control points, a conventional MRF model is not very efficient. In this paper, we introduce the concept of pivotal and non-pivotal control points to address this, and propose a non-uniform MRF for DSA registration. We use quad-trees in a novel way to generate the non-uniform grid of control points. Our MRF formulation produces a smooth displacement field and therefore results in better artifact reduction than registering the control points independently. We achieve improved computational performance using pivotal control points without compromising artifact reduction. We have tested our approach on several clinical data sets, and present the results of quantitative analysis, clinical assessment and performance improvement on a GPU. (C) 2013 Elsevier Ltd. All rights reserved.
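A quad-tree-derived non-uniform control-point grid can be sketched as below. This is a generic variance-driven subdivision, assuming a simple split criterion; the paper's actual criterion and the pivotal/non-pivotal distinction are not reproduced here.

```python
import numpy as np

def quadtree_points(img, var_thresh=0.01, min_size=4):
    """Recursively split blocks whose intensity variance is high; the block
    corners form a spatially non-uniform control-point grid (dense where
    the image varies, sparse where it is flat)."""
    pts = set()
    def split(y, x, h, w):
        block = img[y:y + h, x:x + w]
        if h > min_size and w > min_size and block.var() > var_thresh:
            h2, w2 = h // 2, w // 2
            for dy, dx in ((0, 0), (0, w2), (h2, 0), (h2, w2)):
                split(y + dy, x + dx, h2, w2)
        else:  # leaf: keep its four corners as control points
            pts.update({(y, x), (y, x + w), (y + h, x), (y + h, x + w)})
    split(0, 0, *img.shape)
    return sorted(pts)

flat = np.zeros((16, 16))
detail = np.zeros((16, 16))
detail[:8, :8] = np.arange(64).reshape(8, 8) / 64.0   # textured quadrant
```

On the flat image the grid collapses to the four image corners, while the textured quadrant attracts a denser set of control points.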

Relevance: 20.00%

Abstract:

We propose and experimentally demonstrate a three-dimensional (3D) image reconstruction methodology based on Taylor series approximation (TSA) in a Bayesian image reconstruction formulation. TSA incorporates the requirement of analyticity in the image domain, and acts as a finite impulse response filter. This technique is validated on images obtained from widefield, confocal laser scanning fluorescence microscopy and two-photon excited 4pi (2PE-4pi) fluorescence microscopy. Studies on simulated 3D objects, mitochondria-tagged yeast cells (labeled with Mitotracker Orange) and mitochondrial networks (tagged with Green fluorescent protein) show a signal-to-background improvement of 40% and resolution enhancement from 360 to 240 nm. This technique can easily be extended to other imaging modalities (single plane illumination microscopy (SPIM), individual molecule localization SPIM, stimulated emission depletion microscopy and its variants).

Relevance: 20.00%

Abstract:

Imaging thick specimens at large penetration depth is a challenge in biophysics and material science. Refractive index mismatch results in spherical aberration that is responsible for streaking artifacts, while the Poissonian nature of photon emission and scattering introduces noise in the acquired three-dimensional image. To overcome these unwanted artifacts, we introduce a two-fold approach: first, point-spread function modeling with correction for spherical aberration and, second, a maximum-likelihood reconstruction technique to eliminate noise. Experimental results on fluorescent nano-beads and fluorescently coated yeast cells (encased in agarose gel) show substantial minimization of artifacts. The noise is substantially suppressed, whereas the side-lobes (generated by the streaking effect) drop by 48.6% compared to the raw data at a depth of 150 μm. The proposed imaging technique can be integrated into sophisticated fluorescence imaging techniques for rendering high resolution beyond the 150 μm mark. (C) 2013 AIP Publishing LLC.
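The classic maximum-likelihood reconstruction for Poisson-noise imaging is the Richardson-Lucy iteration, sketched below in 1D with an assumed Gaussian PSF. The paper's method additionally uses an aberration-corrected PSF model, which is not reproduced here.

```python
import numpy as np

def richardson_lucy(image, psf, iters=40):
    """Maximum-likelihood (Poisson) deconvolution via Richardson-Lucy."""
    est = np.full_like(image, image.mean())     # flat positive start
    psf_mirror = psf[::-1]
    for _ in range(iters):
        conv = np.convolve(est, psf, mode='same')
        ratio = image / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf_mirror, mode='same')
    return est

# 1D demo: a point source blurred by a Gaussian PSF, then restored.
xs = np.arange(-4, 5)
psf = np.exp(-xs ** 2 / (2 * 1.5 ** 2))
psf /= psf.sum()
truth = np.zeros(60)
truth[20] = 1.0
blurred = np.convolve(truth, psf, mode='same')
restored = richardson_lucy(blurred, psf)
```

The multiplicative update keeps the estimate non-negative and preserves total flux, which is what makes it well suited to photon-limited data.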

Relevance: 20.00%

Abstract:

A necessary step for the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several algorithms are available in the literature for binarizing a scanned document. Which binarization result is best for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents. Manually choosing the best from a set of binarized documents is time consuming. To automate the selection of the best segmented document, we either need the ground-truth of the document or an evaluation metric. If ground-truth is available, then precision and recall can be used to choose the best binarized document. But what if ground-truth is not available? Can we come up with a metric that evaluates these binarized documents? We therefore propose a metric to evaluate binarized document images using eigenvalue decomposition. We have evaluated this measure on the DIBCO and H-DIBCO datasets. The proposed method chooses the best binarized document, i.e., the one closest to the ground-truth of the document.
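The ground-truth-based selection baseline mentioned above is standard pixel-wise precision/recall; a minimal sketch (the eigenvalue-based metric itself is not reproduced, as its formulation is specific to the paper):

```python
import numpy as np

def precision_recall_f(binarized, ground_truth):
    """Pixel-wise precision, recall and F-measure (text pixels == True)."""
    b = np.asarray(binarized, dtype=bool)
    g = np.asarray(ground_truth, dtype=bool)
    tp = np.logical_and(b, g).sum()
    precision = tp / max(b.sum(), 1)
    recall = tp / max(g.sum(), 1)
    f_measure = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f_measure
```

Given a set of candidate binarizations, one would pick the candidate maximizing the F-measure against the ground truth; the proposed metric aims to rank candidates similarly without that ground truth.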

Relevance: 20.00%

Abstract:

The demand for energy-efficient, low-weight structures has boosted the use of composite structures assembled using increased quantities of structural adhesives. Bonded structures may be subjected to severe working environments, such as high temperature and moisture, due to which the adhesive degrades over a period of time. This reduces the strength of a joint and leads to premature failure. Measuring strains in the adhesive bondline at any point during service can be beneficial, as the integrity of a joint can be assessed and necessary preventive actions taken before failure. This paper presents an experimental approach for measuring peel and shear strains in the adhesive bondline of composite single-lap joints using digital image correlation. Different sets of composite adhesive joints with varied bond quality were prepared and subjected to tensile load, during which digital images were taken and processed using digital image correlation software. The measured peel strain at the joint edge showed a rapid increase with the initiation of a crack until failure of the joint. The measured strains were used to compute the corresponding stresses assuming a plane-strain condition, and the results were compared with stresses predicted using theoretical models, namely linear and nonlinear adhesive beam models. A similar trend in stress distribution was observed. Further comparison of peel and shear strains also exhibited a similar trend for both healthy and degraded joints. The maximum peel stress failure criterion was used to predict the failure load of a composite adhesive joint, and predicted and actual failure loads were compared. The predicted failure loads from the theoretical models were found to be higher than the actual failure loads for all the joints.
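The strain-to-stress conversion under the plane-strain assumption is standard Hooke's law; a minimal sketch, where the modulus and Poisson's ratio are assumed illustrative values for an epoxy adhesive, not the paper's measured properties:

```python
# Assumed illustrative adhesive properties, not the paper's measured values.
E, NU = 3.0e9, 0.35   # Young's modulus [Pa], Poisson's ratio

def plane_strain_stresses(eps_x, eps_y, gamma_xy):
    """Peel (normal) and shear stress from measured strains, plane strain."""
    c = E / ((1 + NU) * (1 - 2 * NU))
    sigma_peel = c * ((1 - NU) * eps_y + NU * eps_x)   # peel stress [Pa]
    tau_shear = E / (2 * (1 + NU)) * gamma_xy          # shear stress [Pa]
    return sigma_peel, tau_shear
```

Applying this pointwise to the DIC strain fields yields the stress distributions that are compared against the adhesive beam models.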

Relevance: 20.00%

Abstract:

Fluorescence microscopy has become an indispensable tool in cell biology research due to its exceptional specificity and its ability to visualize subcellular structures with high contrast. It has the highest impact when applied in 4D mode, i.e. when used to record 3D image information as a function of time, since this allows the study of dynamic cellular processes in their native environment. The main issue in 4D fluorescence microscopy is that the phototoxic effect of fluorescence excitation accumulates during 4D image acquisition to the extent that normal cell functions are altered. Hence, to avoid altering normal cell functioning, the excitation dose used for the individual 2D images constituting a 4D image must be minimized. Consequently, the noise level becomes very high, degrading the resolution. Given the current status of technology, there is a minimum required excitation dose to ensure a resolution that is adequate for biological investigations. This minimum is sufficient to damage light-sensitive cells such as yeast if 4D imaging is performed for an extended period of time, for example, imaging for a complete cell cycle. Nevertheless, our recently developed deconvolution method resolves this conflict, forming an enabling technology for visualizing dynamic processes of light-sensitive cells for durations longer than ever without perturbing normal cell functioning. The main goal of this article is to emphasize that there are still possibilities for enabling new kinds of experiment in cell biology research involving even longer 4D imaging, by improving deconvolution methods alone, without any new optical technologies.

Relevance: 20.00%

Abstract:

Sparse recovery methods utilize l_p-norm-based regularization in the estimation problem, with 0 <= p <= 1. These methods are particularly useful when the number of independent measurements is limited, which is typically the case for the diffuse optical tomographic image reconstruction problem. These sparse recovery methods, along with an approximation to utilize the l_0-norm, were deployed for the reconstruction of diffuse optical images. Their performance was compared systematically using both numerical and gelatin phantom cases to show that these methods hold promise in improving the reconstructed image quality.
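A common way to handle the non-convex l_p penalty (p < 1) is iteratively reweighted least squares; the sketch below is one generic variant under assumed settings (p, the regularization weight, the smoothing constant and the toy problem are all illustrative), not the specific solvers compared in the paper.

```python
import numpy as np

def irls_lp(A, y, p=0.5, lam=1e-3, iters=50, eps=1e-6):
    """l_p-regularized least squares (0 < p <= 1) via iteratively
    reweighted least squares: min ||Ax - y||^2 + lam * sum_i |x_i|^p."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]        # min-norm start
    for _ in range(iters):
        w = p * (np.abs(x) + eps) ** (p - 2)        # weights for |x_i|^p
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

# Underdetermined toy problem: 12 measurements, 20 unknowns, 2-sparse truth.
rng = np.random.default_rng(3)
n, m = 20, 12
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[3], x_true[11] = 1.0, -1.0
y = A @ x_true
x = irls_lp(A, y)
```

The reweighting drives small coefficients toward zero while leaving large ones nearly unpenalized, which is the mechanism by which l_p regularization promotes sparsity with few measurements.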

Relevance: 20.00%

Abstract:

Approximate Nearest Neighbour Field (ANNF) maps are commonly used by the computer vision and graphics communities to deal with problems like image completion, retargeting and denoising. In this paper, we extend the scope of ANNF maps to medical image analysis, more specifically to optic disk detection in retinal images. In the analysis of retinal images, optic disk detection plays an important role since it simplifies the segmentation of the optic disk and other retinal structures. The proposed approach uses FeatureMatch, an ANNF algorithm, to find the correspondence between a chosen optic disk reference image and any given query image. This correspondence provides a distribution of patches in the query image that are closest to patches in the reference image. The likelihood map obtained from this distribution of patches is used for optic disk detection. The proposed approach is evaluated on five publicly available databases (DIARETDB0, DIARETDB1, DRIVE, STARE and MESSIDOR), with a total of 1540 images. We show experimentally that our proposed approach achieves an average detection accuracy of 99% and an average computation time of 0.2 s per image. (C) 2013 Elsevier Ltd. All rights reserved.
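The patch-correspondence-to-likelihood-map idea can be sketched with a brute-force nearest-patch search standing in for FeatureMatch (which computes the same correspondences approximately and far faster). The patch size, similarity weighting and toy images are illustrative assumptions.

```python
import numpy as np

def patch_vote_map(ref, query, psz=3):
    """For every query patch, find its nearest reference patch and vote at
    the query location with a similarity weight. High values mark query
    regions resembling the reference (here, the optic-disk template)."""
    def patches(img):
        h, w = img.shape
        idx, vecs = [], []
        for i in range(h - psz + 1):
            for j in range(w - psz + 1):
                idx.append((i, j))
                vecs.append(img[i:i + psz, j:j + psz].ravel())
        return idx, np.array(vecs, dtype=float)
    _, rv = patches(ref)
    qi, qv = patches(query)
    votes = np.zeros(query.shape)
    for (i, j), v in zip(qi, qv):
        d = np.min(np.sum((rv - v) ** 2, axis=1))   # nearest ref patch
        votes[i:i + psz, j:j + psz] += np.exp(-d)   # similarity vote
    return votes

ref = np.ones((5, 5))                  # toy "reference" appearance
query = np.zeros((10, 10))
query[6:9, 6:9] = 1.0                  # region resembling the reference
votes = patch_vote_map(ref, query)
```

The detection step then reduces to locating the maximum of the vote (likelihood) map; the brute-force search here is O(N^2) in the number of patches, which is exactly the cost ANNF algorithms avoid.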

Relevance: 20.00%

Abstract:

Simulated boundary potential data for Electrical Impedance Tomography (EIT) are generated by a MATLAB-based EIT data generator, and the resistivity reconstruction is evaluated with the Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software (EIDORS). Circular domains containing subdomains as inhomogeneities are defined in the MATLAB-based EIT data generator, and the boundary data are calculated by a constant-current simulation with the opposite current injection (OCI) method. The resistivity images reconstructed from different boundary data sets are analyzed with image parameters to evaluate the reconstruction.