216 results for Image classification


Relevance:

20.00%

Publisher:

Abstract:

We propose and experimentally demonstrate a three-dimensional (3D) image reconstruction methodology based on Taylor series approximation (TSA) in a Bayesian image reconstruction formulation. TSA incorporates the requirement of analyticity in the image domain and acts as a finite impulse response filter. The technique is validated on images obtained from widefield, confocal laser scanning fluorescence microscopy and two-photon excited 4Pi (2PE-4Pi) fluorescence microscopy. Studies on simulated 3D objects, mitochondria-tagged yeast cells (labeled with MitoTracker Orange) and mitochondrial networks (tagged with green fluorescent protein) show a signal-to-background improvement of 40% and a resolution enhancement from 360 nm to 240 nm. The technique can easily be extended to other imaging modalities, such as single plane illumination microscopy (SPIM), individual molecule localization SPIM, stimulated emission depletion microscopy and its variants.
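
As a rough sketch of the style of algorithm described, the following combines a Richardson-Lucy maximum-likelihood iteration with a finite impulse response smoothing step; the uniform FIR kernel and iteration count are placeholders, not the Taylor-series-derived filter of the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def fir_regularized_rl(image, psf, n_iter=25, fir_size=3):
    """Richardson-Lucy deconvolution with an FIR smoothing step per iteration.

    The uniform kernel below stands in for the TSA-derived filter (assumption).
    """
    image = np.asarray(image, dtype=float)
    psf = np.asarray(psf, dtype=float)
    psf_flip = np.flip(psf)                             # mirrored PSF for the adjoint
    fir = np.ones((fir_size,) * image.ndim) / fir_size ** image.ndim
    estimate = np.full(image.shape, image.mean())
    for _ in range(n_iter):
        blurred = convolve(estimate, psf, mode="reflect")
        ratio = image / np.maximum(blurred, 1e-12)      # multiplicative ML update
        estimate = estimate * convolve(ratio, psf_flip, mode="reflect")
        estimate = convolve(estimate, fir, mode="reflect")  # smoothness/analyticity prior
    return estimate
```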

Relevance:

20.00%

Publisher:

Abstract:

This paper presents the classification, representation and extraction of deformation features in sheet-metal parts. The thickness is constant for these shape features, and hence they are also referred to as constant thickness features. A deformation feature is represented as a set of faces with a characteristic arrangement among the faces. Deformation of the base-sheet, or forming of material, creates Bends and Walls with respect to a base-sheet or a reference plane. These are referred to as Basic Deformation Features (BDFs). Compound deformation features having two or more BDFs are defined as characteristic combinations of Bends and Walls and represented as a graph called the Basic Deformation Features Graph (BDFG). The graph therefore represents a compound deformation feature uniquely. The characteristic arrangement of the faces and the types of bends belonging to a feature decide the type and nature of the deformation feature. Algorithms have been developed to extract and identify deformation features from a CAD model of sheet-metal parts. The proposed algorithm does not require folding and unfolding of the part as intermediate steps to recognize deformation features. Representations of typical features are illustrated, and results of extracting these deformation features from typical sheet-metal parts are presented and discussed. (C) 2013 Elsevier Ltd. All rights reserved.
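
To make the graph representation concrete, here is a minimal sketch of how a BDFG might be held in code; the node and edge attributes are illustrative assumptions, not the paper's data structure.

```python
from dataclasses import dataclass, field

@dataclass
class BDFNode:
    kind: str                      # "Wall" or "Bend"
    faces: list = field(default_factory=list)   # face ids from the CAD model
    bend_type: str | None = None   # e.g. "convex"/"concave" for Bends

@dataclass
class BDFGraph:
    """Basic Deformation Features Graph: nodes are BDFs, edges link adjacent BDFs."""
    nodes: dict = field(default_factory=dict)    # id -> BDFNode
    edges: set = field(default_factory=set)      # frozenset({id_a, id_b})

    def add_node(self, nid, node):
        self.nodes[nid] = node

    def connect(self, a, b):
        self.edges.add(frozenset((a, b)))

# A compound feature sketched as a characteristic Bend-Wall pattern:
g = BDFGraph()
g.add_node("w1", BDFNode("Wall", faces=[3, 4]))
g.add_node("b1", BDFNode("Bend", faces=[5], bend_type="convex"))
g.add_node("w2", BDFNode("Wall", faces=[6]))
g.connect("w1", "b1")
g.connect("b1", "w2")
```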

Relevance:

20.00%

Publisher:

Abstract:

Myopathies are muscular diseases in which muscle fibers degenerate due to factors such as nutrient deficiency, infection and mutations in myofibrillar proteins. The objective of this study is to identify biomarkers that distinguish various muscle mutants in Drosophila (fruit fly) using Raman spectroscopy. A Principal Component based Linear Discriminant Analysis (PC-LDA) classification model yielding >95% accuracy was developed to classify the different mutants, which represent various myopathies, according to their physiopathology.
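
A minimal PC-LDA pipeline of the kind described, sketched with scikit-learn on placeholder data (the array shapes, component count and labels are stand-ins for real Raman spectra):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 600))   # placeholder: (n_spectra, n_wavenumbers)
y = np.repeat([0, 1, 2], 30)     # placeholder mutant labels

# PCA for dimensionality reduction, then LDA on the principal component scores.
pc_lda = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
print(cross_val_score(pc_lda, X, y, cv=5).mean())   # cross-validated accuracy
```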

Relevance:

20.00%

Publisher:

Abstract:

We study consistency properties of surrogate loss functions for general multiclass classification problems, defined by a general loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be classification calibrated with respect to a loss matrix in this setting. We then introduce the notion of classification calibration dimension of a multiclass loss matrix, which measures the smallest 'size' of a prediction space for which it is possible to design a convex surrogate that is classification calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, as one application, we provide a different route from the recent result of Duchi et al. (2010) for analyzing the difficulty of designing 'low-dimensional' convex surrogates that are consistent with respect to pairwise subset ranking losses. We anticipate the classification calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems.
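
For reference, a hedged restatement of the calibration condition in this setting, under assumed notation (n labels, loss matrix L with columns \ell_y, surrogate \psi over a prediction space \mathcal{T} \subseteq \mathbb{R}^d, probability simplex \Delta_n); this is a paraphrase, not the paper's exact statement:

```latex
% \psi is classification calibrated w.r.t. L if there exists a decoding map
% pred : \mathcal{T} \to \{1,\dots,n\} such that for every p \in \Delta_n,
\inf_{\,t \in \mathcal{T} \,:\, \mathrm{pred}(t) \notin \arg\min_y p^\top \ell_y}
  p^\top \psi(t) \;>\; \inf_{t \in \mathcal{T}} p^\top \psi(t).
% The classification calibration dimension of L is then the smallest d for which
% a convex calibrated surrogate exists over some \mathcal{T} \subseteq \mathbb{R}^d.
```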

Relevance:

20.00%

Publisher:

Abstract:

We consider the problem of developing privacy-preserving machine learning algorithms in a distributed multiparty setting. Here different parties own different parts of a data set, and the goal is to learn a classifier from the entire data set without any party revealing any information about the individual data points it owns. Pathak et al. [7] recently proposed a solution to this problem in which each party learns a local classifier from its own data, and a third party then aggregates these classifiers in a privacy-preserving manner using a cryptographic scheme. The generalization performance of their algorithm is sensitive to the number of parties and the relative fractions of data owned by the different parties. In this paper, we describe a new differentially private algorithm for the multiparty setting that uses a stochastic gradient descent based procedure to directly optimize the overall multiparty objective rather than combining classifiers learned from optimizing local objectives. The algorithm achieves a slightly weaker form of differential privacy than that of [7], but provides improved generalization guarantees that do not depend on the number of parties or the relative sizes of the individual data sets. Experimental results corroborate our theoretical findings.
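
An illustrative noisy-gradient SGD step for a logistic-regression objective in the spirit described; the clipping threshold and noise scale are placeholders, and a real differential privacy guarantee would need noise calibrated to the gradient sensitivity plus privacy accounting.

```python
import numpy as np

def private_sgd_logreg(X, y, epochs=5, lr=0.1, noise_scale=0.5, clip=1.0, seed=0):
    """Sketch: SGD on the joint objective with per-example gradient noise.

    X: (n, d) features; y: labels in {-1, +1}. noise_scale and clip are
    illustrative, not a calibrated DP mechanism (assumption).
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (X[i] @ w)
            grad = -y[i] * X[i] / (1.0 + np.exp(margin))   # logistic loss gradient
            grad /= max(1.0, np.linalg.norm(grad) / clip)  # bound per-example influence
            grad += rng.normal(scale=noise_scale, size=w.shape)  # privacy noise
            w -= lr * grad
    return w
```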

Relevance:

20.00%

Publisher:

Abstract:

Transductive SVM (TSVM) is a well-known semi-supervised large margin learning method for binary text classification. In this paper we extend this method to multi-class and hierarchical classification problems. We point out that determining the labels of unlabeled examples with fixed classifier weights is a linear programming problem, and we devise an efficient technique for solving it. The method is applicable to general loss functions. We demonstrate the value of the new method using the large margin loss on a number of multi-class and hierarchical classification datasets. For the maxent loss, we show empirically that our method outperforms expectation regularization/constraint and posterior regularization methods, and is competitive with the version of the entropy regularization method that uses label constraints.
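
A sketch of the label-assignment LP for fixed classifier weights, written as a transportation problem with scipy.optimize.linprog; the class-count constraints and cost definition are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def assign_labels(costs, class_counts):
    """costs: (m, k) loss of assigning example i to class j under fixed weights.
    class_counts: required examples per class (must sum to m).
    The transportation LP relaxation has integral optimal vertices here."""
    m, k = costs.shape
    A_eq, b_eq = [], []
    for i in range(m):                        # each example gets exactly one label
        row = np.zeros(m * k); row[i * k:(i + 1) * k] = 1
        A_eq.append(row); b_eq.append(1)
    for j in range(k):                        # class-proportion constraints
        row = np.zeros(m * k); row[j::k] = 1
        A_eq.append(row); b_eq.append(class_counts[j])
    res = linprog(costs.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, 1))
    return res.x.reshape(m, k).argmax(axis=1)
```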

Relevance:

20.00%

Publisher:

Abstract:

Imaging thick specimens at a large penetration depth is a challenge in biophysics and materials science. Refractive index mismatch results in spherical aberration that is responsible for streaking artifacts, while the Poissonian nature of photon emission and scattering introduces noise in the acquired three-dimensional image. To overcome these unwanted artifacts, we introduce a two-fold approach: first, point-spread function modeling with correction for spherical aberration and, second, a maximum-likelihood reconstruction technique to eliminate noise. Experimental results on fluorescent nano-beads and fluorescently coated yeast cells (encaged in agarose gel) show substantial minimization of artifacts: the noise is substantially suppressed, while the side-lobes (generated by the streaking effect) drop by 48.6% compared to the raw data at a depth of 150 μm. The proposed imaging technique can be integrated into sophisticated fluorescence imaging techniques for rendering high resolution beyond the 150 μm mark. (C) 2013 AIP Publishing LLC.
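
As a simplified stand-in for the two-fold approach, the sketch below runs scikit-image's Richardson-Lucy maximum-likelihood deconvolution on a synthetic Poisson-noisy bead image; the Gaussian PSF is a placeholder for the paper's aberration-corrected, depth-dependent PSF model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

# Placeholder PSF: elongated along z, as a crude stand-in for a modeled,
# spherical-aberration-corrected point-spread function.
psf = np.zeros((9, 9, 9)); psf[4, 4, 4] = 1.0
psf = gaussian_filter(psf, sigma=(2.0, 1.0, 1.0))
psf /= psf.sum()

rng = np.random.default_rng(0)
truth = np.zeros((32, 32, 32)); truth[16, 16, 16] = 100.0   # a "nano-bead"
observed = rng.poisson(gaussian_filter(truth, (2.0, 1.0, 1.0)) + 0.5)

# Maximum-likelihood (Richardson-Lucy) reconstruction suppresses the noise.
restored = richardson_lucy(observed.astype(float), psf, num_iter=30, clip=False)
```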

Relevance:

20.00%

Publisher:

Abstract:

The demand for energy-efficient, low-weight structures has boosted the use of composite structures assembled using increased quantities of structural adhesives. Bonded structures may be subjected to severe working environments, such as high temperature and moisture, due to which the adhesive degrades over a period of time. This reduces the strength of a joint and leads to premature failure. Measuring strains in the adhesive bondline at any point during service is therefore beneficial, as the integrity of a joint can be assessed and preventive action taken before failure. This paper presents an experimental approach for measuring peel and shear strains in the adhesive bondline of composite single-lap joints using digital image correlation. Different sets of composite adhesive joints with varied bond quality were prepared and subjected to tensile load, during which digital images were taken and processed using digital image correlation software. The measured peel strain at the joint edge showed a rapid increase from the initiation of a crack until failure of the joint. The measured strains were used to compute the corresponding stresses assuming a plane strain condition, and the results were compared with stresses predicted using theoretical models, namely linear and nonlinear adhesive beam models. A similar trend in stress distribution was observed. Further comparison of peel and shear strains also exhibited a similar trend for both healthy and degraded joints. The maximum peel stress failure criterion was used to predict the failure load of a composite adhesive joint, and a comparison was made between predicted and actual failure loads. The predicted failure loads from the theoretical models were found to be higher than the actual failure loads for all the joints.
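
A toy digital image correlation step, assuming scikit-image's phase_cross_correlation for subset matching; production DIC software performs subpixel subset matching with smoothing and outlier handling, so this only shows the core idea of displacement tracking and strain from displacement gradients.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def subset_displacements(ref, deformed, win=32, step=16):
    """Track displacement of square subsets between two grayscale frames."""
    rows = []
    for r in range(0, ref.shape[0] - win, step):
        row = []
        for c in range(0, ref.shape[1] - win, step):
            shift, _, _ = phase_cross_correlation(
                ref[r:r + win, c:c + win], deformed[r:r + win, c:c + win],
                upsample_factor=10)          # subpixel shift estimate
            row.append(shift)                # (dy, dx) for this subset
        rows.append(row)
    u = np.array(rows)                       # displacement field, shape (R, C, 2)
    # Small-strain approximation: strains from displacement gradients.
    eyy = np.gradient(u[..., 0], step, axis=0)                     # peel-direction strain
    exy = 0.5 * (np.gradient(u[..., 0], step, axis=1)
                 + np.gradient(u[..., 1], step, axis=0))           # shear strain
    return u, eyy, exy
```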

Relevance:

20.00%

Publisher:

Abstract:

Fluorescence microscopy has become an indispensable tool in cell biology research due to its exceptional specificity and its ability to visualize subcellular structures with high contrast. It has the highest impact when applied in 4D mode, i.e. when recording 3D image information as a function of time, since this allows the study of dynamic cellular processes in their native environment. The main issue in 4D fluorescence microscopy is that the phototoxic effect of fluorescence excitation accumulates during 4D image acquisition to the extent that normal cell functions are altered. Hence, to avoid altering normal cell function, the excitation dose used for the individual 2D images constituting a 4D image must be minimized. Consequently, the noise level becomes very high, degrading the resolution. At the current state of the technology, there is a minimum excitation dose required to ensure a resolution adequate for biological investigations. This minimum is sufficient to damage light-sensitive cells such as yeast if 4D imaging is performed for an extended period of time, for example over a complete cell cycle. Nevertheless, our recently developed deconvolution method resolves this conflict, forming an enabling technology for visualizing dynamic processes in light-sensitive cells for longer durations than ever before without perturbing normal cell functioning. The main goal of this article is to emphasize that there is still scope for enabling new kinds of experiments in cell biology research involving even longer 4D imaging, by improving deconvolution methods alone, without any new optical technologies.

Relevance:

20.00%

Publisher:

Abstract:

Sparse recovery methods utilize ℓp-norm-based regularization in the estimation problem, with 0 ≤ p ≤ 1. These methods are particularly useful when the number of independent measurements is limited, which is typical of the diffuse optical tomographic image reconstruction problem. These sparse recovery methods, along with an approximation to the ℓ0-norm, were deployed for the reconstruction of diffuse optical images. Their performance was compared systematically using both numerical and gelatin phantom cases, showing that these methods hold promise for improving the reconstructed image quality.
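
The p = 1 case can be sketched with plain iterative soft-thresholding (ISTA) on a linearized forward model; J below is a placeholder for the DOT sensitivity (Jacobian) matrix.

```python
import numpy as np

def ista(J, y, lam=0.01, n_iter=500):
    """Minimize ||J x - y||^2 / 2 + lam * ||x||_1 via ISTA (the p = 1 case)."""
    L = np.linalg.norm(J, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(J.shape[1])
    for _ in range(n_iter):
        g = J.T @ (J @ x - y)              # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```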

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose a simple and effective approach to classify H.264 compressed videos by capturing orientation information from the motion vectors. Our major contribution is the computation of a Histogram of Oriented Motion Vectors (HOMV) for partially overlapping, hierarchical space-time cubes. HOMV is found to be very effective in defining the motion characteristics of these cubes. We then use a Bag of Features (BoF) approach to describe the video as a histogram of HOMV keywords, obtained using k-means clustering. The resulting video feature is found to be very effective in classifying videos. We demonstrate our results with experiments on two large publicly available video databases.
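
A sketch of HOMV for one space-time cube plus the BoF vocabulary step, assuming the motion vectors have already been parsed from the H.264 stream; the bin count and vocabulary size are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def homv(mvs, n_bins=8):
    """Histogram of oriented motion vectors for one space-time cube.

    mvs: (n, 2) array of (dx, dy) motion vectors (assumed already decoded)."""
    angles = np.arctan2(mvs[:, 1], mvs[:, 0])            # orientation in [-pi, pi]
    mags = np.hypot(mvs[:, 0], mvs[:, 1])                # magnitude-weighted votes
    hist, _ = np.histogram(angles, bins=n_bins,
                           range=(-np.pi, np.pi), weights=mags)
    return hist / max(hist.sum(), 1e-12)

# Bag of Features: cluster cube descriptors into a vocabulary, then describe
# each video as a histogram of its cubes' nearest keywords.
def bof_histograms(cube_descriptors_per_video, n_words=100):
    all_desc = np.vstack(cube_descriptors_per_video)
    km = KMeans(n_clusters=n_words, n_init=10).fit(all_desc)
    return [np.bincount(km.predict(d), minlength=n_words)
            for d in cube_descriptors_per_video]
```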

Relevance:

20.00%

Publisher:

Abstract:

Sparse representation based classification (SRC) is one of the most successful methods developed in recent times for face recognition. Optimal projection for sparse representation based classification (OPSRC) [1] provides a dimensionality reduction map intended to give optimal performance within the SRC framework. However, the computational complexity involved in this method is too high. Here, we propose a new projection technique using the data scatter matrix, which is computationally superior to the optimal projection method while achieving comparable classification accuracy with respect to OPSRC. The performance of the proposed approach is benchmarked on various publicly available face databases.
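
A sketch of a scatter-matrix-based projection in the spirit described: take the leading eigenvectors of the total data scatter matrix as the dimensionality-reduction map applied before SRC; whether this matches the paper's exact construction is an assumption.

```python
import numpy as np

def scatter_projection(X, d):
    """Project onto the top-d eigenvectors of the total data scatter matrix.

    X: (n_samples, n_features) training faces; returns projected data and map."""
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc                        # total scatter matrix
    vals, vecs = np.linalg.eigh(S)       # eigenvalues in ascending order
    P = vecs[:, -d:]                     # top-d scatter directions
    return X @ P, P
```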

Relevance:

20.00%

Publisher:

Abstract:

The maximum entropy approach to classification is very well studied in applied statistics and machine learning, and almost all the methods that exist in the literature are discriminative in nature. In this paper, we introduce a generative maximum entropy classification method with feature selection for high-dimensional data such as text datasets. To tackle the curse of dimensionality of large data sets, we employ a conditional independence assumption (naive Bayes) and perform feature selection simultaneously, by enforcing 'maximum discrimination' between the estimated class-conditional densities. For two-class problems, the proposed method uses Jeffreys (J) divergence to discriminate between the class-conditional densities. To extend our method to the multi-class case, we propose a completely new approach based on a multi-distribution divergence: we replace Jeffreys divergence by Jensen-Shannon (JS) divergence to discriminate the conditional densities of multiple classes. To reduce computational complexity, we employ a modified Jensen-Shannon divergence (JS-GM) based on the AM-GM inequality. We show that the resulting divergence is a natural generalization of Jeffreys divergence to the multiple-distribution case. On the theoretical side, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using J-divergence emerges naturally in binary classification. The performance of the proposed algorithms is demonstrated in a comparative study on high-dimensional text and gene expression datasets, which shows that our methods scale very well with large dimensionality.
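
A binary-class sketch of per-feature J-divergence scoring under a naive Bayes factorization; the per-feature Gaussian class-conditional model is an assumption for illustration (the paper targets text and count data).

```python
import numpy as np

def jeffreys_scores(X, y):
    """Per-feature Jeffreys (symmetric KL) divergence between the two
    class-conditional densities, under a per-feature Gaussian assumption."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    v0 = X0.var(axis=0) + 1e-9           # variance floors for stability
    v1 = X1.var(axis=0) + 1e-9
    d2 = (m0 - m1) ** 2
    # Closed form for two univariate Gaussians: KL(p||q) + KL(q||p).
    return 0.5 * ((v0 + d2) / v1 + (v1 + d2) / v0) - 1.0

def select_features(X, y, k=100):
    return np.argsort(jeffreys_scores(X, y))[-k:]   # top-k most discriminative
```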

Relevance:

20.00%

Publisher:

Abstract:

Elastic net regularizers have shown much promise in designing sparse classifiers for linear classification. In this work, we propose an alternating optimization approach to solve the dual problems of elastic-net-regularized linear Support Vector Machines (SVMs) and logistic regression (LR). One of the sub-problems turns out to be a simple projection; the other can be solved using dual coordinate descent methods developed for non-sparse L2-regularized linear SVMs and LR, without altering their iteration complexity or convergence properties. Experiments on very large datasets indicate that the proposed dual coordinate descent-projection (DCD-P) methods are fast and achieve comparable generalization performance after the first pass through the data, with extremely sparse models.
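
The paper's DCD-P solver is not reproduced here; as a readily available stand-in, scikit-learn's SGDClassifier with an elastic-net penalty fits the same kind of sparse linear model and serves as a baseline for comparison.

```python
from sklearn.linear_model import SGDClassifier

# Baseline elastic-net linear classifier (not the DCD-P method of the paper):
clf = SGDClassifier(loss="log_loss",       # logistic regression; "hinge" for SVM
                    penalty="elasticnet",
                    alpha=1e-4,            # overall regularization strength (placeholder)
                    l1_ratio=0.5,          # mix between L1 and L2 (placeholder)
                    max_iter=5)            # few passes, mirroring the first-pass regime
# Usage: clf.fit(X_train, y_train); model sparsity via (clf.coef_ == 0).mean()
```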