189 results for Image computation

at Indian Institute of Science - Bangalore - India


Relevance:

30.00%

Publisher:

Abstract:

A study of the essential features of piston rings in the cylinder liner of an internal combustion engine reveals that the lubrication problem posed by them is basically that of a slider bearing. According to steady-flow hydrodynamics, the oil film thickness becomes zero at the dead centre positions, since the velocity U = 0. In practice, however, such a phenomenon is not supported by the observed wear rates of piston rings and cylinder liners. This can be explained by including the "squeeze" action term in the …
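
For context, the squeeze term enters through the unsteady form of the Reynolds equation; a standard one-dimensional statement (not reproduced from the truncated abstract above) is:

$$ \frac{\partial}{\partial x}\!\left(\frac{h^{3}}{\eta}\,\frac{\partial p}{\partial x}\right) = 6U\,\frac{\partial h}{\partial x} + 12\,\frac{\partial h}{\partial t} $$

where h is the film thickness, p the film pressure, η the viscosity and U the sliding velocity. The last term is the squeeze action: at the dead centres U = 0 but ∂h/∂t ≠ 0, so the film can still carry load and does not collapse to zero thickness.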

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present a growing and pruning radial basis function based no-reference (NR) image quality model for JPEG-coded images. The quality of the images is estimated without referring to their original images. The features for predicting the perceived image quality are extracted by considering key human visual sensitivity factors such as edge amplitude, edge length, background activity and background luminance. Image quality estimation involves computation of the functional relationship between HVS features and subjective test scores. Here, the problem of quality estimation is transformed into a function approximation problem and solved using a GAP-RBF network. The GAP-RBF network uses a sequential learning algorithm to approximate the functional relationship. The computational complexity and memory requirement of the GAP-RBF algorithm are lower than those of batch learning algorithms. Also, the GAP-RBF algorithm finds a compact image quality model and does not require retraining when new image samples are presented. Experimental results show that the GAP-RBF image quality model emulates the mean opinion score (MOS). The subjective test results of the proposed metric are compared with the JPEG no-reference image quality index as well as the full-reference structural similarity image quality index, and the proposed metric is observed to outperform both.
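
As a rough illustration of the regression step only (a fixed-size Gaussian RBF fit, not the paper's growing-and-pruning GAP-RBF sequential learner), a network can map assumed HVS feature vectors to subjective scores; all names and data below are illustrative:

```python
import numpy as np

# Minimal Gaussian RBF regressor as a simplified stand-in for a GAP-RBF
# network: centres are fixed (no growing/pruning) and output weights are
# found by least squares. Inputs are assumed HVS feature vectors (edge
# amplitude, edge length, background activity, background luminance);
# targets are subjective scores (e.g., MOS). Everything here is synthetic.

def rbf_design(X, centres, width):
    # Pairwise squared distances between samples and centres.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, n_centres=20, width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=n_centres, replace=False)]
    Phi = rbf_design(X, centres, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares output weights
    return centres, width, w

def predict_rbf(X, centres, width, w):
    return rbf_design(X, centres, width) @ w

# Toy usage with synthetic 4-D HVS-like features and synthetic MOS targets.
X = np.random.rand(200, 4)
y = 5.0 - 3.0 * X[:, 0] + 0.5 * np.random.randn(200)
c, s, w = fit_rbf(X, y)
print(predict_rbf(X[:5], c, s, w))
```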

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To achieve fast reconstruction of the interior optical parameter distribution of a tissue and a tissue-mimicking phantom from boundary measurement data in diffuse optical tomography (DOT), using a new approach called Broyden-based model iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR). Methods: DOT is a nonlinear and ill-posed inverse problem. The commonly used Newton-based MOBIIR algorithm requires repeated evaluation of the Jacobian, which consumes the bulk of the computation time for reconstruction. In this study, we propose a Broyden-based accelerated scheme for Jacobian computation, combined with a conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information that can be obtained from the forward solution of the diffusion equation. This approach reduces the computational time manyfold by approximating the system Jacobian successively through low-rank updates. Results: Simulation studies have been carried out with single as well as multiple inhomogeneities. The algorithms are validated using an experimental study carried out on pork tissue with fat acting as an inhomogeneity. The results obtained through the proposed BMOBIIR and ABMOBIIR approaches are compared with those of the Newton-based MOBIIR algorithm. The mean squared error and execution time are used as metrics for comparing the reconstruction results. Conclusions: We have shown through experimental and simulation studies that the Broyden-based MOBIIR and adjoint Broyden-based methods are capable of reconstructing single as well as multiple inhomogeneities in tissue and a tissue-mimicking phantom. The Broyden MOBIIR and adjoint Broyden MOBIIR methods are computationally simple and result in much faster implementations because they avoid direct evaluation of the Jacobian. The image reconstructions have been carried out with different initial values using the Newton, Broyden, and adjoint Broyden approaches. These algorithms work well when the initial guess is close to the true solution; however, when the initial guess is far from the true solution, Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data. (C) 2011 American Association of Physicists in Medicine. [DOI: 10.1118/1.3531572]
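
The heart of the speedup is the Broyden secant (rank-one) update of the Jacobian, which replaces its repeated direct evaluation. A minimal sketch of that update on a generic nonlinear system (not the paper's DOT forward solver or the full BMOBIIR/ABMOBIIR scheme) might look like this:

```python
import numpy as np

# Low-rank (secant) Jacobian update used by Broyden-type schemes. F is a
# generic forward model here; in DOT it would be the diffusion-equation
# forward solver. Illustrative sketch only.

def broyden_update(J, dx, dF):
    """Good Broyden rank-one update: J_new = J + (dF - J dx) dx^T / (dx^T dx)."""
    return J + np.outer(dF - J @ dx, dx) / (dx @ dx)

def broyden_solve(F, x0, J0, n_iter=20):
    x, J = x0.copy(), J0.copy()
    Fx = F(x)
    for _ in range(n_iter):
        dx = np.linalg.lstsq(J, -Fx, rcond=None)[0]   # Newton-type step with approximate J
        x_new = x + dx
        Fx_new = F(x_new)
        J = broyden_update(J, dx, Fx_new - Fx)         # cheap Jacobian refresh
        x, Fx = x_new, Fx_new
    return x

# Toy usage: find x with F(x) = 0 for a small nonlinear system.
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
x = broyden_solve(F, x0=np.array([0.5, 0.5]), J0=np.eye(2))
print(x)   # should approach [1, 1]
```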

Relevance:

30.00%

Publisher:

Abstract:

This paper presents recursive algorithms for fast computation of Legendre and Zernike moments of a grey-level image intensity distribution. For a binary image, a contour integration method is developed for the evaluation of Legendre moments using only the boundary information. A method for recursive calculation of Zernike polynomial coefficients is also given. A square-to-circular image transformation scheme is introduced to minimize the computation involved in Zernike moment functions. The recursive formulae can also be used in inverse moment transforms to reconstruct the original image from moments. The mathematical framework of the algorithms is given in detail, and illustrated with binary and grey-level images.
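
As a small illustration of the recursive idea (a direct sketch for Legendre moments of a grey-level image via the three-term polynomial recurrence, not the paper's contour-integration or Zernike variants):

```python
import numpy as np

# Legendre moments of an N x M grey-level image using the recurrence
# P_{n+1}(x) = ((2n+1) x P_n(x) - n P_{n-1}(x)) / (n+1).

def legendre_polys(order, x):
    P = np.empty((order + 1, x.size))
    P[0] = 1.0
    if order >= 1:
        P[1] = x
    for n in range(1, order):
        P[n + 1] = ((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1)
    return P

def legendre_moments(img, order):
    N, M = img.shape
    x = -1.0 + (2.0 * np.arange(N) + 1.0) / N   # pixel centres mapped to [-1, 1]
    y = -1.0 + (2.0 * np.arange(M) + 1.0) / M
    Px, Py = legendre_polys(order, x), legendre_polys(order, y)
    # lambda_pq = (2p+1)(2q+1)/(N*M) * sum_ij P_p(x_i) P_q(y_j) f(i, j)
    norm = np.outer(2 * np.arange(order + 1) + 1, 2 * np.arange(order + 1) + 1) / (N * M)
    return norm * (Px @ img @ Py.T)

img = np.random.rand(64, 64)
L = legendre_moments(img, order=4)
print(L.shape)   # (5, 5) moment matrix up to order 4
```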

Relevance:

30.00%

Publisher:

Abstract:

Acoustic modeling using mixtures of multivariate Gaussians is the prevalent approach for many speech processing problems. Computing likelihoods against a large set of Gaussians is required as part of many speech processing systems, and it is the computationally dominant phase for Large Vocabulary Continuous Speech Recognition (LVCSR) systems. We express the likelihood computation as a multiplication of matrices representing augmented feature vectors and Gaussian parameters. The computational gain of this approach over traditional methods comes from exploiting the structure of these matrices and from efficient implementation of their multiplication. In particular, we explore direct low-rank approximation of the Gaussian parameter matrix and indirect derivation of low-rank factors of the Gaussian parameter matrix by optimum approximation of the likelihood matrix. We show that both methods lead to similar speedups, but the latter has far less impact on recognition accuracy. Experiments on the 1,138 word vocabulary RM1 task and the 6,224 word vocabulary TIMIT task using the Sphinx 3.7 system show that, for a typical case, the matrix multiplication based approach leads to an overall speedup of 46% on the RM1 task and 115% on the TIMIT task. Our low-rank approximation methods provide a way of trading off recognition accuracy for a further increase in computational performance, extending overall speedups up to 61% for RM1 and 119% for TIMIT, for an increase in word error rate (WER) from 3.2% to 3.5% on RM1 and no increase in WER on TIMIT. We also express the pairwise Euclidean distance computation phase in Dynamic Time Warping (DTW) in terms of matrix multiplication, leading to a saving in computational operations. In our experiments using an efficient implementation of matrix multiplication, this leads to a speedup of 5.6 in computing the pairwise Euclidean distances and an overall speedup of up to 3.25 for DTW.
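
To make the reformulation concrete, here is a minimal sketch (illustrative, not the Sphinx implementation) of how log-likelihoods of T frames against G diagonal-covariance Gaussians reduce to one matrix product between augmented features [x², x, 1] and a per-Gaussian parameter matrix:

```python
import numpy as np

# L = A @ W.T gives all T x G log-likelihoods at once, where A holds the
# augmented features and W the per-Gaussian parameters.

def augment(X):
    T = X.shape[0]
    return np.hstack([X ** 2, X, np.ones((T, 1))])

def gaussian_param_matrix(means, variances):
    # Row g: [-0.5/var_g, mean_g/var_g, sum_d(-0.5*mean^2/var - 0.5*log(2*pi*var))]
    const = (-0.5 * means ** 2 / variances
             - 0.5 * np.log(2.0 * np.pi * variances)).sum(axis=1, keepdims=True)
    return np.hstack([-0.5 / variances, means / variances, const])

def loglik_matrix(X, means, variances):
    return augment(X) @ gaussian_param_matrix(means, variances).T

# Consistency check against the direct log-Gaussian formula for one (x, g) pair.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4)); mu = rng.standard_normal((5, 4)); var = rng.random((5, 4)) + 0.5
L = loglik_matrix(X, mu, var)
direct = (-0.5 * ((X[0] - mu[2]) ** 2 / var[2]) - 0.5 * np.log(2 * np.pi * var[2])).sum()
print(np.allclose(L[0, 2], direct))   # True
```

The low-rank variants described in the abstract then factor the parameter matrix W (directly, or via the likelihood matrix) so that the product can be formed even more cheaply.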

Relevance:

30.00%

Publisher:

Abstract:

Typical image-guided diffuse optical tomographic image reconstruction procedures reduce the number of optical parameters to be reconstructed to the number of distinct regions identified in the structural information provided by the traditional imaging modality. This makes the image reconstruction problem less ill-posed than the traditional underdetermined case. Still, the methods deployed in this case are the same as those used for traditional diffuse optical image reconstruction, which involve a regularization term as well as computation of the Jacobian. A gradient-free Nelder-Mead simplex method is proposed here to perform the image reconstruction procedure and is shown to provide solutions that closely match those obtained using established methods, even with highly noisy data. The proposed method also has the distinct advantage of being more efficient owing to being regularization-free, involving only repeated forward calculations. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
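
Because only a few region-wise parameters are sought, the search can be handed to an off-the-shelf simplex optimizer. A sketch under assumptions (forward_model below is a hypothetical stand-in for the diffusion-equation forward solver):

```python
import numpy as np
from scipy.optimize import minimize

# Gradient-free, regularization-free reconstruction in the spirit described
# above: a Nelder-Mead simplex search over the data misfit, with only
# repeated forward calculations and no Jacobian.

def forward_model(mu_a_per_region):
    # Placeholder: maps region-wise absorption values to boundary measurements.
    A = np.array([[1.0, 0.3], [0.2, 1.1], [0.7, 0.5], [0.4, 0.9]])
    return np.exp(-A @ mu_a_per_region)

measured = forward_model(np.array([0.01, 0.02])) * (1 + 0.01 * np.random.randn(4))

def misfit(mu_a):
    return np.sum((forward_model(mu_a) - measured) ** 2)   # no regularizer

result = minimize(misfit, x0=np.array([0.005, 0.005]), method='Nelder-Mead')
print(result.x)   # recovered region-wise absorption coefficients
```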

Relevance:

30.00%

Publisher:

Abstract:

To perform super resolution of low-resolution images, state-of-the-art methods are based on learning a pair of low-resolution and high-resolution dictionaries from multiple images. These trained dictionaries are used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper we propose using a single common image as the dictionary, in conjunction with approximate nearest neighbour fields (ANNF), to perform super resolution (SR). By using a common source image, we are able to bypass the learning phase and to reduce the dictionary from a collection of hundreds of images to a single image. By adapting recent developments in ANNF computation to suit super resolution, we are able to perform much faster and more accurate SR than existing techniques. To establish this claim, we compare the proposed algorithm against various state-of-the-art algorithms and show that we are able to achieve better and faster reconstruction without any training.
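
A toy version of the dictionary-free pipeline, with a brute-force KD-tree standing in for the ANNF computation (every function name below is illustrative, not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.spatial import cKDTree

# Single-exemplar, patch-matching super resolution: LR patches from the input
# are matched to LR patches of the exemplar, and the corresponding HR exemplar
# patches are pasted (and averaged) into the output.

def patches(img, size):
    H, W = img.shape
    idx = [(i, j) for i in range(H - size + 1) for j in range(W - size + 1)]
    return idx, np.array([img[i:i+size, j:j+size].ravel() for i, j in idx])

def exemplar_sr(lr, exemplar_hr, scale=2, psize=5):
    exemplar_lr = zoom(exemplar_hr, 1.0 / scale, order=1)
    ex_idx, ex_feats = patches(exemplar_lr, psize)
    tree = cKDTree(ex_feats)                     # brute-force stand-in for ANNF
    in_idx, in_feats = patches(lr, psize)
    _, nn = tree.query(in_feats)
    out = np.zeros((lr.shape[0] * scale, lr.shape[1] * scale))
    cnt = np.zeros_like(out)
    hp = psize * scale
    for (i, j), k in zip(in_idx, nn):
        ei, ej = ex_idx[k]
        out[i*scale:i*scale+hp, j*scale:j*scale+hp] += \
            exemplar_hr[ei*scale:ei*scale+hp, ej*scale:ej*scale+hp]
        cnt[i*scale:i*scale+hp, j*scale:j*scale+hp] += 1.0
    return out / np.maximum(cnt, 1.0)

lr = np.random.rand(32, 32)
hr = exemplar_sr(lr, exemplar_hr=np.random.rand(64, 64))
print(hr.shape)   # (64, 64)
```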

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a super resolution (SR) method for synthetic images using FeatureMatch. Existing state-of-the-art super resolution methods are learning based: a pair of low-resolution and high-resolution dictionaries is trained, and this trained pair is used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper, we show that by using Approximate Nearest Neighbour Fields (ANNF) and a common source image, we can bypass the learning phase and use a single image as the dictionary. This reduces the dictionary from a collection obtained from hundreds of training images to a single image. We show that by modifying the latest developments in ANNF computation to suit super resolution, we can perform much faster and more accurate SR than existing techniques. To establish this claim, we compare our algorithm against various state-of-the-art algorithms and show that we are able to achieve better and faster reconstruction without any training phase.
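
Complementing the pipeline sketch above, the matching step itself can be made cheaper by searching over reduced patch features rather than raw patches; the PCA-plus-KD-tree sketch below is an illustrative stand-in for FeatureMatch-style reduced features, not the paper's implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

# Project raw patches onto a few principal components before building the
# nearest-neighbour index, so queries run on low-dimensional features.

def pca_project(patch_mat, n_components=8):
    mean = patch_mat.mean(axis=0)
    _, _, Vt = np.linalg.svd(patch_mat - mean, full_matrices=False)
    basis = Vt[:n_components]
    return (patch_mat - mean) @ basis.T, mean, basis

exemplar_patches = np.random.rand(5000, 49)       # 7x7 patches, flattened (synthetic)
query_patches = np.random.rand(800, 49)

ex_feats, mean, basis = pca_project(exemplar_patches)
tree = cKDTree(ex_feats)                          # index built on 8-D features
_, nn = tree.query((query_patches - mean) @ basis.T)
print(nn[:10])                                    # indices of matched exemplar patches
```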

Relevance:

30.00%

Publisher:

Abstract:

The bilateral filter is known to be quite effective in denoising images corrupted with small dosages of additive Gaussian noise. The denoising performance of the filter, however, is known to degrade quickly with increasing noise level. Several adaptations of the filter have been proposed in the literature to address this shortcoming, but often at a substantial computational overhead. In this paper, we report a simple pre-processing step that can substantially improve the denoising performance of the bilateral filter, at almost no additional cost. The modified filter is designed to be robust at large noise levels, but often tends to perform poorly below a certain noise threshold. To get the best of the original and the modified filter, we propose to combine them in a weighted fashion, where the weights are chosen to minimize (a surrogate of) the oracle mean-squared-error (MSE). The optimally-weighted filter is thus guaranteed to perform better than either of the component filters in terms of the MSE, at all noise levels. We also provide a fast algorithm for the weighted filtering. Visual and quantitative denoising results on standard test images are reported, demonstrating that the improvement over the original filter is significant both visually and in terms of PSNR. Moreover, the denoising performance of the optimally-weighted bilateral filter is competitive with that of the computation-intensive non-local means filter.
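
The weighting idea can be illustrated with a tiny oracle experiment: given two denoised estimates, the affine combination weight minimising the MSE against a reference has a closed form. The paper replaces the oracle reference with a surrogate MSE estimate; everything below is illustrative:

```python
import numpy as np

# Optimal affine combination f = w*f1 + (1-w)*f2 against a reference image.
# Minimising ||w*f1 + (1-w)*f2 - ref||^2 over w gives the closed form below.

def optimal_weight(f1, f2, reference):
    d = (f1 - f2).ravel()
    return np.dot((reference - f2).ravel(), d) / max(np.dot(d, d), 1e-12)

def combine(f1, f2, w):
    return w * f1 + (1.0 - w) * f2

# Toy demonstration with two synthetic "denoisers" of a noisy image.
clean = np.linspace(0, 1, 256).reshape(16, 16)
noisy = clean + 0.1 * np.random.randn(16, 16)
f1 = noisy * 0.8 + 0.1          # stand-ins for the original and modified
f2 = noisy * 0.5 + 0.25         # bilateral filter outputs
w = optimal_weight(f1, f2, clean)
mse = lambda a, b: np.mean((a - b) ** 2)
print(mse(combine(f1, f2, w), clean) <= min(mse(f1, clean), mse(f2, clean)))  # True
```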

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present a machine learning approach to measuring the visual quality of JPEG-coded images. The features for predicting the perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity and background luminance. Image quality assessment involves estimating the functional relationship between HVS features and subjective test scores. The quality of the compressed images is obtained without referring to their original images (a 'no reference' metric). Here, the problem of quality estimation is transformed into a classification problem and solved using the extreme learning machine (ELM) algorithm. In ELM, the input weights and the bias values are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for classification problems with an imbalance in the number of samples per quality class depends critically on the input weights and the bias values. Hence, we propose two schemes, namely the k-fold selection scheme (KS-ELM) and the real-coded genetic algorithm (RCGA-ELM), to select the input weights and the bias values such that the generalization performance of the classifier is maximized. Results indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalanced conditions for image quality assessment. The experimental results show that the estimated visual quality of the proposed RCGA-ELM emulates the mean opinion score very well. The experimental results are compared with the existing JPEG no-reference image quality metric and the full-reference structural similarity image quality metric.
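
The basic ELM recipe referred to above (random hidden weights and biases, analytic output weights) fits in a few lines; the sketch below omits the paper's KS-ELM/RCGA-ELM weight-selection schemes and uses synthetic data:

```python
import numpy as np

# Basic ELM classifier: hidden-layer input weights and biases are drawn at
# random, and only the output weights are computed analytically via the
# Moore-Penrose pseudoinverse.

def elm_train(X, Y_onehot, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y_onehot               # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy usage: 3 quality classes from 4-D HVS-like features (synthetic).
X = np.random.rand(300, 4)
labels = (X[:, 0] * 3).astype(int).clip(0, 2)
Y = np.eye(3)[labels]
W, b, beta = elm_train(X, Y)
print((elm_predict(X, W, b, beta) == labels).mean())  # training accuracy
```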

Relevance:

20.00%

Publisher:

Abstract:

This paper presents the proper computational approach for the estimation of strain energy release rates by the modified crack closure integral (MCCI), in particular for the estimation of the consistent nodal force vectors used in the MCCI expressions for quarter-point singular elements, wherein all the nodal force vectors participate in the computation of strain energy release rates by MCCI. A numerical example of a centre-crack tension specimen under uniform loading is presented to illustrate the approach.
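
For orientation, the simplest (four-noded element) form of the modified crack closure integral relates the energy release rates to the crack-tip nodal forces and the crack-face displacements one element behind the tip; the quarter-point-element expressions addressed in the paper involve additional consistent nodal force terms:

$$ G_{I} \approx \frac{F_{y}\,\Delta v}{2\,\Delta a}, \qquad G_{II} \approx \frac{F_{x}\,\Delta u}{2\,\Delta a} $$

where F_y and F_x are the nodal forces at the crack-tip node, Δv and Δu the relative crack-face displacements at the node pair a distance Δa behind the tip, and Δa the element length along the crack.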

Relevance:

20.00%

Publisher:

Abstract:

We consider the problem of deciding whether the output of a boolean circuit is determined by a partial assignment to its inputs. This problem is easily shown to be hard, i.e., co-NP-complete. However, many of the consequences of a partial input assignment may be determined in linear time, by iterating the following step: if we know the values of some inputs to a gate, we can deduce the values of some outputs of that gate. This process of iteratively deducing some of the consequences of a partial assignment is called propagation. This paper explores the parallel complexity of propagation, i.e., the complexity of determining whether the output of a given boolean circuit is determined by propagating a given partial input assignment. We give a complete classification of the problem into those cases that are P-complete and those that are unlikely to be P-complete.
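
The propagation step itself is easy to make concrete; the sketch below runs three-valued (known-0 / known-1 / unknown) deduction over a small gate list until no more values are forced (a simple fixed-point loop rather than the linear-time worklist version):

```python
# Gates are (output_wire, op, input_wires) tuples; unknown values are None.

def propagate(gates, assignment):
    """assignment: dict wire -> 0/1 for the partially assigned inputs."""
    values = dict(assignment)
    changed = True
    while changed:
        changed = False
        for out, op, ins in gates:
            if out in values:
                continue
            vals = [values.get(w) for w in ins]
            v = None
            if op == 'AND':
                v = 0 if 0 in vals else (1 if all(x == 1 for x in vals) else None)
            elif op == 'OR':
                v = 1 if 1 in vals else (0 if all(x == 0 for x in vals) else None)
            elif op == 'NOT':
                v = None if vals[0] is None else 1 - vals[0]
            if v is not None:          # value forced by the known inputs
                values[out] = v
                changed = True
    return values

# Example: out = (a AND b) OR (NOT a). Setting a = 0 forces out = 1 by propagation.
circuit = [('t1', 'AND', ['a', 'b']), ('t2', 'NOT', ['a']), ('out', 'OR', ['t1', 't2'])]
print(propagate(circuit, {'a': 0}).get('out'))   # 1
```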

Relevance:

20.00%

Publisher:

Abstract:

Remote sensing provides a lucid and effective means for crop coverage identification. Crop coverage identification is a very important technique, as it provides vital information on the type and extent of crops cultivated in a particular area. This information has immense potential in planning further cultivation activities and in the optimal usage of the available fertile land. As the frontiers of space technology advance, the knowledge derived from satellite data has also grown in sophistication. Further, image classification forms the core of the solution to the crop coverage identification problem. No single classifier has proved able to satisfactorily handle all the basic crop cover mapping problems of a cultivated region. We present in this paper the experimental results of multiple classification techniques for the problem of crop cover mapping of a cultivated region. A detailed comparison of algorithms inspired by the social behaviour of insects and a conventional statistical method for crop classification is presented: the Maximum Likelihood Classifier (MLC), Particle Swarm Optimisation (PSO) and Ant Colony Optimisation (ACO) techniques. A high-resolution satellite image has been used for the experiments.
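
Of the three classifiers compared, the statistical baseline is the easiest to sketch: a per-pixel Gaussian maximum likelihood classifier (illustrative, with synthetic spectral data; the PSO and ACO classifiers are not reproduced here):

```python
import numpy as np

# Per-pixel Gaussian maximum likelihood classification: each crop class is
# modelled by a multivariate Gaussian fitted to training pixels, and every
# pixel is assigned to the class with the highest log-likelihood.

def fit_class(pixels):
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])
    return mu, np.linalg.inv(cov), np.log(np.linalg.det(cov))

def classify(image_pixels, class_models):
    scores = []
    for mu, inv_cov, logdet in class_models:
        d = image_pixels - mu
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv_cov, d) + logdet))
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Toy usage with two spectral classes in 4 bands (synthetic reflectances).
rng = np.random.default_rng(1)
train_a = rng.normal(0.2, 0.05, (100, 4)); train_b = rng.normal(0.6, 0.05, (100, 4))
models = [fit_class(train_a), fit_class(train_b)]
pixels = np.vstack([rng.normal(0.2, 0.05, (50, 4)), rng.normal(0.6, 0.05, (50, 4))])
print((classify(pixels, models) == np.repeat([0, 1], 50)).mean())  # ~1.0
```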

Relevance:

20.00%

Publisher:

Abstract:

A technique has been developed for computing the spectral and angular (both zenith and azimuthal) distribution of the solar energy reaching the surface of the earth and any other plane in the atmosphere. Here the computer code LOWTRAN is used to obtain the atmospheric transmittances, in conjunction with two approximate procedures, one based on the Eddington method and the other on van de Hulst's adding method, for solving the equation of radiative transfer to obtain the diffuse radiation in the cloud-free situation. The aerosol scattering phase functions are approximated by Henyey-Greenstein functions. When the equation of radiative transfer is solved using the adding method, the azimuthal and zenith angle dependence of the scattered radiation is evaluated, whereas when the Eddington technique is utilized only the total downward flux of scattered solar radiation is obtained. Results for the diffuse and beam components of solar radiation received at the surface of the earth compare very well with those computed by other methods, such as more exact calculations using spherical harmonics. When atmospheric conditions corresponding to those prevailing locally at a tropical location (as in India) are used as inputs, the computed values agree closely with the measured values.
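
For reference, the Henyey-Greenstein phase function used to approximate aerosol scattering has the standard form (g being the asymmetry parameter):

$$ p_{\mathrm{HG}}(\cos\theta) = \frac{1}{4\pi}\,\frac{1 - g^{2}}{\left(1 + g^{2} - 2g\cos\theta\right)^{3/2}} $$

normalised so that its integral over all solid angles is unity; g = 0 gives isotropic scattering and g → 1 strongly forward-peaked scattering.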