148 results for Projection Methods

at Indian Institute of Science - Bangalore - India


Relevance:

30.00%

Publisher:

Abstract:

Lateral, or transaxial, truncation of cone-beam data can occur either due to the field-of-view limitation of the scanning apparatus or in region-of-interest tomography. In this paper, we suggest two new methods to handle lateral truncation in helical scan CT. It is seen that reconstruction with laterally truncated projection data, assuming it to be complete, gives severe artifacts which even penetrate into the field of view. A row-by-row data completion approach using linear prediction is introduced for helical scan truncated data. An extension of this technique, known as the windowed linear prediction approach, is also introduced. The efficacy of the two techniques is shown using simulations with standard phantoms. A quantitative image quality measure of the resulting reconstructed images is used to evaluate the performance of the proposed methods against an extension of a standard existing technique.
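
As a rough illustration of the row-by-row completion idea, the sketch below fits a linear-prediction (autoregressive) model to the measured samples of one truncated projection row and extrapolates the missing lateral samples. The AR order and the least-squares fit are assumptions for illustration; the paper's windowed variant is not reproduced.

```python
# A minimal sketch of row-by-row completion of a laterally truncated
# sinogram row using linear prediction. AR order is illustrative.
import numpy as np

def lp_extend(row, n_missing, order=10):
    """Extrapolate a 1-D projection row by n_missing samples
    using an AR (linear prediction) model fit by least squares."""
    # Build the least-squares system from the measured samples:
    # row[t] is predicted from the `order` samples preceding it.
    X = np.column_stack(
        [row[order - k - 1:len(row) - k - 1] for k in range(order)])
    y = row[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    ext = list(row)
    for _ in range(n_missing):
        # Predict the next sample from the most recent `order` samples.
        ext.append(np.dot(a, ext[-1:-order - 1:-1]))
    return np.asarray(ext)
```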

Relevance:

30.00%

Publisher:

Abstract:

A method to reliably extract object profiles, even with height discontinuities (which lead to 2nπ phase jumps), is proposed. This method uses Fourier transform profilometry to extract the wrapped phase, and an additional image, formed by illuminating the object of interest with a novel gray-coded pattern, for phase unwrapping. Simulation results suggest that the proposed approach not only retains the advantages of the original method, but also significantly enhances its performance. The fundamental advantage of this method stems from the fact that both the extraction of the wrapped phase and its unwrapping are done with grayscale images. Hence, unlike methods that use colors, the proposed method does not demand a color CCD camera and is ideal for profiling objects with multiple colors.
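
A minimal sketch of the Fourier-transform-profilometry step that recovers the wrapped phase from one row of a single fringe image; the carrier-frequency location and filter half-width are hypothetical parameters, and the gray-code unwrapping stage is not shown.

```python
# Wrapped-phase extraction by isolating the +1 spectral order of the
# fringe pattern. Filter placement here is an illustrative assumption.
import numpy as np

def wrapped_phase(fringe_row, f_carrier, half_width):
    """Extract the wrapped phase of one image row by band-pass
    filtering around the fringe carrier frequency."""
    F = np.fft.fft(fringe_row)
    mask = np.zeros(len(F))
    mask[f_carrier - half_width:f_carrier + half_width] = 1.0
    analytic = np.fft.ifft(F * mask)
    # The angle of the filtered analytic signal is the wrapped phase.
    return np.angle(analytic)
```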

Relevance:

30.00%

Publisher:

Abstract:

We describe here two non-interferometric methods for estimating the phase of wavefronts transmitted through refracting objects. The phase of the wavefronts obtained is used to reconstruct either the refractive index distribution of the objects or their contours. Refraction-corrected reconstructions are obtained by the application of an iterative loop incorporating digital ray tracing for forward propagation and a modified filtered back projection (FBP) for reconstruction. The FBP is modified to take into account the non-straight path propagation of light through the object. When the iteration stagnates, the difference between the projection data and an estimate of it obtained by ray tracing through the final reconstruction is reconstructed using a diffraction tomography algorithm. The reconstruction so obtained, viewed as a correction term, is added to the estimate of the object from the loop to obtain an improved final refractive index reconstruction.
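
The iterative loop can be summarized by the following skeleton, in which trace_rays, modified_fbp, and dt_correct are placeholder callables standing in for the digital ray tracer, the modified filtered back projection, and the diffraction tomography correction; the simple residual-threshold stagnation test is an assumption.

```python
# Skeleton of the refraction-corrected reconstruction loop.
import numpy as np

def reconstruct(measured_phase, n_iter, trace_rays, modified_fbp, dt_correct):
    estimate = modified_fbp(measured_phase)           # initial guess
    for _ in range(n_iter):
        simulated = trace_rays(estimate)              # forward propagation
        residual = measured_phase - simulated         # projection mismatch
        if np.linalg.norm(residual) < 1e-6:           # stagnation test
            break
        estimate = estimate + modified_fbp(residual)  # corrective update
    # Final diffraction-tomography correction on the residual term.
    estimate = estimate + dt_correct(measured_phase - trace_rays(estimate))
    return estimate
```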

Relevance:

30.00%

Publisher:

Abstract:

In general, the objective of accurately encoding the input data and the objective of extracting good features to facilitate classification are not consistent with each other. As a result, good encoding methods may not be effective mechanisms for classification. In this paper, an earlier proposed unsupervised feature extraction mechanism for pattern classification is extended to obtain an invertible map. The method of bimodal projection-based features was inspired by the general class of methods called projection pursuit. The principle of projection pursuit concentrates on projections that discriminate between clusters rather than on faithful representations. The basic feature map obtained by the method of bimodal projections is extended to overcome this. The extended feature map is an embedding of the input space in the feature space. As a result, the inverse map exists, and hence the representation of the input space in the feature space is exact. This map can be naturally expressed as a feedforward neural network.
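
The paper's construction is not reproduced here, but the following sketch illustrates one generic way a projection-based feature map can become an embedding with an exact inverse: append the component of the input lying in the orthogonal complement of the projection directions.

```python
# A hedged illustration (not the authors' construction) of an
# invertible projection-based feature map.
import numpy as np

def embedding_map(X, W):
    """X: (n, d) inputs; W: (d, k) projection directions, k < d.
    Returns projection features plus the orthogonal-complement part."""
    Q, _ = np.linalg.qr(W)           # orthonormal basis, shape (d, k)
    feats = X @ Q                    # projection-based features
    comp = X - (X @ Q) @ Q.T         # component lost by the projection
    return feats, comp

def inverse_map(feats, comp, W):
    Q, _ = np.linalg.qr(W)
    return feats @ Q.T + comp        # exact reconstruction of X
```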

Relevance:

30.00%

Publisher:

Abstract:

Many downscaling techniques have been developed in the past few years for projecting station-scale hydrological variables from the large-scale atmospheric variables simulated by general circulation models (GCMs), in order to assess the hydrological impacts of climate change. This article compares the performance of three downscaling methods, viz. conditional random field (CRF), K-nearest neighbour (KNN) and support vector machine (SVM) methods, in downscaling precipitation in the Punjab region of India, which belongs to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool useful for its ability to generalize and capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries days similar to a given feature vector from the training data and classifies future days by random sampling from a weighted set of the K closest training examples. The models are applied to downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performance with respect to the reproduction of various statistics, such as dry and wet spell length distributions, daily rainfall distribution, and intersite correlations, is examined. It is found that the CRF and KNN models perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations. Output from the Canadian global climate model (CGCM3) for three scenarios, viz. A1B, A2, and B1, is used for the projection of future precipitation. The projections show a change in the probability density functions of daily rainfall amount and changes in the wet and dry spell distributions of daily precipitation. Copyright (C) 2011 John Wiley & Sons, Ltd.
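
A minimal sketch of the KNN analogue step described above: for a given day's large-scale feature vector, the K most similar training days are found and one of their observed rainfall values is sampled with distance-based weights. K and the inverse-distance weighting are illustrative assumptions.

```python
# KNN analogue downscaling of daily rainfall (illustrative sketch).
import numpy as np

def knn_downscale(query, train_feats, train_rain, k=10, rng=None):
    """query: (d,) predictor vector for one day;
    train_feats: (n, d) training predictors; train_rain: (n,) rainfall."""
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(train_feats - query, axis=1)  # distance to all days
    idx = np.argsort(d)[:k]                          # K closest analogues
    w = 1.0 / (d[idx] + 1e-12)                       # inverse-distance weights
    w /= w.sum()
    return train_rain[rng.choice(idx, p=w)]          # sampled analogue value
```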

Relevance:

30.00%

Publisher:

Abstract:

Sparse representation based classification (SRC) is one of the most successful methods developed in recent times for face recognition. Optimal projection for sparse representation based classification (OPSRC) [1] provides a dimensionality reduction map that is supposed to give optimum performance within the SRC framework. However, the computational complexity involved in this method is too high. Here, we propose a new projection technique using the data scatter matrix, which is computationally superior to the optimal projection method with comparable classification accuracy with respect to OPSRC. The performance of the proposed approach is benchmarked on various publicly available face databases.
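
The exact projection criterion is not given in the abstract; the sketch below shows the general pattern of a scatter-matrix-based projection, taking the leading eigenvectors of the total data scatter matrix as the dimensionality reduction map applied before SRC.

```python
# Scatter-matrix-based projection (illustrative, not the paper's
# exact criterion).
import numpy as np

def scatter_projection(X, dim):
    """X: (n, d) training data. Returns a (d, dim) projection matrix."""
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc                      # total data scatter matrix, (d, d)
    evals, evecs = np.linalg.eigh(S)   # eigenvalues in ascending order
    return evecs[:, -dim:]             # leading eigenvectors

# Usage sketch: Y = X @ scatter_projection(X, 100), then run SRC on Y.
```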

Relevance:

30.00%

Publisher:

Abstract:

A novel Projection Error Propagation-based Regularization (PEPR) method is proposed to improve the image quality in Electrical Impedance Tomography (EIT). The PEPR method defines the regularization parameter as a function of the projection error arising from the difference between experimental measurements and calculated data. The regularization parameter in the reconstruction algorithm is thus modified automatically according to the noise level in the measured data and the ill-posedness of the Hessian matrix. Resistivity imaging of practical phantoms is performed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm as well as with the Electrical Impedance and Diffuse Optical Reconstruction Software (EIDORS), both using PEPR. The effect of the PEPR method is also studied with phantoms of different configurations and with different current injection methods. All the resistivity images reconstructed with the PEPR method are compared with the single step regularization (STR) and Modified Levenberg Regularization (LMR) techniques. The results show that the PEPR technique reduces the projection error and solution error in each iteration, for both simulated and experimental data in both algorithms, and improves the reconstructed images with better contrast to noise ratio (CNR), percentage of contrast recovery (PCR), coefficient of contrast (COC) and diametric resistivity profile (DRP). (C) 2013 Elsevier Ltd. All rights reserved.
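
A minimal sketch of the PEPR idea under stated assumptions: the regularization parameter is computed from the current projection error, so it adapts automatically to the noise level. The quadratic scaling rule and the Gauss-Newton-style update below are illustrative, not the paper's exact formulation.

```python
# Projection-error-driven regularization (illustrative sketch).
import numpy as np

def pepr_lambda(v_measured, v_computed, scale=1.0):
    """Regularization parameter as a function of projection error."""
    return scale * np.linalg.norm(v_measured - v_computed) ** 2

def regularized_step(J, v_measured, v_computed, scale=1.0):
    """One regularized update: solve (J'J + lam*I) dx = J' r."""
    lam = pepr_lambda(v_measured, v_computed, scale)
    r = v_measured - v_computed
    H = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(H, J.T @ r)
```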

Relevance:

30.00%

Publisher:

Abstract:

We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial: workload varies widely, and the large number of system parameters rules out a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimal staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of a single-stage cost function, while adhering to constraints relating to queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving this problem. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels incrementally. Further, they involve a certain generalized smooth projection operator, which is essential to project the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter): a critical requirement in proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest. From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them suitable for adaptive labor staffing in real-life service systems.
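
A minimal sketch of one first-order SPSA iteration with a projection onto the feasible set, as a rough analogue of the primal descent step described above; the cost oracle, step sizes and project operator are placeholders, and the generalized smooth projection and the Lagrangian dual ascent are not shown.

```python
# One SPSA update with projected descent (illustrative sketch).
import numpy as np

def spsa_step(theta, cost, a, c, project, rng=None):
    """theta: parameter vector; cost: single-stage cost oracle;
    a, c: step sizes; project: projection onto the feasible set."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    g = (cost(project(theta + c * delta)) -
         cost(project(theta - c * delta))) / (2 * c * delta)  # gradient estimate
    return project(theta - a * g)                      # projected descent step
```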

Relevance:

20.00%

Publisher:

Abstract:

The development of techniques for scaling up classifiers so that they can be applied to problems with large datasets of training examples is one of the objectives of data mining. Recently, AdaBoost has become popular in the machine learning community thanks to its promising results across a variety of applications. However, training AdaBoost on large datasets is a major problem, especially when the dimensionality of the data is very high. This paper discusses the effect of high dimensionality on the training process of AdaBoost. Two preprocessing options to reduce dimensionality, namely principal component analysis and random projection, are briefly examined. Random projection, subject to a probabilistic length-preserving transformation, is explored further as a computationally light preprocessing step. The experimental results obtained demonstrate the effectiveness of the proposed training process for handling high-dimensional large datasets.
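
A minimal sketch of the random-projection preprocessing step: a scaled Gaussian random matrix approximately preserves pairwise distances (in the Johnson-Lindenstrauss sense), so the boosting stage can operate in a much lower dimension. The target dimension is an illustrative parameter.

```python
# Gaussian random projection as a light preprocessing step.
import numpy as np

def random_project(X, target_dim, rng=None):
    """X: (n, d) data. Returns (n, target_dim) projected data."""
    rng = rng or np.random.default_rng()
    d = X.shape[1]
    # Scaling by 1/sqrt(target_dim) preserves lengths in expectation.
    R = rng.standard_normal((d, target_dim)) / np.sqrt(target_dim)
    return X @ R

# Usage sketch: X_low = random_project(X, 50); train AdaBoost on X_low.
```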

Relevance:

20.00%

Publisher:

Abstract:

Interaction of tetrathiafulvalene (TTF) and tetracyanoethylene (TCNE) with few-layer graphene samples prepared by the exfoliation of graphite oxide (EG), conversion of nanodiamond (DG) and arc evaporation of graphite in hydrogen (HG) has been investigated by Raman spectroscopy to understand the role of the graphene surface. The position and full-width at half maximum of the Raman G-band are affected by interaction with TTF and TCNE, and the effect is greatest with EG and least with HG. The effect of TTF and TCNE on the 2D-band is also greatest with EG. The magnitude of the interaction with the donor/acceptor molecules varies in the same order as the surface areas of the graphenes. (C) 2009 Published by Elsevier B.V.

Relevance:

20.00%

Publisher:

Abstract:

Conformational preferences of thiocarbonohydrazide (H2NNHCSNHNH2) in its basic and N,N′-diprotonated forms are examined by calculating the barrier to internal rotation around the C-N bonds, using theoretical LCAO-MO (ab initio and semiempirical CNDO and EHT) methods. The calculated and experimental results are compared with each other and also with values for N,N′-dimethylthiourea, which is isoelectronic with thiocarbonohydrazide. The suitability of these methods for studying rotational isomerism seems suspect when lone-pair interactions are present.

Relevance:

20.00%

Publisher:

Abstract:

One difficulty in summarising biological survivorship data is that the hazard rates are often neither constant nor monotonically increasing or decreasing over the entire life span. The otherwise promising Weibull model does not work here. The paper demonstrates how bathtub-shaped quadratic models may be used in such a case. Further, sometimes, due to a paucity of data, actual lifetimes are not ascertainable. It is shown how a concept from queueing theory, namely first-in-first-out (FIFO), can be profitably used here. Another nonstandard situation considered is one in which the lifespan of the individual entity is too long compared to the duration of the experiment. This situation is dealt with by using ancillary information. In each case the methodology is illustrated with numerical examples.
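
A minimal sketch of a bathtub-shaped quadratic hazard model, with h(t) = a + bt + ct^2 and the survival function obtained by integrating the hazard; the parameter values below are illustrative only.

```python
# Quadratic (bathtub) hazard and its survival function.
import numpy as np

def hazard(t, a, b, c):
    """h(t) = a + b*t + c*t**2; with b < 0 < c this is bathtub-shaped."""
    return a + b * t + c * t ** 2

def survival(t, a, b, c):
    """S(t) = exp(-H(t)) with cumulative hazard
    H(t) = a*t + b*t**2/2 + c*t**3/3."""
    H = a * t + b * t ** 2 / 2 + c * t ** 3 / 3
    return np.exp(-H)

# Example: hazard high early and late in life, low in mid-life.
t = np.linspace(0.0, 10.0, 101)
S = survival(t, a=0.6, b=-0.2, c=0.02)
```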

Relevance:

20.00%

Publisher:

Abstract:

A comparison is made of the performance of a weather Doppler radar with a staggered pulse repetition time and a radar with a random (but known) phase. As a standard for this comparison, the specifications of the forthcoming next generation weather radar (NEXRAD) are used. A statistical analysis of the spectral moment estimates for the staggered scheme is developed, and a theoretical expression for the signal-to-noise ratio due to recohering-filtering-recohering for the random phase radar is obtained. Algorithms for the assignment of correct ranges to the pertinent spectral moments for both techniques are presented.
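
A hedged sketch of the staggered-PRT velocity idea: with pulse spacings T1 < T2, the difference of the autocorrelation phases at lags T1 and T2 behaves like a phase at the lag difference, which extends the unambiguous velocity interval. This is a textbook simplification, not the NEXRAD processing chain.

```python
# Staggered-PRT velocity estimate from two autocorrelation phases.
import numpy as np

def staggered_velocity(phi1, phi2, T1, T2, wavelength):
    """phi1, phi2: autocorrelation phases (rad) at lags T1, T2 (s),
    with T2 > T1; wavelength in metres. Returns velocity in m/s."""
    dphi = np.angle(np.exp(1j * (phi1 - phi2)))  # wrap to (-pi, pi]
    # Doppler relation v = wavelength * dphi / (4*pi*(T2 - T1)),
    # evaluated over the (shorter) lag difference.
    return wavelength * dphi / (4.0 * np.pi * (T2 - T1))
```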

Relevance:

20.00%

Publisher:

Abstract:

Non-stationary signal modeling is a well-addressed problem in the literature. Many methods have been proposed to model non-stationary signals, such as time-varying linear prediction and AM-FM modeling, the latter being more popular. Estimation techniques to determine the AM-FM components of a narrow-band signal, such as the Hilbert transform, DESA1, DESA2, the auditory processing approach, the ZC approach, etc., are prevalent, but their robustness to noise is not clearly addressed in the literature. This is critical for most practical applications, such as in communications. We explore the robustness of different AM-FM estimators in the presence of white Gaussian noise. We also propose three new methods for IF estimation based on non-uniform samples of the signal and multi-resolution analysis. Experimental results show that ZC-based methods give better results than popular methods such as DESA in both clean and noisy conditions.
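
A minimal sketch of two of the instantaneous-frequency (IF) estimators discussed above, assuming a sampled narrow-band signal x at rate fs: the Hilbert-transform (analytic signal) method and a simple zero-crossing estimator. DESA and the proposed non-uniform-sampling methods are not reproduced here.

```python
# Two IF estimators for a narrow-band signal (illustrative sketch).
import numpy as np
from scipy.signal import hilbert

def if_hilbert(x, fs):
    """IF from the derivative of the analytic-signal phase."""
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.diff(phase) * fs / (2 * np.pi)

def if_zero_crossing(x, fs):
    """IF from the spacing of successive zero crossings: half a
    period between crossings, so f = fs / (2 * spacing)."""
    zc = np.nonzero(np.diff(np.signbit(x).astype(np.int8)))[0]
    return fs / (2.0 * np.diff(zc))
```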