127 results for VOLUME TOMOGRAPHY
Abstract:
We address the reconstruction problem in frequency-domain optical-coherence tomography (FDOCT) from under-sampled measurements within the framework of compressed sensing (CS). Specifically, we propose optimal sparsifying bases for accurate reconstruction by analyzing the backscattered signal model. Although one might expect Fourier bases to be optimal for the FDOCT reconstruction problem, it turns out that the optimal sparsifying bases are windowed cosine functions where the window is the magnitude spectrum of the laser source. Further, the windowed cosine bases can be phase locked, which allows one to obtain higher accuracy in reconstruction. We present experimental validations on real data. The findings reported in this Letter are useful for optimal dictionary design within the framework of CS-FDOCT. (C) 2012 Optical Society of America
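As an illustration of the kind of dictionary the Letter argues for, the sketch below builds windowed-cosine atoms in which an assumed Gaussian source magnitude spectrum multiplies cosines of increasing depth frequency; the Gaussian window, axis normalization, and all parameter values are illustrative assumptions, not the authors' design.

```python
import numpy as np

def windowed_cosine_dictionary(n_samples=512, n_atoms=128, sigma=0.2):
    """Atoms are cosines multiplied by an assumed Gaussian source magnitude spectrum."""
    k = np.linspace(-1.0, 1.0, n_samples)        # normalized wavenumber axis (assumption)
    window = np.exp(-k**2 / (2 * sigma**2))      # assumed source magnitude spectrum
    depths = np.arange(1, n_atoms + 1)           # candidate reflector depths (arbitrary units)
    D = np.stack([window * np.cos(np.pi * d * k) for d in depths], axis=1)
    return D / np.linalg.norm(D, axis=0)         # unit-norm columns

D = windowed_cosine_dictionary()
print(D.shape)   # (512, 128): one windowed-cosine atom per candidate depth
```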
Abstract:
The discrepancy between the X-ray and NMR structures of Mycobacterium tuberculosis peptidyl-tRNA hydrolase in relation to the functionally important plasticity of the molecule led to molecular dynamics simulations. The X-ray and the NMR studies along with the simulations indicated an inverse correlation between crowding and molecular volume. A detailed comparison of proteins for which both X-ray and NMR structures are available appears to confirm this correlation. In consonance with the reported results of investigations in cellular compartments and aqueous solution, the comparison indicates that crowding results in compaction of the molecule as well as a change in its shape, which could specifically involve regions of the molecule important in function. Crowding could thus influence the action of proteins through modulation of the functionally important plasticity of the molecule. [Selvaraj M, Ahmad R, Varshney U and Vijayan M 2012 Crowding, molecular volume and plasticity: An assessment involving crystallography, NMR and simulations. J. Biosci. 37 953-963] DOI 10.1007/s12038-012-9276-5
Abstract:
Surface electrodes must be switched for boundary data collection in electrical impedance tomography (EIT). Parallel digital data bits are generally required to operate the multiplexers used for electrode switching in EIT, and the more electrodes an EIT system has, the more digital data bits are needed. For a sixteen-electrode system, 16 parallel digital data bits are required to operate the multiplexers in the opposite or neighbouring current injection method. In this paper a common ground current injection is proposed for EIT and the resulting resistivity imaging is studied. The common ground method needs only two analog multiplexers, each of which needs only 4 digital data bits, and hence only 8 digital bits are required to switch the 16 surface electrodes. Results show that the USB-based data acquisition system sequentially generates the digital data required for the multiplexers operating in the common ground current injection method. The profile of the boundary data collected from a practical phantom shows that the multiplexers operate in the required sequence in the common ground current injection protocol. The voltage peaks obtained for all the inhomogeneity configurations are found at the correct positions in the boundary data matrix, which confirms the sequential operation of the multiplexers. Resistivity images reconstructed from the boundary data collected from the practical phantom with different configurations also show that the entire digital data generation module functions properly. The reconstructed images and their image parameters confirm that the boundary data are successfully acquired by the DAQ system, which in turn indicates sequential and proper operation of the multiplexers.
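To make the bit-count argument concrete, the hypothetical sketch below enumerates the 4-bit select words for two 16:1 analog multiplexers in a common-ground scheme (8 control bits in total); the electrode ordering and pairing are assumptions for illustration, not the authors' DAQ firmware.

```python
# Hypothetical sketch: 16 electrodes, one driven against a common ground while a
# second 16:1 multiplexer scans the voltage-measurement electrode. Each 16:1
# multiplexer needs a 4-bit select word, so only 8 control bits are needed in total.

def select_bits(channel):
    """4-bit select word (as a bit string) for a 16:1 analog multiplexer."""
    return format(channel, "04b")

n_electrodes = 16
control_words = []
for current_electrode in range(n_electrodes):
    for measure_electrode in range(n_electrodes):
        if measure_electrode == current_electrode:
            continue  # skip the driven electrode
        # 8-bit pattern: 4 bits for the current mux + 4 bits for the voltage mux
        control_words.append(select_bits(current_electrode) + select_bits(measure_electrode))

print(len(control_words), control_words[:3])   # 240 control words in total
```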
Abstract:
In this article, we investigate the performance of a volume integral equation code on the BlueGene/L system. The volume integral equation (VIE) is solved for homogeneous and inhomogeneous dielectric objects for radar cross section (RCS) calculation in a highly parallel environment. Pulse basis functions and the point matching technique are used to convert the volume integral equation into a set of simultaneous linear equations, which is solved using the parallel numerical library ScaLAPACK on IBM's distributed-memory supercomputer BlueGene/L with different numbers of processors to compare the speed-up and test the scalability of the code.
Abstract:
We address the problem of phase retrieval, which is frequently encountered in optical imaging. The measured quantity is the magnitude of the Fourier spectrum of a function (in optics, the function is also referred to as an object). The goal is to recover the object based on the magnitude measurements. In doing so, the standard assumptions are that the object is compactly supported and positive. In this paper, we consider objects that admit a sparse representation in some orthonormal basis. We develop a variant of the Fienup algorithm to incorporate the condition of sparsity and to successively estimate and refine the phase starting from the magnitude measurements. We show that the proposed iterative algorithm possesses Cauchy convergence properties. As far as the modality is concerned, we work with measurements obtained using a frequency-domain optical-coherence tomography experimental setup. The experimental results on real measured data show that the proposed technique exhibits good reconstruction performance even with fewer coefficients taken into account for reconstruction. It also suppresses the autocorrelation artifacts to a significant extent since it estimates the phase accurately.
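The sketch below shows a Fienup-style error-reduction loop with a hard-thresholding step enforcing sparsity, assuming a real, non-negative 1-D object with known support and the canonical basis; it illustrates the general idea rather than reproducing the authors' algorithm.

```python
import numpy as np

def sparse_fienup(magnitude, support, n_nonzero, n_iter=200, seed=0):
    """Error-reduction iterations with positivity, support and hard-thresholding steps."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape) * support
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = magnitude * np.exp(1j * np.angle(X))   # impose the measured magnitude
        x = np.real(np.fft.ifft(X))
        x = np.clip(x, 0.0, None) * support        # positivity and compact support
        x[np.argsort(np.abs(x))[:-n_nonzero]] = 0  # keep only the largest coefficients
    return x

# toy example: a 3-sparse, non-negative object supported on the first half of the grid
true = np.zeros(128); true[[5, 20, 43]] = [1.0, 0.5, 0.8]
support = np.zeros(128); support[:64] = 1.0
recovered = sparse_fienup(np.abs(np.fft.fft(true)), support, n_nonzero=3)
```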
Abstract:
The inverse problem in photoacoustic tomography (PAT) seeks to obtain the absorbed energy map from boundary pressure measurements, for which computationally intensive iterative algorithms exist. The computational challenge is heightened when the reconstruction is done using boundary data split into its frequency spectrum to improve source localization and the conditioning of the inverse problem. The key idea of this work is to modify the update equation so that the Jacobian and the perturbation in data are summed over all wave numbers, k, and inverted only once to recover the absorbed energy map. This leads to a considerable reduction in the overall computation time. The results obtained using simulated data demonstrate the efficiency of the proposed scheme without compromising the accuracy of reconstruction.
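One plausible reading of the modified update (our notation; the precise form is an assumption, not quoted from the paper) is a Gauss-Newton step whose normal equations are accumulated over wave numbers before a single inversion:

$$\Delta\mu \;=\; \Bigl(\sum_{k} J_k^{\top} J_k + \lambda I\Bigr)^{-1} \sum_{k} J_k^{\top}\,\delta p_k$$

where J_k is the Jacobian and δp_k the perturbation in the boundary pressure data at wave number k; accumulating the sums before the inversion means the costly inverse is formed only once per iteration rather than once per wave number.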
Abstract:
A methodology for measurement of planar liquid volume fraction in dense sprays using a combination of Planar Laser-Induced Fluorescence (PLIF) and Particle/Droplet Imaging Analysis (PDIA) is presented in this work. The PLIF images are corrected for loss of signal intensity due to laser sheet scattering, absorption and auto-absorption. The key aspect of this work pertains to simultaneously solving the equations involving the corrected PLIF signal and liquid volume fraction. From this, a quantitative estimate of the planar liquid volume fraction is obtained. The corrected PLIF signal and the corrected planar Mie scattering can also be used together to obtain the Sauter Mean Diameter (SMD) distribution by using data from the PDIA technique at a particular location for calibration. This methodology is applied to non-evaporating sprays of diesel and a more viscous pure plant oil at an injection pressure of 1000 bar and a gas pressure of 30 bar in a high pressure chamber. These two fuels are selected since their viscosities are very different, with consequently very different spray structures. The spatial distribution of liquid volume fraction and SMD is obtained for the two fuels. The proposed method is validated by comparing the liquid volume fraction obtained by the current method with data from the PDIA technique. (C) 2012 Elsevier Inc. All rights reserved.
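The calibration idea can be sketched under the ideal planar-droplet-sizing assumptions that LIF scales with droplet volume (d^3) and Mie scattering with surface area (d^2), so their ratio tracks SMD up to a constant fixed by one PDIA measurement; the scaling and values below are assumptions for illustration, not the paper's exact correction scheme.

```python
import numpy as np

def smd_map(lif, mie, smd_ref, ref_pixel):
    """Planar SMD estimate from corrected LIF and Mie images.

    Assumes LIF ~ sum(d^3) and Mie ~ sum(d^2) per pixel, so SMD = K * LIF / Mie,
    with K fixed by a single PDIA-measured reference value (ideal planar-droplet-
    sizing model, not the paper's exact scheme)."""
    ratio = lif / np.maximum(mie, 1e-12)
    K = smd_ref / ratio[ref_pixel]
    return K * ratio

# synthetic corrected images, with a hypothetical PDIA calibration of 25 um at the centre
lif = np.random.rand(64, 64) + 0.5
mie = np.random.rand(64, 64) + 0.5
smd = smd_map(lif, mie, smd_ref=25.0, ref_pixel=(32, 32))
```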
Abstract:
Purpose: To develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. Methods: The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is deployed here within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. Results: The results indicate that the performance of the proposed LSQR-type and MRM-based methods is similar in terms of reconstructed image quality, and superior to that of the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. Conclusions: The LSQR-type method overcomes the inherent limitation of the computationally expensive MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making this method more suitable for real-time deployment. (C) 2013 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4792459]
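A minimal sketch of the general strategy, not the authors' code: solve the damped least-squares problem with LSQR for a candidate regularization parameter and let a Nelder-Mead simplex search choose the parameter. The selection rule used here (discrepancy principle on a toy problem) is a stand-in assumption, not the criterion used in the paper.

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

def reconstruct(J, y, lam):
    """Damped least squares min ||J x - y||^2 + lam^2 ||x||^2, solved with LSQR."""
    return lsqr(J, y, damp=lam)[0]

def criterion(log_lam, J, y, noise_norm):
    # Stand-in selection rule (discrepancy principle) -- NOT the paper's criterion.
    lam = 10.0 ** float(log_lam[0])
    x = reconstruct(J, y, lam)
    return abs(np.linalg.norm(J @ x - y) - noise_norm)

# toy ill-posed problem
rng = np.random.default_rng(1)
J = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[::10] = 1.0
noise = 0.05 * rng.standard_normal(60)
y = J @ x_true + noise

res = minimize(criterion, x0=[-1.0], args=(J, y, np.linalg.norm(noise)),
               method="Nelder-Mead")
x_rec = reconstruct(J, y, 10.0 ** float(res.x[0]))
```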
Abstract:
The classical Chapman-Enskog expansion is performed for the recently proposed finite-volume formulation of lattice Boltzmann equation (LBE) method [D.V. Patil, K.N. Lakshmisha, Finite volume TVD formulation of lattice Boltzmann simulation on unstructured mesh, J. Comput. Phys. 228 (2009) 5262-5279]. First, a modified partial differential equation is derived from a numerical approximation of the discrete Boltzmann equation. Then, the multi-scale, small parameter expansion is followed to recover the continuity and the Navier-Stokes (NS) equations with additional error terms. The expression for the apparent value of the kinematic viscosity is derived for the finite-volume formulation under certain assumptions. The attenuation of a shear wave, Taylor-Green vortex flow and driven channel flow are studied to analyze the apparent viscosity relation.
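The way an apparent viscosity is typically extracted from such test cases can be sketched by fitting the expected exponential decay u(t) = u0 exp(-nu k^2 t) of a sinusoidal shear-wave amplitude; the sketch below uses synthetic data and is not the paper's finite-volume LBE solver.

```python
import numpy as np

def apparent_viscosity(times, amplitudes, k):
    """Fit u(t) = u0 * exp(-nu * k**2 * t) and return the apparent viscosity nu."""
    slope = np.polyfit(times, np.log(amplitudes), 1)[0]
    return -slope / k**2

# synthetic decay data for a shear wave of wavenumber k (illustrative values)
k, nu_true = 2 * np.pi / 64.0, 0.01
t = np.linspace(0.0, 2000.0, 50)
u = 0.1 * np.exp(-nu_true * k**2 * t)
print(apparent_viscosity(t, u, k))   # recovers ~0.01
```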
Abstract:
The standard method of quantum state tomography (QST) relies on the measurement of a set of noncommuting observables, realized in a series of independent experiments. Ancilla-assisted QST (AAQST), proposed by Nieuwenhuizen and co-workers [Phys. Rev. Lett. 92, 120402 (2004)], greatly reduces the number of independent measurements by exploiting an ancilla register in a known initial state. Under suitable conditions AAQST allows mapping out the density matrix of an input register in a single experiment. Here we describe methods for explicit construction of AAQST experiments in multiqubit registers. We also report nuclear magnetic resonance studies on AAQST of (i) a two-qubit input register using a one-qubit ancilla in an isotropic liquid-state system and (ii) a three-qubit input register using a two-qubit ancilla register in a partially oriented system. The experimental results confirm the effectiveness of AAQST in such multiqubit registers.
Abstract:
A new approach that can easily incorporate any generic penalty function into the diffuse optical tomographic image reconstruction is introduced to show the utility of nonquadratic penalty functions. The penalty functions used include quadratic (l(2)), absolute (l(1)), Cauchy, and Geman-McClure. The regularization parameter in each of these cases was obtained automatically by using the generalized cross-validation method. The reconstruction results were systematically compared with each other using quantitative metrics such as relative error and Pearson correlation. The reconstruction results indicate that, while the quadratic penalty may be able to provide better separation between two closely spaced targets, its contrast recovery capability is limited, and the sparseness-promoting penalties, such as l(1), Cauchy, and Geman-McClure, have better utility in reconstructing high-contrast and complex-shaped targets, with the Geman-McClure penalty performing best. (C) 2013 Optical Society of America
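For reference, minimal definitions of the four penalty functions in a commonly used parameterization (the scale parameter sigma and the exact normalizations are assumptions; conventions vary in the literature):

```python
import numpy as np

def l2(x):                        return x**2
def l1(x):                        return np.abs(x)
def cauchy(x, sigma=1.0):         return np.log1p((x / sigma) ** 2)
def geman_mcclure(x, sigma=1.0):  return x**2 / (sigma**2 + x**2)
```

The quadratic penalty grows without bound and so heavily penalizes large coefficients, whereas the Cauchy and Geman-McClure penalties saturate for large arguments, which is what allows them to favor sparse, high-contrast solutions.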
Abstract:
Typical image-guided diffuse optical tomographic image reconstruction procedures reduce the number of optical parameters to be reconstructed to the number of distinct regions identified in the structural information provided by the traditional imaging modality. This makes the image reconstruction problem less ill-posed compared to traditional underdetermined cases. Still, the methods deployed in this case are the same as those used for traditional diffuse optical image reconstruction, which involve a regularization term as well as computation of the Jacobian. A gradient-free Nelder-Mead simplex method is proposed here to perform the image reconstruction procedure and is shown to provide solutions that closely match those obtained using established methods, even with highly noisy data. The proposed method also has the distinct advantage of being more efficient owing to being regularization free, involving only repeated forward calculations. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
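A minimal sketch of the regularization-free, gradient-free idea, with a toy linear forward model standing in for the diffusion-based forward solver; the region count, forward model, and misfit below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the forward solver: boundary data as a linear map of
# region-wise absorption values (the real solver is a diffusion-model PDE).
rng = np.random.default_rng(0)
n_regions, n_measurements = 3, 40
A = rng.standard_normal((n_measurements, n_regions))
mu_true = np.array([0.01, 0.02, 0.015])
data = A @ mu_true + 0.001 * rng.standard_normal(n_measurements)

def misfit(mu):
    """Data-model misfit; only repeated forward calculations, no Jacobian or regularization."""
    return np.sum((A @ mu - data) ** 2)

res = minimize(misfit, x0=np.full(n_regions, 0.01), method="Nelder-Mead")
print(res.x)   # region-wise estimates close to mu_true
```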
Abstract:
Nearly pollution-free solutions of the Helmholtz equation for k-values corresponding to visible light are demonstrated and verified through experimentally measured forward scattered intensity from an optical fiber. Numerically accurate solutions are, in particular, obtained through a novel reformulation of the H-1 optimal Petrov-Galerkin weak form of the Helmholtz equation. Specifically, within a globally smooth polynomial reproducing framework, the compact and smooth test functions are so designed that their normal derivatives are zero everywhere on the local boundaries of their compact supports. This circumvents the need for a priori knowledge of the true solution on the support boundary and relieves the weak form of any jump boundary terms. For numerical demonstration of the above formulation, we used a multimode optical fiber in an index matching liquid as the object. The scattered intensity and its normal derivative are computed from the scattered field obtained by solving the Helmholtz equation, using the new formulation and the conventional finite element method. By comparing the results with the experimentally measured scattered intensity, the stability of the solution through the new formulation is demonstrated and its closeness to the experimental measurements verified.
Abstract:
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov minimization scheme is developed for photoacoustic imaging. This approach is based on the least-squares QR (LSQR) decomposition, which is a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood-vessel phantom, where the initial pressure is exactly known for quantitative comparison. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
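The dimensionality reduction referred to here can be sketched with Golub-Kahan (Lanczos) bidiagonalization: a few bidiagonalization steps give a small projected problem on which Tikhonov solutions for any candidate regularization parameter are cheap to evaluate. The toy sketch below is illustrative only; the matrix sizes, number of steps, and the way the parameter would then be chosen are assumptions, not the authors' implementation.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization of A started from b."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for i in range(k):
        r = A.T @ U[:, i] - (B[i, i - 1] * V[:, i - 1] if i > 0 else 0.0)
        alpha = np.linalg.norm(r)
        V[:, i] = r / alpha
        p = A @ V[:, i] - alpha * U[:, i]
        beta_next = np.linalg.norm(p)
        U[:, i + 1] = p / beta_next
        B[i, i], B[i + 1, i] = alpha, beta_next
    return U, B, V, beta

def tikhonov_from_bidiag(B, V, beta, lam):
    """Tikhonov solution for a given lambda via the small projected least-squares problem."""
    k = B.shape[1]
    M = np.vstack([B, lam * np.eye(k)])
    rhs = np.concatenate([beta * np.eye(B.shape[0])[:, 0], np.zeros(k)])
    y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return V @ y

# toy usage: one bidiagonalization, then cheap solutions for any candidate lambda
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 120)); b = A @ rng.standard_normal(120)
U, B, V, beta = golub_kahan(A, b, k=20)
x_lam = tikhonov_from_bidiag(B, V, beta, lam=0.1)
```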