975 results for Deformable image registration
Abstract:
This paper describes a spatio-temporal registration approach for speech articulation data obtained from electromagnetic articulography (EMA) and real-time Magnetic Resonance Imaging (rtMRI). This is motivated by the potential for combining the complementary advantages of both types of data. The registration method is validated on EMA and rtMRI datasets obtained at different times, but using the same stimuli. The aligned corpus offers the advantages of high temporal resolution (from EMA) and a complete mid-sagittal view (from rtMRI). The co-registration also yields optimal placement of EMA sensors as articulatory landmarks on the magnetic resonance images, thus providing richer spatio-temporal information about articulatory dynamics.
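The temporal half of such an alignment is commonly handled with dynamic time warping (DTW); the sketch below is a minimal NumPy version, assuming 1-D articulatory trajectories (e.g., tongue-tip height) as the features to be aligned. The feature choice and the plain DTW variant are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of the temporal-alignment step: DTW between 1-D
# articulatory trajectories extracted from EMA and rtMRI recordings.
import numpy as np

def dtw_path(x, y):
    """Align two 1-D trajectories; return the list of matched index pairs."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from the end to recover the warping path.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# EMA is sampled much faster than rtMRI; DTW absorbs the rate mismatch.
ema = np.sin(np.linspace(0, 3, 400))   # e.g. tongue-tip height, fast sampling
mri = np.sin(np.linspace(0, 3, 46))    # the same gesture at video frame rate
pairs = dtw_path(ema, mri)
```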
Abstract:
A micro-newton static force sensor is presented here as a packaged product. The sensor, which is based on the mechanics of deformable objects, consists of a compliant mechanism that amplifies the displacement caused by the force to be measured. The output displacement, captured using a digital microscope and analyzed using image-processing techniques, is converted to force using a precalibrated force-displacement curve. Images are scanned in real time at 15 frames per second and sampled at around half the scanning frequency. The sensor was built, packaged, calibrated, and tested. Its simulated and measured stiffness values are 2.60 N/m and 2.57 N/m, respectively. The smallest force it can reliably measure in the presence of noise is about 2 μN over a range of 1.4 mN. The off-the-shelf digital microscope aside, all of its other components are purely mechanical; they are inexpensive and can be easily made using simple machines. Another highlight of the sensor is that its movable and delicate components are easily replaceable. The sensor can be used in aqueous environments because it does not rely on electric, magnetic, thermal, or any other fields. Currently, it can only measure static forces, or forces that vary at less than 1 Hz, because its response time and bandwidth are limited by the speed of imaging with a camera. With the universal serial bus (USB) connection of its digital microscope, a custom-developed graphical user interface (GUI), and related software, the sensor is fully developed as a readily usable product.
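A minimal sketch of the sensing chain, assuming a linear calibration curve built from the measured stiffness of 2.57 N/m quoted above; the calibration grid and the query displacement are illustrative.

```python
# Displacement read off the microscope image -> force, via a precalibrated
# force-displacement curve. Stiffness comes from the abstract; the rest is
# an illustrative stand-in for the actual calibration data.
import numpy as np

stiffness = 2.57                          # N/m, measured value
disp = np.linspace(0.0, 545e-6, 50)       # m, calibration displacements
force = stiffness * disp                  # N, linear precalibrated curve

def displacement_to_force(d_measured):
    """Interpolate the calibration curve at the observed displacement."""
    return np.interp(d_measured, disp, force)

# A 100-micrometre deflection maps to about 0.26 mN, inside the 1.4 mN range.
print(displacement_to_force(100e-6))
```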
Abstract:
This paper discusses an approach for river mapping and flood evaluation that aids multi-temporal time-series analysis of satellite images, using pixel spectral information for image classification and region-based segmentation to extract the water-covered region. Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images are analyzed in two stages: before the flood and during the flood. Water-region extraction uses spectral information for image classification and spatial information for image segmentation. Multi-temporal MODIS images from "normal" (non-flood) and flood periods are processed in two steps. In the first step, image classifiers such as artificial neural networks and gene expression programming separate the image pixels into water and non-water groups based on their spectral features. The classified image is then segmented using the spatial features of the water pixels to remove misclassified water regions. From the results obtained, we evaluate the performance of the method and conclude that the combination of image classification and region-based segmentation is an accurate and reliable approach for the extraction of water-covered regions.
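The two-step pipeline can be sketched as follows, with a simple spectral-index threshold standing in for the ANN/GEP classifiers and a connected-component area filter standing in for the region-based segmentation; both stand-ins and the parameter values are assumptions.

```python
# Step 1: per-pixel spectral classification into water / non-water.
# Step 2: spatial segmentation that drops small misclassified blobs.
import numpy as np
from scipy import ndimage

def extract_water(band_ratio, threshold=0.2, min_area=50):
    water = band_ratio > threshold                      # spectral step
    labels, n = ndimage.label(water)                    # spatial step
    areas = ndimage.sum(water, labels, index=np.arange(1, n + 1))
    # Keep only connected regions large enough to be genuine water bodies.
    return np.isin(labels, np.nonzero(areas >= min_area)[0] + 1)

mask = extract_water(np.random.rand(200, 200))  # synthetic stand-in for MODIS data
```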
Abstract:
Simulated boundary potential data for Electrical Impedance Tomography (EIT) are generated with a MATLAB-based EIT data generator, and resistivity reconstruction is evaluated with the Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software (EIDORS). Circular domains containing subdomains as inhomogeneities are defined in the MATLAB-based EIT data generator, and the boundary data are calculated by a constant-current simulation using the opposite current injection (OCI) method. The resistivity images reconstructed from different boundary data sets are analyzed with image parameters to evaluate the reconstruction.
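A small sketch of what the opposite current injection protocol amounts to, assuming a 16-electrode ring and a 1 mA drive current (both illustrative): each projection drives one diametrically opposite electrode pair while the remaining electrodes are measured.

```python
# Enumerate the OCI drive/measure patterns for a ring of electrodes.
n_elec = 16
current = 1e-3  # A, assumed drive amplitude

projections = []
for src in range(n_elec):
    sink = (src + n_elec // 2) % n_elec   # diametrically opposite electrode
    measure = [e for e in range(n_elec) if e not in (src, sink)]
    projections.append({"drive": (src, sink), "measure": measure})

# Each projection would be fed to the forward solver to compute boundary
# potentials, and the full data set passed to EIDORS for reconstruction.
```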
Abstract:
Magnetic Resonance Imaging (MRI) has been widely used in cancer treatment planning, taking advantage of the high resolution and high contrast it provides. The raw data collected during MRI can also be used to obtain temperature maps, and this has been explored for performing MR thermometry. This review article describes the methods used in MR thermometry, with an emphasis on reconstruction methods that can deliver these temperature maps in real time for a large region of interest. The article also proposes a prior-image constrained reconstruction method for temperature reconstruction in MR thermometry, and presents a systematic comparison with a state-of-the-art reconstruction method using ex-vivo tissue experiments.
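The workhorse of real-time MR thermometry is proton resonance frequency (PRF) shift imaging; the snippet below shows the standard phase-to-temperature conversion, with an illustrative field strength and echo time that are assumptions rather than the article's protocol.

```python
# PRF shift thermometry: a phase difference between two gradient-echo
# images maps linearly to a temperature change.
import numpy as np

GAMMA = 42.58e6      # Hz/T, proton gyromagnetic ratio
ALPHA = -0.01e-6     # /degC, PRF thermal coefficient (about -0.01 ppm/degC)
B0 = 3.0             # T, assumed field strength
TE = 10e-3           # s, assumed echo time

def phase_to_temperature(delta_phi):
    """Temperature change (degC) from inter-image phase difference (rad)."""
    return delta_phi / (2 * np.pi * GAMMA * ALPHA * B0 * TE)

dphi = np.full((64, 64), -0.1)      # synthetic phase-difference map
dT = phase_to_temperature(dphi)     # about +1.25 degC everywhere
```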
Abstract:
The linear stability analysis of a plane Couette flow of an Oldroyd-B viscoelastic fluid past a flexible solid medium is carried out to investigate the role of polymer addition in the stability behavior. The system consists of a viscoelastic fluid layer of thickness $R$, density $\rho$, viscosity $\eta$, relaxation time $\lambda$, and retardation time $\beta\lambda$ flowing past a linear elastic solid medium of thickness $HR$, density $\rho$, and shear modulus $G$. The emphasis is on the high-Reynolds-number wall-mode instability, which has recently been shown in experiments to destabilize the laminar flow of Newtonian fluids in soft-walled tubes and channels at a significantly lower Reynolds number than for flows in rigid conduits. For Newtonian fluids, linear stability studies have shown that the wall modes become unstable when the flow Reynolds number exceeds a certain critical value $Re_c$, which scales as $\Sigma^{3/4}$, where the Reynolds number is $Re = \rho V R/\eta$, $V$ is the top-plate velocity, and the dimensionless parameter $\Sigma = \rho G R^2/\eta^2$ characterizes the fluid-solid system. For high-Reynolds-number flow, the addition of polymer tends to decrease the critical Reynolds number relative to the Newtonian value, indicating a destabilizing role for fluid viscoelasticity. Numerical calculations show that the critical Reynolds number can be decreased by up to a factor of 10 by the addition of a small amount of polymer. The critical Reynolds number follows the same scaling $Re_c \sim \Sigma^{3/4}$ as the wall modes for a Newtonian fluid at very high Reynolds number. However, for moderate Reynolds number, there exists a narrow region in the $\beta$-$H$ parameter space, corresponding to very dilute polymer solutions ($0.9 \lesssim \beta < 1$) and thin solids ($H \lesssim 1.1$), in which the addition of polymer tends to increase the critical Reynolds number relative to the Newtonian value. Thus, the Reynolds number and polymer properties can be tailored to either increase or decrease the critical Reynolds number for unstable modes, providing an additional degree of control over the laminar-turbulent transition.
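For a quick numerical feel, the snippet below evaluates $\Sigma$ and the wall-mode estimate $Re_c \sim \Sigma^{3/4}$ for illustrative water-like fluid properties and an assumed soft wall; the $O(1)$ prefactor, which the scaling argument leaves unspecified, is taken as unity.

```python
# Evaluate the dimensionless fluid-solid parameter and the wall-mode scaling.
rho = 1000.0   # kg/m^3, fluid density (water-like, assumed)
eta = 1e-3     # Pa s, fluid viscosity (assumed)
G = 100.0      # Pa, shear modulus of the soft solid (assumed)
R = 0.5e-3     # m, fluid layer thickness (assumed)

sigma = rho * G * R**2 / eta**2   # Sigma = rho G R^2 / eta^2
re_c = sigma ** 0.75              # Re_c ~ Sigma^(3/4), up to an O(1) factor
print(f"Sigma = {sigma:.3g}, Re_c ~ {re_c:.3g}")
```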
Abstract:
Results from interface shear tests on sand-geosynthetic interfaces are examined in light of the surface roughness of the interacting geosynthetic material. Three different types of interface shear tests, carried out within a direct shear-test setup, are compared to understand the effect of parameters such as box fixity and symmetry on the interface shear characteristics. The formation of shear bands close to the interface is visualized during the tests, and the bands are analyzed using image-segmentation techniques in MATLAB. A woven geotextile with moderate roughness and a geomembrane with minimal roughness are used in the tests. The effects of the surface roughness of the geosynthetic material on the formation of shear bands, the movement of sand particles, and the interface shear parameters are studied and compared through visual observations, image analyses, and image-segmentation techniques.
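A minimal stand-in for the image-segmentation step (the paper uses MATLAB; this sketch uses NumPy): the shear band is localized by thresholding the intensity change between frames taken before and after shearing, with the threshold rule being an assumption rather than the paper's pipeline.

```python
# Flag pixels whose intensity change between two frames exceeds k standard
# deviations; the flagged band approximates the zone of particle movement.
import numpy as np

def shear_band_mask(img_before, img_after, k=2.0):
    diff = np.abs(img_after.astype(float) - img_before.astype(float))
    return diff > diff.mean() + k * diff.std()

rng = np.random.default_rng(0)
before = rng.random((256, 256))
after = before.copy()
after[120:136, :] = rng.random((16, 256))   # displaced band of sand pixels
band = shear_band_mask(before, after)
```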
Abstract:
To perform super resolution (SR) of low-resolution images, state-of-the-art methods learn a pair of low-resolution and high-resolution dictionaries from multiple images. These trained dictionaries are used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper, we propose using a single common image as the dictionary, in conjunction with approximate nearest neighbour fields (ANNF), to perform SR. By using a common source image, we are able to bypass the learning phase and reduce the dictionary from a collection of hundreds of images to a single image. By adapting recent developments in ANNF computation to suit super resolution, we are able to perform much faster and more accurate SR than existing techniques. To establish this claim, we compare the proposed algorithm against various state-of-the-art algorithms and show that we achieve better and faster reconstruction without any training.
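The dictionary-free idea can be sketched as follows, with exact k-NN standing in for the approximate nearest-neighbour fields; the patch size and the random stand-in images are assumptions.

```python
# Match each patch of the LR input against patches of a single common
# source image; each match indexes the co-located HR patch to paste in.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.neighbors import NearestNeighbors

def match_patches(lr_image, source_lr, patch=(5, 5)):
    """Return, for every LR patch, the index of its best match in the source."""
    queries = extract_patches_2d(lr_image, patch).reshape(-1, patch[0] * patch[1])
    keys = extract_patches_2d(source_lr, patch).reshape(-1, patch[0] * patch[1])
    nn = NearestNeighbors(n_neighbors=1).fit(keys)
    _, idx = nn.kneighbors(queries)
    return idx.ravel()   # use idx to pull the corresponding HR patches

rng = np.random.default_rng(1)
matches = match_patches(rng.random((32, 32)), rng.random((64, 64)))
```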
Abstract:
Representing images and videos in the form of compact codes has emerged as an important research interest in the vision community in the context of web-scale image/video search. The recently proposed Vector of Locally Aggregated Descriptors (VLAD) has been shown to outperform existing retrieval techniques while giving the desired compact representation. VLAD aggregates the local features of an image in the feature space. In this paper, we propose to represent the local features extracted from an image as sparse codes over an over-complete dictionary obtained by the K-SVD dictionary training algorithm. The proposed VLAD aggregates the residuals in the space of these sparse codes to obtain a compact representation for the image. Experiments are performed on the 'Holidays' database using SIFT features, and the performance of the proposed method is compared with the original VLAD. A 4% increase in mean average precision (mAP) indicates the better retrieval performance of the proposed sparse-coding-based VLAD.
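One plausible reading of the aggregation step, sketched below with orthogonal matching pursuit standing in for K-SVD's sparse-coding stage: each descriptor's reconstruction residual is accumulated against the atoms its sparse code activates. The dimensions, the OMP stand-in, and the exact residual weighting are assumptions.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def sparse_vlad(descriptors, dictionary, n_nonzero=5):
    """descriptors: (n, d); dictionary: (d, K) with unit-norm atoms."""
    d, K = dictionary.shape
    codes = orthogonal_mp(dictionary, descriptors.T,
                          n_nonzero_coefs=n_nonzero)   # (K, n) sparse codes
    vlad = np.zeros((K, d))
    for x, a in zip(descriptors, codes.T):
        residual = x - dictionary @ a                  # reconstruction residual
        for k in np.flatnonzero(a):                    # atoms used by x
            vlad[k] += a[k] * residual
    v = vlad.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))                # power normalisation
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(2)
D = rng.standard_normal((128, 256))
D /= np.linalg.norm(D, axis=0)
code = sparse_vlad(rng.standard_normal((500, 128)), D)
```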
B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy
Abstract:
An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and they are the shortest polynomial splines. Incorporation of the B-spline potential function into the maximum-a-posteriori reconstruction technique results in improved contrast, enhanced resolution, and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence, and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) technique and maximum-a-posteriori (MAP) reconstruction with a quadratic potential function shows its superiority over both. The B-spline MAP technique can find applications in several fluorescence microscopy modalities, such as selective plane illumination microscopy, localization microscopy, and STED.
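A schematic one-step-late MAP deconvolution loop of the kind the abstract builds on, with a quadratic smoothness potential written out explicitly; the paper's B-spline potential would replace the quadratic derivative term, whose exact form is not given here. The Gaussian PSF and all parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def map_osl(img, sigma=2.0, beta=0.02, n_iter=50, eps=1e-8):
    """One-step-late MAP iteration with a quadratic smoothness potential."""
    blur = lambda z: gaussian_filter(z, sigma)      # normalised PSF, H^T = H
    x = np.full_like(img, img.mean(), dtype=float)
    for _ in range(n_iter):
        ratio = blur(img / (blur(x) + eps))         # ML (Richardson-Lucy) term
        penalty = -2.0 * beta * laplace(x)          # dU/dx of the quadratic prior
        x = x * ratio / np.clip(1.0 + penalty, eps, None)
    return x

rng = np.random.default_rng(5)
truth = np.zeros((64, 64))
truth[30:34, 30:34] = 10.0                          # synthetic point-like object
observed = gaussian_filter(truth, 2.0) + 0.01 * rng.random((64, 64))
estimate = map_osl(observed)
```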
Abstract:
Consider $N$ points in $\mathbb{R}^d$ and $M$ local coordinate systems that are related through unknown rigid transforms. For each point, we are given (possibly noisy) measurements of its local coordinates in some of the coordinate systems. Alternatively, for each coordinate system, we observe the coordinates of a subset of the points. The problem of estimating the global coordinates of the $N$ points (up to a rigid transform) from such measurements comes up in distributed approaches to molecular conformation and sensor network localization, and also in computer vision and graphics. The least-squares formulation of this problem, although nonconvex, has a well-known closed-form solution when $M = 2$ (based on the singular value decomposition (SVD)). However, no closed-form solution is known for $M \geq 3$. In this paper, we demonstrate how the least-squares formulation can be relaxed into a convex program, namely, a semidefinite program (SDP). By setting up connections between the uniqueness of this SDP and results from rigidity theory, we prove conditions for exact and stable recovery for the SDP relaxation. In particular, we prove that the SDP relaxation can guarantee recovery under more adversarial conditions compared to earlier proposed spectral relaxations, and we derive error bounds for the registration error incurred by the SDP relaxation. We also present results of numerical experiments on simulated data to confirm the theoretical findings. We empirically demonstrate that (a) unlike the spectral relaxation, the relaxation gap is mostly zero for the SDP (i.e., we are able to solve the original nonconvex least-squares problem) up to a certain noise threshold, and (b) the SDP performs significantly better than spectral and manifold-optimization methods, particularly at large noise levels.
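The closed-form $M = 2$ solution mentioned above is the orthogonal Procrustes construction via the SVD; the sketch below recovers the rigid transform aligning one point set to another, as a reference point for the relaxation rather than the paper's code.

```python
# Kabsch/Procrustes: find R, t minimising ||R Q + t - P||_F for matched
# point sets P and Q (rows are points).
import numpy as np

def rigid_align(P, Q):
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - p0).T @ (Q - q0))   # cross-covariance SVD
    S = np.diag([1.0] * (P.shape[1] - 1) + [np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt                                    # proper rotation, det = +1
    return R, p0 - R @ q0

rng = np.random.default_rng(3)
Q = rng.standard_normal((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
P = Q @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_align(P, Q)            # recovers R_true and the translation
```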
Abstract:
In this paper, we propose a super resolution (SR) method for synthetic images using FeatureMatch. Existing state-of-the-art super resolution methods are learning-based: a pair of low-resolution and high-resolution dictionaries is trained, and this trained pair is used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper, we show that by using Approximate Nearest Neighbour Fields (ANNF) and a common source image, we can bypass the learning phase and use a single image as the dictionary, thus reducing the dictionary from a collection obtained from hundreds of training images to a single image. We show that by modifying the latest developments in ANNF computation to suit super resolution, we can perform much faster and more accurate SR than existing techniques. To establish this claim, we compare our algorithm against various state-of-the-art algorithms and show that we are able to achieve better and faster reconstruction without any training phase.
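The ANNF computation itself can be sketched as a bare-bones PatchMatch-style sweep (random initialization followed by left/top propagation); this stands in for the FeatureMatch machinery, and the patch size and iteration count are assumptions.

```python
# Approximate nearest-neighbour field: for each patch of A, keep the
# coordinates of its current best match in B, improving matches by
# propagating the (shifted) matches of already-visited neighbours.
import numpy as np

def patch_dist(A, B, ay, ax, by, bx, p):
    d = A[ay:ay + p, ax:ax + p] - B[by:by + p, bx:bx + p]
    return float((d * d).sum())

def annf(A, B, p=5, iters=2, seed=0):
    rng = np.random.default_rng(seed)
    H, W = A.shape[0] - p + 1, A.shape[1] - p + 1
    Hb, Wb = B.shape[0] - p + 1, B.shape[1] - p + 1
    nnf = np.stack([rng.integers(0, Hb, (H, W)),
                    rng.integers(0, Wb, (H, W))], axis=-1)
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                best = patch_dist(A, B, y, x, *nnf[y, x], p)
                for dy, dx in ((0, -1), (-1, 0)):   # left / top neighbours
                    yy, xx = y + dy, x + dx
                    if yy >= 0 and xx >= 0:
                        by = min(nnf[yy, xx, 0] - dy, Hb - 1)
                        bx = min(nnf[yy, xx, 1] - dx, Wb - 1)
                        d = patch_dist(A, B, y, x, by, bx, p)
                        if d < best:
                            best, nnf[y, x] = d, (by, bx)
    return nnf

rng = np.random.default_rng(6)
field = annf(rng.random((32, 32)), rng.random((64, 64)))
```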
Abstract:
Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging, and it becomes a key factor when imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of small processing cores, together with the adaptability of the image reconstruction algorithm to parallel processing (using multiple independent computing modules called threads), results in high temporal resolution. Moreover, the proposed quadratic-potential-based MAP algorithm effectively deconvolves the images while suppressing noise. The multi-node, multi-threaded GPU implementation under the Compute Unified Device Architecture (CUDA) executes the iterative image reconstruction algorithm approximately 200-fold faster (for large datasets) than existing CPU-based systems.
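A sketch of the GPU offload idea, using CuPy as a stand-in for the paper's hand-written CUDA kernels: because each voxel's multiplicative update is independent, a NumPy-style reconstruction loop ports almost verbatim to the device.

```python
import numpy as np
import cupy as cp
from cupyx.scipy.ndimage import gaussian_filter

observed = np.random.rand(16, 256, 256).astype(np.float32)  # synthetic 3D stack

y = cp.asarray(observed)                    # one host -> device copy
x = cp.full_like(y, float(y.mean()))
blur = lambda z: gaussian_filter(z, 2.0)    # PSF model, evaluated on the GPU
for _ in range(100):                        # every voxel updates in parallel
    x = x * blur(y / (blur(x) + 1e-6))      # multiplicative ML/MAP-style update
result = cp.asnumpy(x)                      # device -> host when done
```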
Abstract:
In big-data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea is to partition the training dataset into smaller clusters and learn local dictionaries for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary, by solving another dictionary learning problem on the atoms of the locally trained dictionaries. We refer to this as the split-and-merge algorithm, as sketched below. We show that the proposed algorithm is efficient in its memory usage and computational complexity, and performs on par with the standard learning strategy that operates on the entire data at once. As an application, we consider image denoising, and present a comparative analysis of our algorithm with standard learning techniques that use the entire database at a time, in terms of training and denoising performance. We observe that the split-and-merge algorithm yields a remarkable reduction in training time without significantly affecting denoising performance.
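A compact rendering of the split-and-merge strategy, using scikit-learn's mini-batch dictionary learner in place of the paper's trainer; the cluster count and atom counts are illustrative.

```python
# Split: cluster the data and learn a local dictionary per cluster.
# Merge: run dictionary learning again on the pooled local atoms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

def split_and_merge(X, n_clusters=4, local_atoms=32, global_atoms=64):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    local = []
    for c in range(n_clusters):
        dl = MiniBatchDictionaryLearning(n_components=local_atoms)
        local.append(dl.fit(X[labels == c]).components_)
    # Merge step: treat the pooled local atoms as a (small) training set.
    merged = MiniBatchDictionaryLearning(n_components=global_atoms)
    return merged.fit(np.vstack(local)).components_

rng = np.random.default_rng(4)
D_global = split_and_merge(rng.standard_normal((2000, 64)))
```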