23 results for Paleoenvironmental and paleodietary reconstruction
at Indian Institute of Science - Bangalore - India
Abstract:
Spatial resolution in photoacoustic and thermoacoustic tomography is limited by the ultrasound transducer (detector) bandwidth. For a circular scanning geometry, the axial (radial) resolution is not affected by the detector aperture, but the tangential (lateral) resolution depends strongly on the aperture size and is also spatially varying (depending on the location relative to the scanning center). Several approaches have been reported to counter this problem, either by physically attaching a negative acoustic lens in front of the nonfocused transducer or by using virtual point detectors. Here, we have implemented a modified delay-and-sum reconstruction method that takes the large aperture of the detector into account, leading to more than fivefold improvement in the tangential resolution in photoacoustic (and thermoacoustic) tomography. Three different types of numerical phantoms were used to validate our reconstruction method. It is also shown that the modified algorithm preserves the shape of the reconstructed objects. (C) 2014 Optical Society of America
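Since the abstract describes a delay-and-sum variant that accounts for the detector's finite aperture, here is a minimal 2-D sketch of that idea: each detector is modelled as a row of point-like sub-elements across its face, and every sub-element back-projects the recorded signal with its own delay. The function name, the sub-element count, and the flat-face geometry are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def aperture_das(signals, det_centers, det_normals, pixels,
                 fs, c=1500.0, aperture=6e-3, n_sub=11):
    """Aperture-aware delay-and-sum (2-D sketch): each detector face is
    split into n_sub point-like sub-elements, and every sub-element
    back-projects the recorded signal with its own time delay."""
    n_det, n_samp = signals.shape
    image = np.zeros(len(pixels))
    offsets = np.linspace(-aperture / 2, aperture / 2, n_sub)
    for d in range(n_det):
        # tangent to the detector face, perpendicular to its normal
        tangent = np.array([-det_normals[d, 1], det_normals[d, 0]])
        for off in offsets:
            sub_pos = det_centers[d] + off * tangent
            dist = np.linalg.norm(pixels - sub_pos, axis=1)
            idx = np.round(dist / c * fs).astype(int)  # delay in samples
            valid = idx < n_samp
            image[valid] += signals[d, idx[valid]]
    return image / (n_det * n_sub)
```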
Abstract:
Non-uniform sampling of a signal is formulated as an optimization problem that minimizes the signal reconstruction error. Dynamic programming (DP) is used to solve this problem efficiently for a finite-duration signal. Further, the optimum samples are quantized to realize a speech coder. The quantizer and the DP-based optimum search for non-uniform samples (DP-NUS) can be combined in a closed-loop manner, which provides a distinct advantage over the open-loop formulation. The DP-NUS formulation provides useful control over the trade-off between bitrate and performance (reconstruction error). It is shown that a 5-10 dB SNR improvement is possible using DP-NUS compared to the extrema-sampling approach. In addition, the closed-loop DP-NUS gives a 4-5 dB improvement in reconstruction error.
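As a rough illustration of the DP search over non-uniform sample locations, the sketch below picks k sample indices that minimise the squared error of a linear-interpolation reconstruction. The paper's actual signal model, cost function and closed-loop quantizer are not reproduced; the brute-force segment-cost table is for clarity only.

```python
import numpy as np

def dp_nonuniform_samples(x, k):
    """Pick k sample indices (first and last forced) minimising the
    squared error of a linear-interpolation reconstruction."""
    n = len(x)
    # seg_err[j, i]: error of replacing x[j..i] by a straight line
    seg_err = np.full((n, n), np.inf)
    for j in range(n):
        for i in range(j + 1, n):
            t = np.linspace(0.0, 1.0, i - j + 1)
            line = x[j] + t * (x[i] - x[j])
            seg_err[j, i] = np.sum((x[j:i + 1] - line) ** 2)
    cost = np.full((n, k), np.inf)
    prev = np.zeros((n, k), dtype=int)
    cost[0, 0] = 0.0                       # first sample fixed at index 0
    for m in range(1, k):
        for i in range(1, n):
            cand = cost[:i, m - 1] + seg_err[:i, i]
            j = int(np.argmin(cand))
            cost[i, m], prev[i, m] = cand[j], j
    # backtrack from the last sample
    idx, i = [n - 1], n - 1
    for m in range(k - 1, 0, -1):
        i = prev[i, m]
        idx.append(i)
    return idx[::-1]
```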
Abstract:
We consider the problem of computing an approximate minimum cycle basis of an undirected edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0,1} incidence vector is associated with each cycle, and the vector space over F_2 generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k ≥ 1, we give (2k-1)-approximation algorithms with expected running time O(kmn^(1+2/k) + mn^((1+1/k)(ω-1))) and deterministic running time O(n^(3+2/k)), respectively. Here ω is the best exponent of matrix multiplication; it is presently known that ω < 2.376. Both algorithms are o(m^ω) for dense graphs. This is the first time that any algorithm which computes sparse cycle bases with a guarantee drops below the Θ(m^ω) bound. We also present a 2-approximation algorithm with O(m^ω √(n log n)) expected running time, a linear-time 2-approximation algorithm for planar graphs, and an O(n^3)-time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
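For background on what a cycle basis is, the sketch below builds the classical fundamental cycle basis from a spanning tree using networkx. It is always a valid basis of the cycle space (with m - n + 1 cycles) but generally not of minimum weight, which is what the paper's approximation algorithms target; using a minimum spanning tree here is just a common low-weight heuristic.

```python
import networkx as nx

def fundamental_cycle_basis(G):
    """Cycle basis from a spanning tree: each non-tree edge (u, v)
    closes exactly one cycle with the unique tree path from u to v.
    Always a valid basis, but not minimum weight in general."""
    T = nx.minimum_spanning_tree(G)          # any spanning tree works
    basis = []
    for u, v in G.edges():
        if not T.has_edge(u, v):
            path = nx.shortest_path(T, u, v)  # unique tree path
            basis.append(path + [u])          # close the cycle
    return basis                              # m - n + 1 cycles
```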
Abstract:
In this paper, we present a wavelet-based approach to solve the non-linear perturbation equation encountered in optical tomography. A particularly suitable data-gathering geometry is used to gather a data set consisting of differential changes in intensity owing to the presence of the inhomogeneous regions. With this scheme, the unknown image, the data, as well as the weight matrix are all represented by wavelet expansions, thus yielding the representation of the original non-linear perturbation equation in the wavelet domain. The advantage of using the non-linear perturbation equation is that there is no need to recompute the derivatives during the entire reconstruction process: once the derivatives are computed, they are transformed into the wavelet domain. The purpose of going to the wavelet domain is that it has inherent localization and de-noising properties. The use of approximation coefficients, without the detail coefficients, is ideally suited for diffuse optical tomographic reconstructions, as the diffusion equation removes most of the high-frequency information and the reconstruction appears low-pass filtered. We demonstrate through numerical simulations that, by solving for merely the approximation coefficients, one can reconstruct an image with the same information content as the reconstruction from a non-waveletized procedure. In addition, we demonstrate better noise tolerance and much-reduced computation time for reconstructions with this approach.
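A minimal sketch of the wavelet-domain formulation, assuming a single-level orthonormal Haar transform and a linearised perturbation equation of the form W dx = dI: the weight matrix, the data and the unknown are all transformed, and only the approximation coefficients are solved for. Matrix sizes (even dimensions, n_keep no larger than either side) and names are illustrative.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal single-level Haar analysis matrix (n even):
    first n/2 rows give approximation, last n/2 give detail."""
    h = np.zeros((n, n))
    for i in range(n // 2):
        h[i, 2 * i] = h[i, 2 * i + 1] = 1 / np.sqrt(2)   # average
        h[n // 2 + i, 2 * i] = 1 / np.sqrt(2)            # difference
        h[n // 2 + i, 2 * i + 1] = -1 / np.sqrt(2)
    return h

def wavelet_domain_solve(W, dI, n_keep):
    """Solve W @ dx = dI after mapping data, unknown and weight matrix
    to the Haar domain and keeping only approximation coefficients."""
    Hm = haar_matrix(W.shape[0])    # transform on the data side
    Hn = haar_matrix(W.shape[1])    # transform on the image side
    Wt = Hm @ W @ Hn.T              # weight matrix in the wavelet domain
    dIt = Hm @ dI
    # restrict to the first n_keep approximation coefficients
    x_a, *_ = np.linalg.lstsq(Wt[:n_keep, :n_keep], dIt[:n_keep],
                              rcond=None)
    x_t = np.zeros(W.shape[1])
    x_t[:n_keep] = x_a
    return Hn.T @ x_t               # back to the image domain
```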
Abstract:
We consider the problem of computing an approximate minimum cycle basis of an undirected non-negative edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0,1} incidence vector is associated with each cycle, and the vector space over F_2 generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. Although in most such applications any cycle basis can be used, a low-weight cycle basis often translates to better performance and/or numerical stability. Although the problem can be solved exactly in polynomial time, we design approximation algorithms, since the exact algorithms may be too expensive for some practical applications. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k ≥ 1, we give (2k-1)-approximation algorithms with expected running time O(kmn^(1+2/k) + mn^((1+1/k)(ω-1))) and deterministic running time O(n^(3+2/k)), respectively. Here ω is the best exponent of matrix multiplication; it is presently known that ω < 2.376. Both algorithms are o(m^ω) for dense graphs. This is the first time that any algorithm which computes sparse cycle bases with a guarantee drops below the Θ(m^ω) bound. We also present a 2-approximation algorithm with expected running time O(m^ω √(n log n)), a linear-time 2-approximation algorithm for planar graphs, and an O(n^3)-time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
Abstract:
Compressive sensing (CS) is a new sensing paradigm which permits sampling of a signal at its intrinsic information rate, which can be much lower than the Nyquist rate, while guaranteeing good-quality reconstruction for signals sparse in a linear transform domain. We explore the application of the CS formulation to music signals. Since music signals comprise both tonal and transient components, we examine several transforms, such as the discrete cosine transform (DCT), the discrete wavelet transform (DWT), the Fourier basis, and also non-orthogonal warped transforms, to explore the effectiveness of CS theory and the reconstruction algorithms. We show that for a given sparsity level, the DCT, overcomplete, and warped Fourier dictionaries result in better reconstruction, with the warped Fourier dictionary giving perceptually better reconstruction. "MUSHRA" test results show that a moderate-quality reconstruction is possible with about half the Nyquist number of samples.
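To make the CS recovery pipeline concrete, here is a toy sketch with a DCT-sparse frame: random Gaussian measurements at half the Nyquist count are inverted with a small orthogonal-matching-pursuit routine. The synthetic signal, sparsity level and matrix sizes are stand-ins, not the paper's music data or its particular reconstruction algorithms.

```python
import numpy as np
from scipy.fft import idct

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse solve of y = A x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x[support] = coef
    return x

# toy CS of a DCT-sparse frame
rng = np.random.default_rng(0)
n, m, k = 256, 128, 10                          # length, measurements, sparsity
Psi = idct(np.eye(n), norm='ortho', axis=0)     # DCT synthesis dictionary
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
signal = Psi @ coeffs
y = Phi @ signal                                # sub-Nyquist measurements
x_hat = omp(Phi @ Psi, y, k)                    # recover DCT coefficients
print(np.linalg.norm(Psi @ x_hat - signal))     # near-zero recovery error
```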
Abstract:
Major emphasis in compressed sensing (CS) research has been on acquiring a sub-Nyquist number of samples of a signal that has a sparse representation in some tight frame or orthogonal basis, and on subsequently reconstructing the original signal using a plethora of recovery algorithms. In this paper, we present compressed sensing data acquisition from a different perspective, wherein a set of signals is reconstructed at a sampling rate which is a multiple of the sampling rate of the ADCs used to measure the signals. We illustrate how this can facilitate the use of anti-aliasing filters with relaxed frequency specifications and, consequently, of lower order.
Abstract:
To perform super-resolution of low-resolution images, state-of-the-art methods are based on learning a pair of low-resolution and high-resolution dictionaries from multiple images. These trained dictionaries are used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper, we propose using a single common image as the dictionary, in conjunction with approximate nearest-neighbour fields (ANNF), to perform super-resolution (SR). By using a common source image, we are able to bypass the learning phase and also to reduce the dictionary from a collection of hundreds of images to a single image. By adapting recent developments in ANNF computation to suit super-resolution, we are able to perform much faster and more accurate SR than existing techniques. To establish this claim, we compare the proposed algorithm against various state-of-the-art algorithms, and show that we are able to achieve better and faster reconstruction without any training.
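A toy version of the single-image-dictionary idea: for each low-resolution patch, search a downsampled copy of one common source image and paste back the corresponding high-resolution patch. Exact brute-force matching stands in for the ANNF computation here, and the patch size, scale and averaging scheme are illustrative assumptions, not the FeatureMatch pipeline.

```python
import numpy as np

def sr_single_image_dictionary(lr, src_hr, scale=2, p=5):
    """For every p x p patch of the LR input, find its best match in a
    downsampled copy of one source image (brute force, standing in for
    ANNF) and paste the corresponding HR patch, averaging overlaps.
    Assumes src_hr dimensions are divisible by scale."""
    src_lr = src_hr[::scale, ::scale]               # crude downsample
    coords = [(i, j) for i in range(src_lr.shape[0] - p + 1)
                     for j in range(src_lr.shape[1] - p + 1)]
    bank = np.array([src_lr[i:i+p, j:j+p].ravel() for i, j in coords])
    hr = np.zeros((lr.shape[0] * scale, lr.shape[1] * scale))
    weight = np.zeros_like(hr)
    for i in range(lr.shape[0] - p + 1):
        for j in range(lr.shape[1] - p + 1):
            q = lr[i:i+p, j:j+p].ravel()
            bi, bj = coords[int(np.argmin(((bank - q) ** 2).sum(axis=1)))]
            patch = src_hr[bi*scale:(bi+p)*scale, bj*scale:(bj+p)*scale]
            hr[i*scale:(i+p)*scale, j*scale:(j+p)*scale] += patch
            weight[i*scale:(i+p)*scale, j*scale:(j+p)*scale] += 1.0
    return hr / np.maximum(weight, 1.0)             # average overlaps
```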
Abstract:
In this paper, we propose a super-resolution (SR) method for synthetic images using FeatureMatch. Existing state-of-the-art super-resolution methods are learning-based methods, where a pair of low-resolution and high-resolution dictionaries is trained, and this trained pair is used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper, we show that by using Approximate Nearest Neighbour Fields (ANNF) and a common source image, we can bypass the learning phase and use a single image as the dictionary, thus reducing the dictionary from a collection obtained from hundreds of training images to a single image. We show that by modifying the latest developments in ANNF computation to suit super-resolution, we can perform much faster and more accurate SR than existing techniques. To establish this claim, we compare our algorithm against various state-of-the-art algorithms, and show that we are able to achieve better and faster reconstruction without any training phase.
Abstract:
We present here the first statistically calibrated and verified tree-ring reconstruction of climate from continental Southeast Asia. The reconstructed variable is the March-May (MAM) Palmer Drought Severity Index (PDSI), based on ring widths from 22 trees (42 radial cores) of the rare and long-lived conifer Fokienia hodginsii (locally called Po Mu) from northern Vietnam. This is the first published tree-ring chronology from Vietnam, as well as the first for this species. Spanning 535 years, it is the longest cross-dated tree-ring series yet produced from continental Southeast Asia. Response analysis revealed that the annual growth of Fokienia at this site was mostly governed by soil moisture in the pre-monsoon season. The reconstruction passed the calibration-verification tests commonly used in dendroclimatology, and revealed two prominent periods of drought in the mid-eighteenth and late-nineteenth centuries. The former lasted nearly 30 years and was concurrent with a similar drought over northwestern Thailand inferred from teak rings, suggesting a "mega-drought" extending across Indochina in the eighteenth century. Both of our reconstructed droughts are consistent with periods of warm sea surface temperature (SST) anomalies in the tropical Pacific. Spatial correlation analyses with global SST indicated that ENSO-like anomalies might play a role in modulating droughts over the region, with El Niño (warm) phases resulting in reduced rainfall. However, significant correlation was also seen with SST over the Indian Ocean and the North Pacific, suggesting that ENSO is not the only factor affecting the climate of the area. Spectral analyses revealed significant peaks in the range of 53.9-78.8 years, as well as in the ENSO-variability range of 2.0 to 3.2 years.
Abstract:
The problem of reconstruction of a refractive-index distribution (RID) in optical refraction tomography (ORT) from optical path-length difference (OPD) data is solved using two adaptive-estimation-based extended-Kalman-filter (EKF) approaches. First, a basic single-resolution EKF (SR-EKF) is applied to a state-variable model describing the tomographic process, to estimate the RID of an optically transparent refracting object from noisy OPD data. The initialization of the biases and covariances corresponding to the state and measurement noise is discussed, and these biases and covariances are adaptively estimated. An EKF is then applied to the wavelet-transformed state-variable model to yield a wavelet-based multiresolution EKF (MR-EKF) solution approach. To numerically validate the adaptive EKF approaches, we evaluate them with benchmark studies of standard stationary cases, where comparative results with commonly used efficient deterministic approaches can be obtained. Detailed reconstruction studies for the SR-EKF and two versions of the MR-EKF (with Haar and Daubechies-4 wavelets) compare well with those obtained from a typically used variant of the (deterministic) algebraic reconstruction technique, the average-correction-per-projection method, thus establishing the capability of the EKF for ORT. To the best of our knowledge, the present work contains unique reconstruction studies encompassing the use of the EKF for ORT in single-resolution and multiresolution formulations, as well as the adaptive estimation of the EKF's noise covariances. (C) 2010 Optical Society of America
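For readers unfamiliar with the filter machinery, the sketch below shows one generic EKF predict/update step for a static (random-walk) state such as a pixelised RID, with an OPD projection as the measurement. The paper's adaptive estimation of noise biases/covariances and its wavelet multiresolution formulation are not reproduced; all names are illustrative.

```python
import numpy as np

def ekf_step(x, P, z, h, H, Q, R):
    """One EKF predict/update step for a static (random-walk) state:
    x, P : current state estimate (e.g. pixelised RID) and covariance
    z    : new measurement vector (one set of OPD data)
    h, H : measurement function and its Jacobian evaluated at x
    Q, R : process and measurement noise covariances."""
    P = P + Q                          # predict (identity dynamics)
    y = z - h(x)                       # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # updated state
    P = (np.eye(len(x)) - K @ H) @ P   # updated covariance
    return x, P
```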
Abstract:
In rapid parallel magnetic resonance imaging, the problem of image reconstruction is challenging. Here, a novel image reconstruction technique for data acquired along any general trajectory in a neural network framework, called "Composite Reconstruction And Unaliasing using Neural Networks" (CRAUNN), is proposed. CRAUNN is based on the observation that the nature of aliasing remains unchanged whether the undersampled acquisition contains only low frequencies or includes high frequencies too. Here, the transformation needed to reconstruct the alias-free image from the aliased coil images is learnt using acquisitions consisting of densely sampled low frequencies. Neural networks are used as machine learning tools to learn this transformation, in order to obtain the desired alias-free image for actual acquisitions containing sparsely sampled low as well as high frequencies. CRAUNN operates in the image domain, does not require explicit coil sensitivity estimation, and is independent of the sampling trajectory used, so it can be applied to arbitrary trajectories as well. As a pilot trial, the technique is first applied to Cartesian-trajectory-sampled data. Experiments performed using radial and spiral trajectories on real and synthetic data illustrate the performance of the method. The reconstruction errors depend on the acceleration factor as well as the sampling trajectory; higher acceleration factors can be obtained when radial trajectories are used. Comparisons against existing techniques are presented, and CRAUNN is found to perform on par with the state of the art. Acceleration factors of up to 4, 6 and 4 are achieved in the Cartesian, radial and spiral cases, respectively. (C) 2010 Elsevier Inc. All rights reserved.
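A schematic sketch of the learn-from-calibration idea only: a small regressor is fitted on the densely sampled low-frequency acquisition to map aliased coil intensities to the alias-free reference, then applied to the actual undersampled scan. The real CRAUNN architecture and training protocol are not specified here; the network choice, data shapes and function names are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_unaliasing_net(aliased_calib, target_calib):
    """aliased_calib: (n_pixels, n_coils) aliased coil intensities from
    the densely sampled low-frequency calibration scan;
    target_calib:  (n_pixels,) alias-free reference from the same data.
    Learns the pixel-wise aliased-coils -> alias-free mapping."""
    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000)
    net.fit(aliased_calib, target_calib)
    return net

def apply_unaliasing_net(net, aliased_scan):
    # apply the learnt mapping to the actual undersampled acquisition
    return net.predict(aliased_scan)
```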
Abstract:
Tutte (1979) proved that the disconnected spanning subgraphs of a graph can be reconstructed from its vertex deck. This result is used to prove that if a set of connected graphs can be reconstructed from the shuffled edge deck (SED), then the vertex reconstruction conjecture is true. It is proved that a set of connected graphs can be reconstructed from the SED when all the graphs in the set are claw-free or all are P_4-free. Such a problem is also solved for a large subclass of the class of chordal graphs; this subclass contains the maximal outerplanar graphs. Finally, two new conjectures, which imply the edge reconstruction conjecture, are presented. Conjecture 1 demands a construction of a stronger k-edge hypomorphism (to be defined later) from the edge hypomorphism. It is well known that the Nash-Williams theorem applies to a variety of structures. To prove Conjecture 2, we need to incorporate more graph-theoretic information in the Nash-Williams theorem.
Abstract:
A claw is an induced subgraph isomorphic to K_{1,3}. The claw-point is the point of degree 3 in a claw. A graph is called p-claw-free when no p-cycle has a claw-point on it. It is proved that for p ≥ 4, p-claw-free graphs containing at least one chordless p-cycle are edge-reconstructible. It is also proved that chordal graphs are edge-reconstructible. These two results together imply the edge reconstructibility of claw-free graphs. A simple proof of the vertex reconstructibility of P_4-reducible graphs is also presented. (C) 1995 John Wiley and Sons, Inc.