925 results for discrete tomography
Abstract:
The origin of the extragalactic gamma-ray background (EGRB) is still an open question, even nearly forty years after its discovery. The emission could originate either from truly diffuse processes or from unresolved point sources. Although the majority of the 271 point sources detected by EGRET (Energetic Gamma Ray Experiment Telescope) are unidentified, blazars dominate among the identified sources. Therefore, unresolved blazars may be considered the main contributor to the EGRB, and many studies have been carried out to understand their distribution, evolution and contribution to the EGRB. Considering that gamma-ray emission comes mostly from the jets of blazars and that the jet emission decreases rapidly with increasing jet-to-line-of-sight angle, it is not surprising that EGRET was not able to detect many active galactic nuclei (AGNs) with large inclination angles. Though Fermi detected only a few large-inclination-angle AGNs during the first three months of its survey, it is expected to detect many such sources in the near future. Since non-blazar AGNs are expected to have a higher density than blazars, these could also contribute significantly to the EGRB. In this paper, we discuss the contributions of unresolved discrete sources, including normal galaxies, starburst galaxies, blazars and off-axis AGNs, to the EGRB.
Abstract:
We consider the problem of transmitting correlated discrete-alphabet sources over a Gaussian multiple access channel (GMAC). A distributed bit-to-Gaussian mapping is proposed that yields jointly Gaussian codewords. Whenever feasible, the scheme guarantees lossless transmission, or lossy transmission within given distortion limits. The technique can be extended to systems with side information at the encoders and decoder.
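For flavor only, here is a heavily simplified sketch of a per-encoder bit-to-Gaussian mapping: each encoder independently maps its block of bits to a Gaussian quantile point, so the transmitted codewords are (coarsely quantized) Gaussian in marginal distribution and inherit the sources' correlation. The block length, power, noise level, and the mapping itself are illustrative assumptions; the paper's construction, which yields jointly Gaussian codewords and addresses the GMAC constraints, is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

def bits_to_gaussian(bits, power=1.0):
    """Map a block of k bits to one of 2^k Gaussian quantile points (a coarse quantization of N(0, power))."""
    k = len(bits)
    index = int("".join(str(b) for b in bits), 2)
    u = (index + 0.5) / 2 ** k                 # mid-point of the index's probability cell
    return np.sqrt(power) * norm.ppf(u)

# Two correlated binary sources: two shared bits plus one private bit each (illustrative).
common = rng.integers(0, 2, size=(1000, 2))
bits1 = np.concatenate([common, rng.integers(0, 2, size=(1000, 1))], axis=1)
bits2 = np.concatenate([common, rng.integers(0, 2, size=(1000, 1))], axis=1)

x1 = np.array([bits_to_gaussian(b) for b in bits1])
x2 = np.array([bits_to_gaussian(b) for b in bits2])

# Superposition at the Gaussian MAC receiver: y = x1 + x2 + noise (noise level assumed).
y = x1 + x2 + 0.1 * rng.standard_normal(1000)
print("codeword correlation:", np.corrcoef(x1, x2)[0, 1])
```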
Abstract:
We propose a self-regularized pseudo-time marching scheme to solve the ill-posed, nonlinear inverse problem associated with diffuse propagation of coherent light in a tissue-like object. In particular, in the context of diffuse correlation tomography (DCT), we consider the recovery of mechanical property distributions from partial and noisy boundary measurements of light intensity autocorrelation. We prove the existence of a minimizer for the Newton algorithm after establishing the existence of weak solutions for the forward equation of light amplitude autocorrelation and its Fréchet derivative and adjoint. The asymptotic stability of the solution of the ordinary differential equation obtained through the introduction of the pseudo-time is also analyzed. We show that the asymptotic solution obtained through the pseudo-time marching converges to the optimal solution, provided the Hessian of the forward equation is positive definite in a neighborhood of that solution. The superior noise tolerance and regularization-insensitive nature of the pseudo-dynamic strategy are demonstrated through numerical simulations in the context of both DCT and diffuse optical tomography. (C) 2010 Optical Society of America.
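A minimal sketch of the pseudo-time idea on a toy nonlinear least-squares problem (not the authors' DCT forward model or their exact Newton-based scheme): the update is marched in an artificial time variable, and terminating the march early acts as regularization for ill-posed problems. The forward map, step size, and iteration count below are illustrative assumptions.

```python
import numpy as np

# Toy forward model: d = f(x) with x = (amplitude, decay rate); purely illustrative.
t = np.linspace(0.0, 1.0, 50)

def forward(x):
    return x[0] * np.exp(-x[1] * t)

def jacobian(x):
    e = np.exp(-x[1] * t)
    return np.column_stack([e, -x[0] * t * e])

x_true = np.array([2.0, 3.0])
rng = np.random.default_rng(0)
data = forward(x_true) + 0.02 * rng.standard_normal(t.size)

# Pseudo-time marching: treat the normal equations as a gradient flow
#   dx/dtau = -J(x)^T (f(x) - d)
# and integrate it explicitly; stopping the march early (or when the residual
# stagnates) plays the role of regularization for ill-posed problems.
x = np.array([1.0, 1.0])           # initial guess
dtau = 0.05                        # pseudo-time step (assumed)
for k in range(2000):
    r = forward(x) - data
    x = x - dtau * jacobian(x).T @ r

print("recovered parameters:", x)
```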
Abstract:
We address the issue of noise robustness of reconstruction techniques for frequency-domain optical-coherence tomography (FDOCT). We consider three reconstruction techniques: Fourier, iterative phase recovery, and cepstral techniques. We characterize the reconstructions in terms of their statistical bias and variance and obtain approximate analytical expressions under the assumption of small noise. We also perform Monte Carlo analyses and show that the experimental results are in agreement with the theoretical predictions. It turns out that the iterative and cepstral techniques yield reconstructions with a smaller bias than the Fourier method. The three techniques, however, have identical variance profiles, and their consistency increases linearly as a function of the signal-to-noise ratio.
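In the spirit of the bias/variance characterization, the following hedged Monte Carlo sketch estimates the bias and variance of the plain Fourier (inverse-FFT) reconstruction of a synthetic FDOCT interferogram as the noise level varies; the spectral model, reflector depth, and noise levels are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024                      # number of spectral samples (assumed)
k = np.arange(N)
z0, r = 200, 0.5              # reflector depth bin and reflectivity (assumed)

# Noise-free spectral interferogram: DC term plus a cosine fringe from one reflector.
clean = 1.0 + 2.0 * r * np.cos(2.0 * np.pi * z0 * k / N)

def fourier_reconstruction(spectrum):
    """Plain Fourier-technique reconstruction: magnitude of the inverse FFT."""
    return np.abs(np.fft.ifft(spectrum))

true_peak = fourier_reconstruction(clean)[z0]

# Monte Carlo estimate of bias and variance of the peak amplitude vs. noise level.
for sigma in (0.01, 0.05, 0.1):
    peaks = []
    for _ in range(2000):
        noisy = clean + sigma * rng.standard_normal(N)
        peaks.append(fourier_reconstruction(noisy)[z0])
    peaks = np.asarray(peaks)
    bias = peaks.mean() - true_peak
    var = peaks.var()
    print(f"sigma={sigma:5.2f}  bias={bias:+.2e}  variance={var:.2e}")
```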
Abstract:
We present a signal processing approach using discrete wavelet transform (DWT) for the generation of complex synthetic aperture radar (SAR) images at an arbitrary number of dyadic scales of resolution. The method is computationally efficient and is free from significant system-imposed limitations present in traditional subaperture-based multiresolution image formation. Problems due to aliasing associated with biorthogonal decomposition of the complex signals are addressed. The lifting scheme of DWT is adapted to handle complex signal approximations and employed to further enhance the computational efficiency. Multiresolution SAR images formed by the proposed method are presented.
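A minimal numpy sketch of one lifting step of the Haar DWT applied directly to complex-valued samples: the split/predict/update structure carries over unchanged to complex data, and recursing on the approximation coefficients gives the dyadic scales. The biorthogonal filters, anti-aliasing measures, and the SAR image-formation chain of the paper are not reproduced here; the toy chirp signal is an assumption.

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the (unnormalized) Haar DWT via lifting; works for complex x."""
    even, odd = x[0::2], x[1::2]
    detail = odd - even            # predict step
    approx = even + 0.5 * detail   # update step (pairwise average)
    return approx, detail

def haar_lifting_inverse(approx, detail):
    even = approx - 0.5 * detail
    odd = detail + even
    x = np.empty(2 * approx.size, dtype=approx.dtype)
    x[0::2], x[1::2] = even, odd
    return x

# A toy complex "SAR-like" signal: chirp plus noise (illustrative only).
rng = np.random.default_rng(2)
n = np.arange(256)
signal = np.exp(1j * 0.001 * n**2) + 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))

# Two dyadic scales: apply the lifting step recursively to the approximation.
a1, d1 = haar_lifting_forward(signal)
a2, d2 = haar_lifting_forward(a1)

# Perfect reconstruction check for one level.
rec = haar_lifting_inverse(a1, d1)
print("max reconstruction error:", np.max(np.abs(rec - signal)))
```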
Abstract:
We derive expressions for convolution multiplication properties of the discrete cosine transform II (DCT II), starting from equivalent discrete Fourier transform (DFT) representations. Using these expressions, a method for implementing linear filtering through block convolution in the DCT II domain is presented. For a nonsymmetric impulse response, an additional discrete sine transform II (DST II) is required to implement the filter in the DCT II domain, whereas for a symmetric impulse response the additional transform is not required. Comparison with a recently proposed circular convolution technique in the DCT II domain shows that the proposed method is computationally more efficient.
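As a small illustration of the equivalence the derivation starts from (not of the full block-convolution filter), the sketch below computes the DCT II of a sequence through a 2N-point DFT of its symmetric extension and checks it against SciPy's DCT II; the normalization follows SciPy's unnormalized convention.

```python
import numpy as np
from scipy.fft import dct, fft

rng = np.random.default_rng(3)
x = rng.standard_normal(8)
N = x.size

# Symmetric extension y = [x_0, ..., x_{N-1}, x_{N-1}, ..., x_0] of length 2N.
y = np.concatenate([x, x[::-1]])

# A 2N-point DFT plus a half-sample phase shift recovers the (unnormalized) DCT II:
#   DCT-II(x)_k = Re( exp(-i*pi*k/(2N)) * DFT_{2N}(y)_k ),  k = 0..N-1
k = np.arange(N)
via_dft = np.real(np.exp(-1j * np.pi * k / (2 * N)) * fft(y)[:N])

print(np.allclose(via_dft, dct(x, type=2, norm=None)))   # expected: True
```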
Abstract:
Localised prostate cancer is a heterogeneous disease and a multi-modal approach is required to accurately diagnose and stage the disease. Whilst the use of magnetic resonance imaging (MRI) has become more common, small-volume and multi-focal disease are often difficult to characterise. Prostate-specific membrane antigen (PSMA) is a cell surface protein which is expressed in nearly all prostate cancer cells, and its expression is significantly higher in high-grade prostate cancer cells. In this study, we compare multi-parametric magnetic resonance imaging and 68Ga-PSMA PET with whole-mount pathology of the prostate to evaluate the applicability of multiparametric (MP) MRI and 68Ga-PSMA PET in detecting and locating tumour foci in patients with localised prostate cancer.
Abstract:
Technological development of fast multi-sectional, helical computed tomography (CT) scanners has made computed tomography perfusion (CTp) and angiography (CTA) feasible in the evaluation of acute ischemic stroke. This study focuses on new multidetector computed tomography techniques, namely whole-brain and first-pass CT perfusion plus CTA of the carotid arteries.

Whole-brain CTp data are acquired during slow infusion of contrast material to achieve a constant contrast concentration in the cerebral vasculature. From these data, quantitative maps of perfused cerebral blood volume (pCBV) are constructed. The probability of cerebral infarction as a function of normalized pCBV was determined in patients with acute ischemic stroke. Normalized pCBV, expressed as a percentage of contralateral normal brain pCBV, was determined in the infarction core and in regions just inside and outside the boundary between infarcted and noninfarcted brain. The corresponding probabilities of infarction were 0.99, 0.96, and 0.11, R² was 0.73, and the differences in perfusion between the core and the inner and outer bands were highly significant. Thus a probability-of-infarction curve can help predict the likelihood of infarction as a function of percentage normalized pCBV.

First-pass CT perfusion is based on continuous cine imaging over a selected brain area during a bolus injection of contrast. During its first passage, contrast material compartmentalizes in the intravascular space, resulting in transient tissue enhancement. Functional maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) are then constructed. We compared the effects of three different iodine concentrations (300, 350, or 400 mg/mL) on peak enhancement of normal brain tissue, artery, and vein, stratified by region-of-interest (ROI) location, in 102 patients imaged within 3 hours of stroke onset. A monotonic increase in peak opacification was evident at all ROI locations, suggesting that CTp evaluation of patients with acute stroke is best performed with the highest available concentration of contrast agent. In another study, we investigated whether lesion volumes on CBV, CBF, and MTT maps within 3 hours of stroke onset predict final infarct volume, and whether all of these parameters are needed for triage to intravenous recombinant tissue plasminogen activator (IV-rtPA). The effect of IV-rtPA on the affected brain was also investigated by measuring the salvaged tissue volume in patients receiving IV-rtPA and in controls. CBV lesion volume did not necessarily represent dead tissue. MTT lesion volume alone can serve to identify the upper size limit of the abnormally perfused brain, and patients receiving IV-rtPA salvaged more brain than did controls.

Carotid CTA was compared with carotid DSA in the grading of stenosis in patients with stroke symptoms. In CTA, the grade of stenosis was determined by means of axial source and maximum intensity projection (MIP) images as well as semiautomatic vessel analysis. CTA provides an adequate, less invasive alternative to conventional DSA, although it tends to underestimate clinically relevant grades of stenosis.
Abstract:
We propose certain discrete parameter variants of well known simulation optimization algorithms. Two of these algorithms are based on the smoothed functional (SF) technique while two others are based on the simultaneous perturbation stochastic approximation (SPSA) method. They differ from each other in the way perturbations are obtained and also the manner in which projections and parameter updates are performed. All our algorithms use two simulations and two-timescale stochastic approximation. As an application setting, we consider the important problem of admission control of packets in communication networks under dependent service times. We consider a discrete time slotted queueing model of the system and consider two different scenarios - one where the service times have a dependence on the system state and the other where they depend on the number of arrivals in a time slot. Under our settings, the simulated objective function appears ill-behaved with multiple local minima and a unique global minimum characterized by a sharp dip in the objective function in a small region of the parameter space. We compare the performance of our algorithms on these settings and observe that the two SF algorithms show the best results overall. In fact, in many cases studied, SF algorithms converge to the global minimum.
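A hedged sketch of one ingredient of such schemes, a two-simulation SPSA iteration with the parameter projected back onto a bounded integer grid after each update; the noisy objective, step sizes, and projection below are illustrative stand-ins for the admission-control setting, not the paper's algorithms.

```python
import numpy as np

rng = np.random.default_rng(4)
lo, hi = 0, 20                      # admissible integer range per parameter (assumed)

def simulate_cost(theta):
    """Stand-in for a simulation-based cost estimate: noisy objective with a unique minimum."""
    target = np.array([7, 12])
    return np.sum((theta - target) ** 2) + rng.standard_normal()

def project(theta):
    """Projection onto the discrete parameter set: round, then clip to the box."""
    return np.clip(np.rint(theta), lo, hi)

theta = np.array([15.0, 3.0])       # initial (feasible) parameter
c, a0 = 1.0, 0.5                    # perturbation size and step-size constant (assumed)
for n in range(1, 501):
    a_n = a0 / n                    # decaying step size
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1 perturbation
    y_plus = simulate_cost(project(theta + c * delta))  # first simulation
    y_minus = simulate_cost(project(theta - c * delta)) # second simulation
    grad_est = (y_plus - y_minus) / (2.0 * c * delta)   # simultaneous-perturbation gradient estimate
    theta = project(theta - a_n * grad_est)

print("estimated optimum:", theta)
```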
Abstract:
A finite element method (FEM) based forward solver is developed for the forward problem of 2D electrical impedance tomography (EIT). The method of weighted residuals with a Galerkin approach is used for the FEM formulation of the EIT forward problem. The algorithm is written in MATLAB 7.0, and the forward problem is studied with a practical biological phantom. The EIT governing equation is numerically solved to calculate the surface potentials at the phantom boundary for a uniform conductivity. An EIT phantom is developed with an array of 16 electrodes placed on the inner surface of the phantom tank filled with KCl solution. A sinusoidal current is injected through the current electrodes and the differential potentials across the voltage electrodes are measured. The measured data are compared with the differential potentials calculated for the known current and solution conductivity. By comparing the measured voltages with the calculated data, we attempt to identify the sources of error so as to improve data quality for better image reconstruction.
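A compact Galerkin FEM sketch of this kind of forward solve for div(sigma grad u) = 0 with point-electrode current injection, written in Python rather than MATLAB. For brevity it uses a uniform triangulation of a unit square (rather than the circular phantom tank), a homogeneous conductivity, two point current electrodes, and a grounded reference node; all of these are simplifying assumptions.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

# Uniform triangulation of the unit square (stand-in for the circular tank).
n = 17                                          # nodes per side
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
nodes = np.column_stack([xs.ravel(), ys.ravel()])
tris = []
for j in range(n - 1):
    for i in range(n - 1):
        p = j * n + i
        tris.append([p, p + 1, p + n])            # lower triangle
        tris.append([p + 1, p + n + 1, p + n])    # upper triangle
tris = np.asarray(tris)

sigma = 1.0                                       # uniform conductivity (assumed)
K = lil_matrix((n * n, n * n))
for tri in tris:
    x, y = nodes[tri, 0], nodes[tri, 1]
    # Gradient coefficients of the linear (P1) basis functions on this triangle.
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
    area = 0.5 * abs(b[0] * c[1] - b[1] * c[0])
    Ke = sigma * (np.outer(b, b) + np.outer(c, c)) / (4.0 * area)
    for a_loc, a_glob in enumerate(tri):
        for b_loc, b_glob in enumerate(tri):
            K[a_glob, b_glob] += Ke[a_loc, b_loc]

# Point current injection: +I at one boundary node, -I at another (Neumann data).
I = 1e-3                                          # 1 mA drive current (assumed)
f = np.zeros(n * n)
f[0] = +I                                         # corner (0, 0)
f[n - 1] = -I                                     # corner (1, 0)

# Ground one reference node to remove the null space of the pure-Neumann problem.
ref = n * n // 2
K[ref, :] = 0.0
K[ref, ref] = 1.0
f[ref] = 0.0

u = spsolve(csr_matrix(K), f)
print("boundary potentials along the bottom edge (V):", u[:n])
```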
Abstract:
The problem of reconstruction of a refractive-index distribution (RID) in optical refraction tomography (ORT) with optical path-length difference (OPD) data is solved using two adaptive-estimation-based extended-Kalman-filter (EKF) approaches. First, a basic single-resolution EKF (SR-EKF) is applied to a state variable model describing the tomographic process, to estimate the RID of an optically transparent refracting object from noisy OPD data. The initialization of the biases and covariances corresponding to the state and measurement noise is discussed. The state and measurement noise biases and covariances are adaptively estimated. An EKF is then applied to the wavelet-transformed state variable model to yield a wavelet-based multiresolution EKF (MR-EKF) solution approach. To numerically validate the adaptive EKF approaches, we evaluate them with benchmark studies of standard stationary cases, where comparative results with commonly used efficient deterministic approaches can be obtained. Detailed reconstruction studies for the SR-EKF and two versions of the MR-EKF (with Haar and Daubechies-4 wavelets) compare well with those obtained from a typically used variant of the (deterministic) algebraic reconstruction technique, the average correction per projection method, thus establishing the capability of the EKF for ORT. To the best of our knowledge, the present work contains unique reconstruction studies encompassing the use of EKF for ORT in single-resolution and multiresolution formulations, and also in the use of adaptive estimation of the EKF's noise covariances. (C) 2010 Optical Society of America
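A minimal sketch of a single-resolution EKF iteration for a static (random-walk) parameter with a nonlinear measurement model, including one common innovation-based adaptation of the measurement-noise covariance. The toy forward map, noise levels, and the specific adaptation rule are assumptions; the sketch does not reproduce the paper's ORT forward model, its bias estimation, or the wavelet-based multiresolution variant.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy nonlinear measurement model z = h(x) + v for a 2-parameter "state" x (assumed).
def h(x):
    return np.array([x[0] + x[1], x[0] + 0.5 * np.tanh(x[1])])

def H_jac(x):
    return np.array([[1.0, 1.0],
                     [1.0, 0.5 / np.cosh(x[1]) ** 2]])

x_true = np.array([1.2, -0.8])
R_true = 0.05 ** 2 * np.eye(2)        # actual measurement-noise covariance (unknown to the filter)

x = np.zeros(2)                        # state estimate; the state is modeled as a random walk
P = np.eye(2)                          # state covariance
Q = 1e-6 * np.eye(2)                   # random-walk process noise (assumed)
R = 0.5 ** 2 * np.eye(2)               # deliberately wrong initial guess of R
innovations = []

for k in range(200):
    z = h(x_true) + rng.multivariate_normal(np.zeros(2), R_true)

    P = P + Q                          # predict step of the random-walk model

    H = H_jac(x)
    innov = z - h(x)
    HPH = H @ P @ H.T                  # uses the prior covariance
    S = HPH + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ innov
    P = (np.eye(2) - K @ H) @ P

    # Innovation-based adaptation of R over a sliding window (one common rule,
    # not necessarily the paper's); crudely floored to stay positive.
    innovations.append(innov)
    if len(innovations) >= 30:
        C = np.cov(np.array(innovations[-30:]).T)
        R = np.maximum(C - HPH, 1e-8 * np.eye(2))

print("estimated parameters:", x)
print("adapted R diagonal:", np.diag(R))
```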
Abstract:
We describe a noniterative method for recovering optical absorption coefficient distribution from the absorbed energy map reconstructed using simulated and noisy boundary pressure measurements. The source reconstruction problem is first solved for the absorbed energy map corresponding to single- and multiple-source illuminations from the side of the imaging plane. It is shown that the absorbed energy map and the absorption coefficient distribution, recovered from the single-source illumination with a large variation in photon flux distribution, have signal-to-noise ratios comparable to those of the reconstructed parameters from a more uniform photon density distribution corresponding to multiple-source illuminations. The absorbed energy map is input as absorption coefficient times photon flux in the time-independent diffusion equation (DE) governing photon transport to recover the photon flux in a single step. The recovered photon flux is used to compute the optical absorption coefficient distribution from the absorbed energy map. In the absence of experimental data, we obtain the boundary measurements through Monte Carlo simulations, and we attempt to address the possible limitations of the DE model in the overall reconstruction procedure.
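A 1-D finite-difference sketch of the single-step idea: since the absorbed energy map A equals mu_a times Phi, the diffusion equation -D Phi'' + mu_a Phi = S can be rewritten as -D Phi'' = S - A, solved once for Phi with A known, after which mu_a = A / Phi. The constant diffusion coefficient, source profile, and Dirichlet boundaries are illustrative assumptions; the paper works with reconstructed absorbed energy maps and Monte Carlo simulated boundary data rather than this toy setup.

```python
import numpy as np

# 1-D grid on [0, 1] (illustrative units).
N, L = 201, 1.0
x = np.linspace(0.0, L, N)
hgrid = x[1] - x[0]

D = 0.03                                                     # diffusion coefficient, assumed constant
mu_a_true = 0.1 + 0.2 * np.exp(-((x - 0.5) ** 2) / 0.01)     # absorption "inclusion"
S = np.exp(-((x - 0.2) ** 2) / 0.005)                        # interior source term (assumed)

def solve_diffusion(reaction, rhs):
    """Solve -D phi'' + reaction*phi = rhs with phi = 0 at both ends (interior nodes)."""
    n_int = N - 2
    main = 2.0 * D / hgrid**2 + reaction[1:-1]
    off = -D / hgrid**2 * np.ones(n_int - 1)
    A_mat = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    phi = np.zeros(N)
    phi[1:-1] = np.linalg.solve(A_mat, rhs[1:-1])
    return phi

# "Measured" absorbed energy map generated from the true optical properties.
phi_true = solve_diffusion(mu_a_true, S)
absorbed = mu_a_true * phi_true                              # A = mu_a * Phi

# Noniterative recovery: with A known, solve -D phi'' = S - A once, then divide.
phi_rec = solve_diffusion(np.zeros(N), S - absorbed)
mask = phi_rec > 1e-6 * phi_rec.max()                        # avoid dividing by ~0 near the boundaries
mu_a_rec = np.zeros(N)
mu_a_rec[mask] = absorbed[mask] / phi_rec[mask]

print("max recovery error in mu_a:", np.max(np.abs(mu_a_rec[mask] - mu_a_true[mask])))
```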
Abstract:
The aim of this thesis was to study the seismic tomography structure of the Earth's crust together with earthquake distribution and mechanisms beneath the central Fennoscandian Shield, mainly in southern and central Finland. The earthquake foci and some fault plane solutions are correlated with 3-D images of the velocity tomography. The results are discussed in relation to the stress field of the Shield and to other geophysical (e.g. geomagnetic, gravimetric, tectonic, and anisotropy) studies of the Shield. The earthquake data of the Fennoscandian Shield have been extracted from the Nordic earthquake parameter database, which was founded at the time of inception of the earthquake catalogue for northern Europe. Eight earlier earthquake source mechanisms are included in a pilot study on creating a novel technique for calculating an earthquake fault plane solution. Altogether, eleven source mechanisms of shallow, weak earthquakes are related to the 3-D tomography model to trace stresses of the crust in southern and central Finland. The earthquakes in the eastern part of the Fennoscandian Shield represent low-activity, intraplate seismicity. Earthquake mechanisms with NW-SE oriented horizontal compression confirm that the dominant stress field originates from the ridge-push force in the North Atlantic Ocean. Earthquakes accumulate in coastal areas, in intersections of tectonic lineaments, and in main fault zones, or are bordered by fault lines. The majority of Fennoscandian earthquakes concentrate in the south-western Shield in southern Norway and Sweden. From there, epicentres spread via the ridge of the Shield along the west coast of the Gulf of Bothnia, northwards along the Tornio River - Finnmark fault system to the Barents Sea, and branch out north-eastwards via the Kuusamo region to the White Sea and Kola Peninsula faults. The local seismic tomographic method was applied to find the terrane distribution within the central parts of the Shield, the Svecofennian Orogen. From 300 local explosions, a total of 19765 crustal Pg- and Sg-wave arrival times were inverted to create independent 3-D Vp and Vs tomographic models, from which the Vp/Vs ratio was calculated. The 3-D structure of the crust is presented as a P-wave and, for the first time, as an S-wave velocity model, and also as a Vp/Vs-ratio model of the SVEKALAPKO area, which covers 700 x 800 km² in southern and central Finland. Also, some P-wave Moho-reflection data were interpolated to image the relief of the crust-mantle boundary (i.e. the Moho). In the tomography model, the seismic velocities vary smoothly. The lateral variations are larger for Vp (dVp = 0.7 km/s) than for Vs (dVs = 0.4 km/s). The Vp/Vs ratio varies spatially more distinctly than the P- and S-wave velocities, usually from 1.70 to 1.74 in the upper crust and from 1.72 to 1.78 in the lower crust. Schist belts and their continuations at depth are associated with lower velocities and lower Vp/Vs ratios than the granitoid areas. The tomography modelling suggests that the Svecofennian Orogen was accreted from crustal blocks ranging in size from 100 x 100 km² to 200 x 200 km² in cross-sectional area. The intervening sedimentary belts have ca. 0.2 km/s lower P- and S-wave velocities and ca. 0.04 lower Vp/Vs ratios. Thus, the tomographic model supports the concept that the thick Svecofennian crust was accreted from several crustal terranes, some hidden, and that the crust was later modified by intra- and underplating.
In conclusion, as a novel approach, the earthquake focal mechanisms and focal depth distribution are discussed in relation to the 3-D tomography model. The schist belts and the transformation zones between the high- and low-velocity anomaly blocks are characterized by deeper earthquakes than the granitoid areas, where shallow events dominate. Although only a few focal mechanisms were solved for southern Finland, there is a trend towards strike-slip and oblique strike-slip movements inside the schist areas. Normal dip-slip earthquakes are typical of the seismically active Kuusamo district at the NE edge of the SVEKALAPKO area, where the Archean crust is ca. 15-20 km thinner than the Proterozoic Svecofennian crust. Two near-vertical dip-slip earthquakes occurred in the NE-SW junction between the Central Finland Granitoid Complex and the Vyborg rapakivi batholith, where a deep-set intrusion with a high Vp/Vs ratio splits the southern Finland schist belt into two parts in the tomography model.
Abstract:
Instability in conventional haptic rendering destroys the perception of rigid objects in virtual environments. Inherent limitations in the conventional haptic loop restrict the maximum stiffness that can be rendered. In this paper we present a method to render virtual walls that are much stiffer than those achieved by conventional techniques. By removing the conventional digital haptic loop and replacing it with a part-continuous and part-discrete-time hybrid haptic loop, we were able to render stiffer walls. The control loop is implemented as a combinational logic circuit on a field-programmable gate array (FPGA). We compared the performance of the conventional haptic loop and our hybrid haptic loop on the same haptic device, and we present a mathematical analysis showing the stability limit of our device. Our hybrid method removes the computationally intensive haptic loop from the CPU; this can free a significant amount of resources for other purposes such as graphical rendering and physics modeling. It is our hope that, in the future, similar designs will lead to a haptics processing unit (HPU).