14 results for deconvolution

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

20.00%

Publisher:

Abstract:

We derive multiscale statistics for deconvolution in order to detect qualitative features of the unknown density. An important example covered within this framework is testing for local monotonicity on all scales simultaneously. We investigate the moderately ill-posed setting, where the Fourier transform of the error density in the deconvolution model decays polynomially. For multiscale testing, we consider a calibration motivated by the modulus of continuity of Brownian motion. We investigate the performance of our results from both the theoretical and the simulation-based point of view. A major consequence of our work is that detecting qualitative features of a density in a deconvolution problem is feasible, even though the minimax rates for pointwise estimation are very slow.
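
For reference, the standard density deconvolution model behind results of this kind can be sketched as follows (the notation is ours, not quoted from the paper; the polynomial-decay condition is the usual definition of the moderately ill-posed regime):

    Y_j = X_j + \epsilon_j, \quad j = 1, \dots, n, \qquad g = f * f_\epsilon, \qquad |\mathcal{F} f_\epsilon(t)| \asymp (1 + |t|)^{-\beta}, \ \beta > 0,

where f is the density of interest, f_\epsilon the known error density, g the density of the observations Y_j, and \mathcal{F} the Fourier transform.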

Relevance:

20.00%

Publisher:

Abstract:

In this paper we study the problem of blind deconvolution. Our analysis is based on the algorithm of Chan and Wong [2], which popularized the use of sparse gradient priors via total variation. We use this algorithm because many methods in the literature are essentially adaptations of this framework. The algorithm is an iterative alternating energy minimization in which, at each step, either the sharp image or the blur function is reconstructed. Recent work of Levin et al. [14] showed that any algorithm that tries to minimize that same energy would fail, as the desired solution has a higher energy than the no-blur solution, where the sharp image is the blurry input and the blur is a Dirac delta. Experimentally, however, one can observe that Chan and Wong's algorithm converges to the desired solution even when initialized with the no-blur one. We provide both analysis and experiments to resolve this apparent paradox, and we find that both claims are right. The key to understanding how this is possible lies in the details of Chan and Wong's implementation and in how seemingly harmless choices have dramatic effects. Our analysis reveals that the delayed scaling (normalization) in the iterative step of the blur kernel is fundamental to the convergence of the algorithm. The resulting procedure eludes the no-blur solution, despite it being a global minimum of the original energy. We introduce an adaptation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the state of the art.
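
To make the role of the delayed normalization concrete, the following is a minimal 1D toy of alternating gradient steps on a TV-regularized energy, in which the kernel is projected onto the simplex (non-negativity and unit sum) only after its gradient update. This is an illustrative sketch under simplifying assumptions (circular convolution, plain gradient steps, constant factors absorbed into the step sizes), not the authors' implementation.

    import numpy as np

    def conv(a, b):
        # Circular convolution (keeps the 1D toy model simple).
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def corr(a, b):
        # Adjoint of conv(a, .): circular correlation.
        return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

    def tv_grad(u, eps=1e-3):
        # Gradient of the smoothed total variation sum_i sqrt((u_{i+1}-u_i)^2 + eps).
        d = np.roll(u, -1) - u
        w = d / np.sqrt(d ** 2 + eps)
        return np.roll(w, 1) - w

    def alternating_blind_deconv(f, iters=500, lam=2e-3, step_u=0.5, step_k=1e-3):
        """Toy alternating minimization of ||k*u - f||^2 + lam*TV(u).

        The kernel is made non-negative and rescaled to unit sum only AFTER
        its gradient step -- the 'delayed scaling' discussed above.
        """
        u = f.copy()                       # no-blur initialization: sharp image = blurry input
        k = np.zeros(f.size); k[0] = 1.0   # no-blur initialization: kernel = Dirac delta
        for _ in range(iters):
            r = conv(k, u) - f             # residual of the data term
            u = u - step_u * (corr(k, r) + lam * tv_grad(u))   # image update
            k = k - step_k * corr(u, r)    # kernel update (not yet normalized)
            k = np.clip(k, 0.0, None)      # delayed projection: non-negativity ...
            k /= max(k.sum(), 1e-12)       # ... and unit-sum normalization
        return u, k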

Relevance:

20.00%

Publisher:

Abstract:

In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem with only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, it is non-convex, so minimization schemes that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that solve a convex problem at each iteration: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
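
One way to write a two-term energy of the kind described is (notation ours; \varepsilon is the lower bound that keeps the logarithm bounded from below, and the simplex constraint on k is the usual kernel normalization, stated here as an assumption rather than quoted from the paper):

    E(u, k) = \|k * u - f\|_2^2 + \lambda \sum_i \log\!\bigl(\varepsilon + |\nabla u_i|\bigr), \qquad k \ge 0, \quad \textstyle\sum_i k_i = 1.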

Relevance:

20.00%

Publisher:

Abstract:

Blind deconvolution consists in estimating a sharp image and a blur kernel from an observed blurry image. Because the blur model admits several solutions, it is necessary to devise an image prior that favors the true blur kernel and sharp image. Many successful image priors enforce the sparsity of the sharp image gradients. Ideally, the L0 “norm” is the best choice for promoting sparsity, but because it is computationally intractable, some methods have used a logarithmic approximation. In this work we also study a logarithmic image prior. We show empirically how well the prior suits the blind deconvolution problem. Our analysis confirms experimentally the hypothesis that a prior need not model natural image statistics to correctly estimate the blur kernel. Furthermore, we show that a simple Maximum a Posteriori formulation is enough to achieve state-of-the-art results. To minimize this formulation we devise two iterative minimization algorithms that cope with the non-convexity of the logarithmic prior: one obtained via the primal-dual approach and one via majorization-minimization.
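
As a generic illustration of how a majorization-minimization route can handle the non-convex logarithm (a standard construction, not necessarily the authors' exact derivation): since the logarithm is concave, its tangent at the current iterate is an upper bound, so each step reduces to a reweighted convex problem,

    \log(\varepsilon + s) \le \log(\varepsilon + s_t) + \frac{s - s_t}{\varepsilon + s_t} \quad (s, s_t \ge 0),
    u_{t+1} \in \arg\min_u \ \|k * u - f\|_2^2 + \lambda \sum_i w_{i,t} \, |\nabla u_i|, \qquad w_{i,t} = \frac{1}{\varepsilon + |\nabla u_{i,t}|},

so that gradients that are already large at iterate t are penalized less at iterate t+1.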

Relevance:

20.00%

Publisher:

Abstract:

In this paper we propose a solution to blind deconvolution of a scene with two layers (foreground/background). We show that reconstructing the support of these two layers from a single image of a conventional camera is not possible. As a solution, we propose to use a light field camera. We demonstrate that a single light field image captured with a Lytro camera can be successfully deblurred. More specifically, we consider the case of space-varying motion blur, where the blur magnitude depends on depth changes in the scene. Our method employs a layered model that handles occlusions and partial transparencies due to both motion blur and the out-of-focus blur of the plenoptic camera. We reconstruct each layer's support, the corresponding sharp textures, and the motion blurs via an optimization scheme. The performance of our algorithm is demonstrated on synthetic as well as real light field images.

Relevance:

20.00%

Publisher:

Abstract:

Blind deconvolution is the problem of recovering a sharp image and a blur kernel from a noisy blurry image. Recently, there has been a significant effort to understand the basic mechanisms behind solving blind deconvolution. While this effort resulted in the deployment of effective algorithms, the theoretical findings generated contrasting views on why these approaches work. On the one hand, one can observe experimentally that alternating energy minimization algorithms converge to the desired solution. On the other hand, it has been shown that such alternating minimization algorithms should fail to converge and that one should instead use a so-called Variational Bayes approach. To clarify this conundrum, recent work showed that a good image and blur prior is instead what makes a blind deconvolution algorithm work. Unfortunately, this analysis did not apply to algorithms based on total variation regularization. In this manuscript, we provide both analysis and experiments to get a clearer picture of blind deconvolution. Our analysis reveals the very reason why an algorithm based on total variation works. We also introduce an implementation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the top-performing algorithms.

Relevance:

10.00%

Publisher:

Abstract:

Today, electronic portal imaging devices (EPIDs) are used primarily to verify patient positioning. However, they also have potential as 2D dosimeters and could be used as such for transit dosimetry or dose reconstruction. It has been proven that such devices, especially liquid-filled ionization chambers, have a stable dose-response relationship that can be described in terms of the physical properties of the EPID and the pulsed linac radiation. For absolute dosimetry, however, an accurate method of calibration to an absolute dose is needed. In this work, we concentrate on calibration against dose in a homogeneous water phantom. Using a Monte Carlo model of the detector, we calculated dose spread kernels in units of absolute dose per incident energy fluence and compared them to calculated dose spread kernels in water at different depths. The energy of the incident pencil beams varied between 0.5 and 18 MeV. At the depth of dose maximum in water for a 6 MV beam (1.5 cm) and for an 18 MV beam (3.0 cm), we observed large absolute differences between water and detector dose above an incident energy of 4 MeV, but only small relative differences in the most frequent energy range of the beam energy spectra. It is shown that for a 6 MV beam the absolute reference dose measured at 1.5 cm water depth differs from the absolute detector dose by 3.8%. At a depth of 1.2 cm in water, however, the relative dose differences are almost constant between 2 and 6 MeV. The effects of changes in the energy spectrum of the beam on the dose responses in water and in the detector are also investigated. We show that differences larger than 2% can occur for different beam qualities of the incident photon beam behind water slabs of different thicknesses. It is therefore concluded that for high-precision dosimetry such effects have to be taken into account. Nevertheless, the precise information about the dose response of the detector provided by this Monte Carlo study forms the basis for extracting the basic radiometric quantities photon fluence and photon energy fluence directly from the detector's signal using a deconvolution algorithm. The results are therefore promising for future application in absolute transit dosimetry and absolute dose reconstruction.
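
The deconvolution step alluded to at the end rests on the superposition model relating the detector signal to the incident energy fluence through the Monte Carlo dose spread kernels. Schematically, and under a simplifying shift-invariance assumption of ours rather than a formula taken from the paper:

    S(x, y) = (\Psi_E * K)(x, y) \quad\Longrightarrow\quad \Psi_E = \mathcal{F}^{-1}\!\left[ \frac{\mathcal{F}\{S\}}{\mathcal{F}\{K\}} \right],

where S is the measured portal dose image, K the dose spread kernel per unit incident energy fluence, and \Psi_E the incident energy fluence; in practice the spectral division would be regularized to limit noise amplification.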

Relevance:

10.00%

Publisher:

Abstract:

In diacetylmorphine prescription programs for heavily dependent addicts, diacetylmorphine is usually administered intravenously, but this may not be possible due to venosclerosis or when heroin abuse occurred via non-intravenous routes. Since up to 25% of patients administer diacetylmorphine orally, we characterised morphine absorption after single oral doses of immediate- and extended-release diacetylmorphine in 8 opioid addicts. Plasma concentrations were determined by liquid chromatography-mass spectrometry. Non-compartmental methods and deconvolution were applied for data analysis. Mean (±S.D.) immediate- and extended-release doses were 719±297 and 956±404 mg, with high absolute morphine bioavailabilities of 56-61%, respectively. Immediate-release diacetylmorphine caused rapid morphine absorption, peaking at 10-15 min. Morphine absorption was considerably slower and more sustained for extended-release diacetylmorphine, with only approximately 30% of the maximal immediate-release absorption being reached after 10 min and maintained for 3-4 h, with no relevant food interaction. The relative extended- to immediate-release bioavailability was calculated to be 86% by non-compartmental analysis and 93% by deconvolution analysis. Thus, immediate- and extended-release diacetylmorphine produce the intended morphine exposures. Both are suitable for substitution treatments. Similar doses can be applied if used in combination or sequentially.
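
For reference, the dose-normalized relative bioavailability obtained from non-compartmental analysis is conventionally computed as (a standard pharmacokinetic formula, not quoted from the paper):

    F_{rel} = \frac{AUC_{ext}/D_{ext}}{AUC_{imm}/D_{imm}} \times 100\%,

where AUC is the area under the morphine plasma concentration-time curve and D the administered diacetylmorphine dose for the extended- (ext) and immediate- (imm) release formulations.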

Relevance:

10.00%

Publisher:

Abstract:

Fenofibrate, widely used for the treatment of dyslipidemia, activates the nuclear receptor peroxisome proliferator-activated receptor alpha. However, liver toxicity, including liver cancer, occurs in rodents treated with fibrate drugs. Marked species differences occur in the response to fibrate drugs, especially between rodents and humans, the latter of which are resistant to fibrate-induced cancer. Fenofibrate metabolism, which also shows species differences, has not been fully determined in humans and surrogate primates. In the present study, the metabolism of fenofibrate was investigated in cynomolgus monkeys by ultra-performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-QTOFMS)-based metabolomics. Urine samples were collected before and after oral doses of fenofibrate. The samples were analyzed in both positive-ion and negative-ion modes by UPLC-QTOFMS, and after data deconvolution, the resulting data matrices were subjected to multivariate data analysis. Pattern recognition was performed on the retention time, mass/charge ratio, and other metabolite-related variables. Synthesized or purchased authentic compounds were used for metabolite identification and structure elucidation by liquid chromatography-tandem mass spectrometry. Several metabolites were identified, including fenofibric acid, reduced fenofibric acid, fenofibric acid ester glucuronide, reduced fenofibric acid ester glucuronide, and compound X. Two additional metabolites (compound B and compound AR), not previously reported in other species, were characterized in cynomolgus monkeys. More importantly, previously unknown metabolites, the fenofibric acid taurine conjugate and the reduced fenofibric acid taurine conjugate, were identified, revealing a previously unrecognized conjugation pathway for fenofibrate.
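
To illustrate the kind of multivariate pattern recognition described, the sketch below projects a samples-by-features intensity matrix onto two principal components; every name and number is a placeholder, and PCA stands in generically for whatever projection method the authors actually used. Features loading strongly on the component that separates pre- from post-dose samples would be the candidate drug metabolites.

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical feature matrix: rows = urine samples (pre- and post-dose),
    # columns = deconvoluted features, each indexed by (retention time, m/z).
    X = np.random.rand(12, 500)            # placeholder intensities, 12 samples x 500 features
    labels = ["pre"] * 6 + ["post"] * 6    # placeholder dosing status

    # Unsupervised pattern recognition: project log-transformed intensities
    # onto the first two principal components and inspect the sample scores.
    scores = PCA(n_components=2).fit_transform(np.log1p(X))
    for lab, (pc1, pc2) in zip(labels, scores):
        print(f"{lab:4s}  PC1={pc1:+.2f}  PC2={pc2:+.2f}")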

Relevance:

10.00%

Publisher:

Abstract:

The goal of acute stroke treatment with intravenous thrombolysis or endovascular recanalization techniques is to rescue the penumbral tissue. Therefore, knowing the factors that influence the loss of penumbral tissue is of major interest. In this study we aimed to identify factors that determine the evolution of the penumbra in patients with proximal (M1 or M2) middle cerebral artery occlusion. Among these factors, collaterals as seen on angiography were of special interest. Forty-four patients were included in this analysis. They had all received endovascular therapy and at least minimal reperfusion was achieved. The penumbra was assessed with perfusion- and diffusion-weighted imaging. Perfusion-weighted imaging volumes were defined on circular singular value decomposition deconvolution maps (Tmax > 6 s), and the results were compared with volumes obtained with non-deconvolved maps (time to peak > 4 s). Loss of penumbral volume was defined as the difference of post- minus pretreatment diffusion-weighted imaging volumes and calculated in per cent of the pretreatment penumbral volume. Correlations between baseline characteristics, reperfusion, collaterals, time to reperfusion, and penumbral volume loss were assessed using analysis of covariance. Collaterals (P = 0.021), reperfusion (P = 0.003) and their interaction (P = 0.031) independently influenced penumbral tissue loss, but not time from magnetic resonance imaging (P = 0.254) or from symptom onset (P = 0.360) to reperfusion. Good collaterals markedly slowed down and reduced the penumbra loss: in patients with Thrombolysis in Cerebral Infarction 2b-3 reperfusion and without any haemorrhage, 27% of the penumbra was lost (at 8.9 ml/h) with grade 0 collaterals, whereas only 11% (at 3.4 ml/h) was lost with grade 1 collaterals. With grade 2 collaterals the penumbral volume change was -2% (at -1.5 ml/h), indicating an overall diffusion-weighted imaging lesion reversal. We conclude that collaterals and reperfusion are the main factors determining the loss of penumbral tissue in patients with middle cerebral artery occlusions. Collaterals markedly reduce and slow down penumbra loss. In patients with good collaterals, time to successful reperfusion accounts for only a minor fraction of penumbra loss. These results support the hypothesis that good collaterals extend the time window for acute stroke treatment.
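
The outcome measure can be summarized as follows, rendering the definition given above and assuming, as is customary though not spelled out in the abstract, that the pretreatment penumbra is the perfusion-diffusion mismatch volume:

    \text{penumbral loss (\%)} = 100 \times \frac{V^{post}_{DWI} - V^{pre}_{DWI}}{V^{pre}_{PWI\,(T_{max} > 6\,\mathrm{s})} - V^{pre}_{DWI}},

where V_{DWI} are the diffusion-weighted lesion volumes before and after treatment and V_{PWI} is the pretreatment perfusion lesion volume from the deconvolved Tmax maps.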

Relevance:

10.00%

Publisher:

Abstract:

We present an image quality assessment and enhancement method for high-resolution Fourier-domain OCT imaging, such as in sub-threshold retina therapy. A maximum-likelihood deconvolution algorithm as well as a histogram-based quality assessment method are evaluated.
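
As an illustration of the maximum-likelihood route, the Richardson-Lucy iteration is the classical ML deconvolution scheme for a known point spread function under Poisson-like noise; the abstract does not specify the exact algorithm, so the sketch below is a generic stand-in rather than the authors' implementation.

    import numpy as np

    def richardson_lucy(blurred, psf, iters=50, eps=1e-12):
        """Classical Richardson-Lucy (maximum-likelihood) deconvolution via FFT.

        Assumes a non-negative 2D image, a point spread function centered at the
        array origin, and circular boundary conditions.
        """
        psf = psf / psf.sum()                      # normalized point spread function
        otf = np.fft.fft2(psf, s=blurred.shape)    # optical transfer function
        estimate = np.full_like(blurred, blurred.mean(), dtype=float)
        for _ in range(iters):
            forward = np.real(np.fft.ifft2(np.fft.fft2(estimate) * otf))   # current blur prediction
            ratio = blurred / np.maximum(forward, eps)                     # data / prediction
            correction = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
            estimate *= correction                 # multiplicative ML update
        return estimate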

Relevance:

10.00%

Publisher:

Abstract:

The stability of terrestrial carbon reservoirs is thought to be closely linked to variations in climate [1], but the magnitude of carbon-climate feedbacks has proved difficult to constrain for both modern [2-4] and millennial [5-13] timescales. Reconstructions of atmospheric CO2 concentrations for the past thousand years have shown fluctuations on multidecadal to centennial timescales [5-7], but the causes of these fluctuations are unclear. Here we report high-resolution carbon isotope measurements of CO2 trapped within the ice of the West Antarctic Ice Sheet Divide ice core for the past 1,000 years. We use a deconvolution approach [14] to show that changes in terrestrial organic carbon stores best explain the observed multidecadal variations in the δ13C of CO2 and in CO2 concentrations from 755 to 1850 CE. If significant long-term carbon emissions came from pre-industrial anthropogenic land-use changes over this interval, the emissions must have been offset by a natural terrestrial sink for 13C-depleted carbon, such as peatlands. We find that on multidecadal timescales, carbon cycle changes seem to vary with reconstructed regional climate changes. We conclude that climate variability could be an important control on fluctuations in land carbon storage on these timescales.

Relevance:

10.00%

Publisher:

Abstract:

During time-resolved optical stimulation experiments (TR-OSL), one uses short light pulses to separate the stimulation and emission of luminescence in time. Experimental TR-OSL results show that the luminescence lifetime in quartz of sedimentary origin is independent of annealing temperature below 500 °C, but decreases monotonically thereafter. These results have previously been interpreted empirically on the basis of the existence of two separate luminescence centers LH and LL in quartz, each with its own distinct luminescence lifetime. Additional experimental evidence also supports the presence of a non-luminescent hole reservoir R, which plays a critical role in the predose effect in this material. This paper extends a recently published analytical model for thermal quenching in quartz to include the two luminescence centers LH and LL, as well as the hole reservoir R. The new extended model involves localized electronic transitions between energy states within the two luminescence centers, and is described by a system of differential equations based on the Mott-Seitz mechanism of thermal quenching. It is shown that, by using simplifying physical assumptions, one can obtain analytical solutions for the intensity of the light during a TR-OSL experiment carried out with previously annealed samples. These analytical expressions are found to be in good agreement with the numerical solutions of the equations. The results from the model are shown to be in quantitative agreement with published experimental data for commercially available quartz samples. Specifically, the model describes the variation of the luminescence lifetimes with (a) annealing temperatures between room temperature and 900 °C, and (b) stimulation temperatures between 20 and 200 °C. This paper also reports new radioluminescence (RL) measurements carried out using the same commercially available quartz samples. Gaussian deconvolution of the RL emission spectra was carried out using a total of seven emission bands between 1.5 and 4.5 eV, and the behavior of these bands was examined as a function of the annealing temperature. An emission band at ∼3.44 eV (360 nm) was found to be strongly enhanced when the annealing temperature was increased to 500 °C, and this band underwent a significant reduction in intensity with further increase in temperature. Furthermore, a new emission band at ∼3.73 eV (330 nm) became apparent for annealing temperatures in the range 600-700 °C. These new experimental results are discussed within the context of the model presented in this paper.
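
The Gaussian deconvolution of the emission spectra mentioned above amounts to fitting the measured RL intensity on the photon-energy axis with a sum of Gaussian bands. A minimal sketch with scipy is given below; the band count, starting values, and synthetic data are placeholders for illustration, not values taken from the paper (the two example centers simply echo the 3.44 eV and 3.73 eV bands discussed above).

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_sum(E, *params):
        """Sum of Gaussian emission bands; params = (A1, E1, s1, A2, E2, s2, ...)."""
        y = np.zeros_like(E)
        for A, E0, s in zip(params[0::3], params[1::3], params[2::3]):
            y += A * np.exp(-0.5 * ((E - E0) / s) ** 2)
        return y

    # Placeholder spectrum: photon energy axis (eV) and a noisy synthetic RL intensity.
    E = np.linspace(1.5, 4.5, 300)
    measured = gaussian_sum(E, 1.0, 3.44, 0.15, 0.6, 3.73, 0.12) + 0.02 * np.random.randn(E.size)

    # Initial guesses (amplitude, center in eV, width in eV) for each band, then fit.
    p0 = [1.0, 3.4, 0.2, 0.5, 3.7, 0.2]
    popt, _ = curve_fit(gaussian_sum, E, measured, p0=p0)
    print("Fitted band centers (eV):", popt[1::3])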