52 results for Minimum Entropy Deconvolution
Abstract:
We derive multiscale statistics for deconvolution in order to detect qualitative features of the unknown density. An important example covered within this framework is testing for local monotonicity on all scales simultaneously. We investigate the moderately ill-posed setting, where the Fourier transform of the error density in the deconvolution model is of polynomial decay. For multiscale testing, we consider a calibration motivated by the modulus of continuity of Brownian motion. We investigate the performance of our results from both a theoretical and a simulation-based point of view. A major consequence of our work is that the detection of qualitative features of a density in a deconvolution problem is a doable task, even though the minimax rates for pointwise estimation are very slow.
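To make the moderately ill-posed setting concrete, the sketch below (not the authors' multiscale procedure; kernel, bandwidth and noise scale are illustrative assumptions) implements a standard deconvolution kernel density estimator for Laplace measurement error, whose characteristic function decays polynomially:

```python
import numpy as np

# Minimal sketch: deconvolution kernel density estimation for Y = X + eps,
# with eps ~ Laplace(0, b), so phi_eps(t) = 1 / (1 + b^2 t^2) decays
# polynomially (the moderately ill-posed case mentioned in the abstract).
def deconvolution_kde(y, x_grid, h, b=0.5):
    t = np.linspace(-1.0 / h, 1.0 / h, 2001)            # frequency grid where the kernel FT is supported
    phi_K = np.where(np.abs(t * h) <= 1, (1 - (t * h) ** 2) ** 3, 0.0)  # smooth kernel FT
    phi_emp = np.exp(1j * np.outer(t, y)).mean(axis=1)  # empirical characteristic function of Y
    phi_eps = 1.0 / (1.0 + (b * t) ** 2)                # Laplace error characteristic function
    integrand = phi_K * phi_emp / phi_eps                # divide out the error, damp high frequencies
    dt = t[1] - t[0]
    est = np.real(np.exp(-1j * np.outer(x_grid, t)) @ integrand) * dt / (2 * np.pi)
    return np.clip(est, 0.0, None)

# toy usage: recover a standard normal density observed with Laplace noise
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x + rng.laplace(scale=0.5, size=x.size)
grid = np.linspace(-4, 4, 201)
f_hat = deconvolution_kde(y, grid, h=0.4)
```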
Abstract:
Localized Magnetic Resonance Spectroscopy (MRS) is in widespread use for clinical brain research. Standard acquisition sequences to obtain one-dimensional spectra suffer from substantial overlap of spectral contributions from many metabolites. Therefore, specially tuned editing sequences or two-dimensional acquisition schemes are applied to extend the information content. Tuning specific acquisition parameters allows the sequences to be made more efficient or more specific for certain target metabolites. Cramér-Rao bounds have been used in other fields for the optimization of experiments and are shown here to be very useful as design criteria for localized MRS sequence optimization. The principle is illustrated for one- and two-dimensional MRS, in particular the 2D separation experiment, where the usual restriction to equidistant echo time spacings and equal acquisition times per echo time can be abolished. Particular emphasis is placed on optimizing experiments for the quantification of GABA and glutamate. The basic principles are verified by Monte Carlo simulations and in vivo for repeated acquisitions of generalized two-dimensional separation brain spectra obtained from healthy subjects and expanded by bootstrapping for a better definition of the quantification uncertainties.
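As an illustration of Cramér-Rao bounds used as a design criterion (a toy damped-sinusoid model, not the paper's metabolite basis or sequence model; noise-model constants are omitted), the sketch below compares two hypothetical sampling schemes by the bound on the amplitude error:

```python
import numpy as np

# Hedged sketch of the general idea: CRBs from the Jacobian of a simple
# damped sinusoid s(t) = a * exp(-d*t) * exp(2i*pi*f*t), used to rank
# acquisition settings by the achievable parameter precision.
def crb(t, a, d, f, sigma):
    e = np.exp(-d * t) * np.exp(2j * np.pi * f * t)
    J = np.column_stack([e,                        # ds/da
                         -a * t * e,               # ds/dd
                         2j * np.pi * t * a * e])  # ds/df
    fisher = np.real(J.conj().T @ J) / sigma ** 2
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# compare two hypothetical sampling schemes by the bound on the amplitude error
t_uniform = np.linspace(0.0, 0.5, 256)
t_dense_early = 0.5 * np.linspace(0.0, 1.0, 256) ** 2
for name, t in [("uniform", t_uniform), ("dense early", t_dense_early)]:
    sd_a, sd_d, sd_f = crb(t, a=1.0, d=10.0, f=50.0, sigma=0.05)
    print(f"{name:12s} CRB(amplitude) = {sd_a:.4f}")
```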
Abstract:
The aim of this work is to elucidate the impact of changes in solar irradiance and energetic particles versus volcanic eruptions on tropospheric global climate during the Dalton Minimum (DM, AD 1780–1840). Variations in (i) solar irradiance in the UV-C at wavelengths λ < 250 nm, (ii) irradiance at wavelengths λ > 250 nm, (iii) the energetic particle spectrum, and (iv) volcanic aerosol forcing were analyzed separately, and (v) in combination, by means of small ensemble calculations using a coupled atmosphere–ocean chemistry–climate model. Global and hemispheric mean surface temperatures show a significant dependence on solar irradiance at λ > 250 nm. Also, powerful volcanic eruptions in 1809, 1815, 1831 and 1835 significantly decreased global mean temperature by up to 0.5 K for 2–3 years after each eruption. However, while the volcanic effect is clearly discernible in the Southern Hemispheric mean temperature, it is less significant in the Northern Hemisphere, partly because the two largest volcanic eruptions occurred in the SH tropics and during seasons when the aerosols were mainly transported southward, and partly because of the higher northern internal variability. In the simulation including all forcings, temperatures are in reasonable agreement with the tree-ring-based temperature anomalies of the Northern Hemisphere. Interestingly, the model suggests that solar irradiance changes at λ < 250 nm and in the energetic particle spectra have only an insignificant impact on the climate during the Dalton Minimum. This reduces the importance of top–down processes (stemming from changes at λ < 250 nm) relative to bottom–up processes (from λ > 250 nm). Reduction of irradiance at λ > 250 nm leads to a significant (up to 2%) decrease in the ocean heat content (OHC) between 0 and 300 m in depth, whereas the changes in irradiance at λ < 250 nm or in energetic particles have virtually no effect. Also, volcanic aerosol yields a very strong response, reducing the OHC of the upper ocean by up to 1.5%. In the simulation with all forcings, the OHC of the uppermost levels recovers 8–15 years after a volcanic eruption, while the solar signal and the different volcanic eruptions dominate the OHC changes in the deeper ocean and prevent its recovery during the DM. Finally, the simulations suggest that the volcanic eruptions during the DM had a significant impact on the precipitation patterns, caused by a widening of the Hadley cell and a shift in the intertropical convergence zone.
Abstract:
In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem with only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution, by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, it is non-convex. Therefore, methods that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
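A minimal sketch of this kind of two-term energy and a majorization-minimization step (not the authors' algorithm; the blur kernel is held fixed here and all weights, step sizes and the toy data are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import convolve

# Energy: least-squares data term + lower-bounded log of the gradient norm.
def grad(u):
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def energy(u, k, f, lam=2e-3, eps=1e-3):
    gx, gy = grad(u)
    data = np.sum((convolve(u, k, mode="wrap") - f) ** 2)
    return data + lam * np.sum(np.log(eps + gx ** 2 + gy ** 2))

def mm_update(u, k, f, lam=2e-3, eps=1e-3, steps=30, tau=0.02):
    """One MM pass: the concave log term is majorized by a weighted quadratic
    with weights w = 1/(eps + |grad u|^2); the resulting convex surrogate is
    decreased by a few gradient-descent steps (conservative fixed step tau)."""
    gx, gy = grad(u)
    w = 1.0 / (eps + gx ** 2 + gy ** 2)          # fixed weights of the majorizer
    k_flip = k[::-1, ::-1]                        # adjoint of circular convolution
    for _ in range(steps):
        r = convolve(u, k, mode="wrap") - f
        gx, gy = grad(u)
        div = (w * gx - np.roll(w * gx, 1, axis=1)
               + w * gy - np.roll(w * gy, 1, axis=0))
        g = 2 * convolve(r, k_flip, mode="wrap") - 2 * lam * div
        u = u - tau * g
    return u

# toy usage on synthetic data
rng = np.random.default_rng(0)
u_true = rng.random((64, 64))
k = np.ones((5, 5)) / 25.0
f = convolve(u_true, k, mode="wrap") + 0.01 * rng.standard_normal((64, 64))
u = mm_update(f.copy(), k, f)
```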
Abstract:
We consider the problem of twenty questions with noisy answers, in which we seek to find a target by repeatedly choosing a set, asking an oracle whether the target lies in this set, and obtaining an answer corrupted by noise. Starting with a prior distribution on the target's location, we seek to minimize the expected entropy of the posterior distribution. We formulate this problem as a dynamic program and show that any policy optimizing the one-step expected reduction in entropy is also optimal over the full horizon. Two such Bayes optimal policies are presented: one generalizes the probabilistic bisection policy due to Horstein and the other asks a deterministic set of questions. We study the structural properties of the latter, and illustrate its use in a computer vision application.
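For illustration, a standard probabilistic bisection routine in the spirit of the Horstein policy the abstract generalizes (grid, noise level and question budget below are assumptions, not the paper's setup):

```python
import numpy as np

# Maintain a posterior over a grid, ask "is the target left of the posterior
# median?", and apply a Bayes update with a known answer-noise level.
def probabilistic_bisection(oracle, grid, p_correct=0.8, n_questions=30):
    post = np.full(grid.size, 1.0 / grid.size)        # prior on the target's location
    for _ in range(n_questions):
        median = grid[np.searchsorted(np.cumsum(post), 0.5)]
        answer = oracle(median)                        # noisy answer to "target <= median?"
        left = grid <= median
        like = np.where(left == answer, p_correct, 1.0 - p_correct)
        post *= like
        post /= post.sum()                             # expected entropy of post decreases each step
    return grid[np.argmax(post)], post

# toy usage: target at 0.37, answers flipped with probability 0.2
rng = np.random.default_rng(1)
target = 0.37
oracle = lambda m: (target <= m) ^ (rng.random() < 0.2)
grid = np.linspace(0.0, 1.0, 1001)
estimate, posterior = probabilistic_bisection(oracle, grid)
```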
Abstract:
Recent findings demonstrate that trees in deserts are efficient carbon sinks. It remains unknown, however, whether the Clean Development Mechanism will accelerate the planting of trees in Non-Annex I dryland countries. We estimated the price of carbon at which a farmer would be indifferent between his customary activity and planting trees to trade carbon credits, along an aridity gradient. Carbon yields were simulated with the CO2FIX v3.1 model for Pinus halepensis, with its respective yield classes along the gradient (arid conditions, 100 mm, to dry sub-humid conditions, 900 mm). Wheat and pasture yields were predicted with similar nitrogen-based quadratic models, using 30 years of weather data to simulate moisture stress. Stochastic production levels and input and output prices were then simulated in a Monte Carlo framework. Results show that, despite the high levels of carbon uptake, carbon trading by afforesting is unprofitable anywhere along the gradient. Indeed, the price of carbon would have to rise unrealistically high, and the certification costs would have to drop significantly, to make the Clean Development Mechanism worthwhile for farmers in Non-Annex I dryland countries. From a government agency's point of view the Clean Development Mechanism is attractive. However, such agencies will find it difficult to demonstrate “additionality”, even if the rule may be applied somewhat flexibly. Based on these findings, we further discuss why the Clean Development Mechanism, a supposedly pro-poor instrument, fails to assist farmers in Non-Annex I dryland countries living at a minimum subsistence level.
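To illustrate the indifference-price idea with invented numbers (none of the yields, costs or prices below come from the paper), a Monte Carlo sketch comparing the net present value of the customary crop with afforestation for carbon credits:

```python
import numpy as np

# All parameters are hypothetical placeholders for illustration only.
rng = np.random.default_rng(42)
years, draws, discount = 30, 10_000, 0.05
disc = (1 + discount) ** -np.arange(1, years + 1)     # discount factors

def npv_wheat():
    yield_t = rng.normal(2.0, 0.6, (draws, years)).clip(min=0)   # t/ha, moisture-stressed
    price_t = rng.normal(180.0, 30.0, (draws, years))            # $/t
    margin = yield_t * price_t - 150.0                           # minus input costs, $/ha
    return margin @ disc

def npv_forest(carbon_price, certification_cost=1500.0):
    uptake = rng.normal(3.0, 0.8, (draws, years)).clip(min=0)    # t CO2/ha/yr
    return (uptake * carbon_price) @ disc - certification_cost

# break-even: smallest carbon price at which afforestation matches wheat on average
wheat = npv_wheat().mean()
for price in np.arange(5, 200, 5):
    if npv_forest(price).mean() >= wheat:
        print(f"indifference carbon price ≈ ${price}/t CO2")
        break
```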
Abstract:
In this article, the realization of a global terrestrial reference system (TRS) based on a consistent combination of Global Navigation Satellite System (GNSS) and Satellite Laser Ranging (SLR) data is studied. Our input data consist of normal equation systems from 17 years (1994–2010) of homogeneously reprocessed GPS, GLONASS and SLR data. This effort used common state-of-the-art reduction models and the same processing software (Bernese GNSS Software) to ensure the highest consistency when combining GNSS and SLR. Residual surface load deformations are modeled with a spherical harmonic approach. The estimated degree-1 surface load coefficients have a strong annual signal, for which the GNSS- and SLR-only solutions show very similar results. A combination including these coefficients reduces systematic uncertainties in comparison to the single-technique solutions. In particular, uncertainties due to solar radiation pressure modeling in the coefficient time series can be reduced by up to 50% in the GNSS+SLR solution compared to the GNSS-only solution. In contrast to the ITRF2008 realization, no local ties are used to combine the different geodetic techniques. We combine the pole coordinates as global ties and apply minimum constraints to define the geodetic datum. We show that a common origin, scale and orientation can be reliably realized from our combination strategy, in comparison to the ITRF2008.
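A simplified sketch of the minimum-constraint idea mentioned above (not the Bernese GNSS Software implementation; constraint weighting, sign conventions and the network are illustrative assumptions): a datum-deficient normal equation system for station coordinate corrections is regularized by requiring no net translation, rotation or scale change with respect to the approximate coordinates.

```python
import numpy as np

def helmert_design(x0):
    """Rows of the (7 x 3n) matrix of Helmert-parameter partials at the
    approximate station coordinates x0 (n x 3 array)."""
    n = x0.shape[0]
    B = np.zeros((7, 3 * n))
    for i, (x, y, z) in enumerate(x0):
        c = slice(3 * i, 3 * i + 3)
        B[0:3, c] = np.eye(3)          # translations
        B[3, c] = [0.0, -z, y]         # rotation about X
        B[4, c] = [z, 0.0, -x]         # rotation about Y
        B[5, c] = [-y, x, 0.0]         # rotation about Z
        B[6, c] = [x, y, z]            # scale
    return B

def solve_minimum_constraints(N, b, x0, sigma=1e-4):
    """Solve N dx = b with the minimum constraint B dx = 0 added as a
    heavily weighted pseudo-observation."""
    B = helmert_design(x0)
    N_c = N + B.T @ B / sigma ** 2
    return np.linalg.solve(N_c, b)
```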