35 results for Macadamia kernel
Abstract:
This paper proposes an automated three-dimensional (3D) lumbar intervertebral disc (IVD) segmentation strategy for Magnetic Resonance Imaging (MRI) data. Starting from two user-supplied landmarks, the geometrical parameters of all lumbar vertebral bodies and intervertebral discs are automatically extracted from a mid-sagittal slice using a graphical-model-based template-matching approach. Based on the estimated two-dimensional (2D) geometrical parameters, a 3D variable-radius soft-tube model of the lumbar spine column is built by fitting the model to the 3D data volume. Using the geometrical information from the 3D lumbar spine column as constraints and as the segmentation initialization, disc segmentation is achieved by a multi-kernel diffeomorphic registration between a 3D template of the disc and the observed MRI data. Experiments on 15 patient data sets demonstrated the robustness and accuracy of the proposed algorithm.
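The multi-kernel diffeomorphic registration step is described only at a high level. A common way to realize a multi-scale kernel in diffeomorphic registration is a weighted sum of Gaussian kernels at several spatial scales; the short Python sketch below illustrates that construction under this assumption. The function name multi_kernel and the chosen scales and weights are illustrative, not taken from the paper.

import numpy as np

def multi_kernel(x, y, sigmas=(8.0, 4.0, 2.0), weights=(0.5, 0.3, 0.2)):
    """Sum-of-Gaussians kernel matrix between 3D point sets x (n, 3) and y (m, 3)."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    k = np.zeros_like(d2)
    for w, s in zip(weights, sigmas):
        k += w * np.exp(-d2 / (2.0 * s ** 2))                 # one Gaussian per spatial scale
    return k

# Example: smoothed velocity field at template points induced by (hypothetical) momenta.
pts = np.random.rand(200, 3) * 40.0           # points on a 3D disc template, in mm
momenta = np.random.randn(200, 3) * 0.1       # registration momentum vectors
velocity = multi_kernel(pts, pts) @ momenta   # smoothed velocity field, shape (200, 3)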
Abstract:
Blind deconvolution consists in estimating a sharp image and a blur kernel from an observed blurry image. Because the blur model admits several solutions, it is necessary to devise an image prior that favors the true blur kernel and sharp image. Many successful image priors enforce the sparsity of the sharp image gradients. Ideally, the L0 “norm” is the best choice for promoting sparsity, but because it is computationally intractable, some methods have used a logarithmic approximation. In this work we also study a logarithmic image prior. We show empirically how well the prior suits the blind deconvolution problem. Our analysis confirms experimentally the hypothesis that a prior need not model natural image statistics to correctly estimate the blur kernel. Furthermore, we show that a simple Maximum a Posteriori formulation is enough to achieve state-of-the-art results. To minimize this formulation we devise two iterative minimization algorithms that cope with the non-convexity of the logarithmic prior: one obtained via a primal-dual approach and one via majorization-minimization.
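As a concrete illustration of the majorization-minimization option mentioned above, the sketch below shows an image-update step for a logarithmic gradient prior: the log term is majorized by a quadratic at the previous iterate, so each outer step becomes an iteratively reweighted least-squares problem. This is a minimal sketch under simplifying assumptions (known blur kernel, plain gradient descent as the inner solver, illustrative step sizes); it is not the authors' algorithm.

import numpy as np
from scipy.signal import fftconvolve

def grad(u):
    # forward differences with circular boundary
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def div(gx, gy):
    # negative adjoint of the forward-difference gradient above
    return (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))

def mm_deconv(y, k, lam=2e-3, eps=1e-3, outer=10, inner=50, step=0.05):
    """Image update for the prior sum(log(|grad u|^2 + eps)), with a known kernel k."""
    u = y.copy()
    k_flip = k[::-1, ::-1]
    for _ in range(outer):
        gx, gy = grad(u)
        w = 1.0 / (gx ** 2 + gy ** 2 + eps)        # MM weights frozen at the previous iterate
        for _ in range(inner):                     # gradient descent on the quadratic majorizer
            r = fftconvolve(u, k, mode="same") - y
            gx, gy = grad(u)
            g = fftconvolve(r, k_flip, mode="same") - lam * div(w * gx, w * gy)
            u -= step * g
    return u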
Abstract:
This package includes various Mata functions. kern(): various kernel functions; kint(): kernel integral functions; kdel0(): canonical bandwidth of kernel; quantile(): quantile function; median(): median; iqrange(): inter-quartile range; ecdf(): cumulative distribution function; relrank(): grade transformation; ranks(): ranks/cumulative frequencies; freq(): compute frequency counts; histogram(): produce histogram data; mgof(): multinomial goodness-of-fit tests; collapse(): summary statistics by subgroups; _collapse(): summary statistics by subgroups; gini(): Gini coefficient; sample(): draw random sample; srswr(): SRS with replacement; srswor(): SRS without replacement; upswr(): UPS with replacement; upswor(): UPS without replacement; bs(): bootstrap estimation; bs2(): bootstrap estimation; bs_report(): report bootstrap results; jk(): jackknife estimation; jk_report(): report jackknife results; subset(): obtain subsets, one at a time; composition(): obtain compositions, one by one; ncompositions(): determine number of compositions; partition(): obtain partitions, one at a time; npartitionss(): determine number of partitions; rsubset(): draw random subset; rcomposition(): draw random composition; colvar(): variance, by column; meancolvar(): mean and variance, by column; variance0(): population variance; meanvariance0(): mean and population variance; mse(): mean squared error; colmse(): mean squared error, by column; sse(): sum of squared errors; colsse(): sum of squared errors, by column; benford(): Benford distribution; cauchy(): cumulative Cauchy-Lorentz dist.; cauchyden(): Cauchy-Lorentz density; cauchytail(): reverse cumulative Cauchy-Lorentz; invcauchy(): inverse cumulative Cauchy-Lorentz; rbinomial(): generate binomial random numbers; cebinomial(): cond. expect. 
of binomial r.v.; root(): Brent's univariate zero finder; nrroot(): Newton-Raphson zero finder; finvert(): univariate function inverter; integrate_sr(): univariate function integration (Simpson's rule); integrate_38(): univariate function integration (Simpson's 3/8 rule); ipolate(): linear interpolation; polint(): polynomial inter-/extrapolation; plot(): Draw twoway plot; _plot(): Draw twoway plot; panels(): identify nested panel structure; _panels(): identify panel sizes; npanels(): identify number of panels; nunique(): count number of distinct values; nuniqrows(): count number of unique rows; isconstant(): whether matrix is constant; nobs(): number of observations; colrunsum(): running sum of each column; linbin(): linear binning; fastlinbin(): fast linear binning; exactbin(): exact binning; makegrid(): equally spaced grid points; cut(): categorize data vector; posof(): find element in vector; which(): positions of nonzero elements; locate(): search an ordered vector; hunt(): consecutive search; cond(): matrix conditional operator; expand(): duplicate single rows/columns; _expand(): duplicate rows/columns in place; repeat(): duplicate contents as a whole; _repeat(): duplicate contents in place; unorder2(): stable version of unorder(); jumble2(): stable version of jumble(); _jumble2(): stable version of _jumble(); pieces(): break string into pieces; npieces(): count number of pieces; _npieces(): count number of pieces; invtokens(): reverse of tokens(); realofstr(): convert string into real; strexpand(): expand string argument; matlist(): display a (real) matrix; insheet(): read spreadsheet file; infile(): read free-format file; outsheet(): write spreadsheet file; callf(): pass optional args to function; callf_setup(): setup for mm_callf(). See the illustration of linear binning after this list.
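The binning routines listed above (linbin(), fastlinbin(), exactbin()) are common building blocks for kernel density and regression estimators. As a language-neutral illustration of what linear binning computes (shown in Python rather than Mata, and not the package's code), each observation's weight is split between the two nearest grid points in proportion to their distances:

import numpy as np

def linear_binning(x, grid, weights=None):
    """Distribute the weights of data x onto an equally spaced grid (linear binning)."""
    if weights is None:
        weights = np.ones_like(x, dtype=float)
    step = grid[1] - grid[0]
    pos = (x - grid[0]) / step                        # fractional grid position of each point
    lo = np.clip(np.floor(pos).astype(int), 0, len(grid) - 2)
    frac = np.clip(pos - lo, 0.0, 1.0)                # distance to the lower grid point
    counts = np.zeros(len(grid))
    np.add.at(counts, lo, weights * (1.0 - frac))     # share assigned to the lower grid point
    np.add.at(counts, lo + 1, weights * frac)         # share assigned to the upper grid point
    return counts

# Example: bin 1,000 standard-normal draws onto a 21-point grid covering [-4, 4].
x = np.random.randn(1000)
grid = np.linspace(-4.0, 4.0, 21)
c = linear_binning(x, grid)   # c sums to (approximately) the number of in-range observations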
Abstract:
Blind deconvolution is the problem of recovering a sharp image and a blur kernel from a noisy blurry image. Recently, there has been a significant effort to understand the basic mechanisms for solving blind deconvolution. While this effort resulted in the deployment of effective algorithms, the theoretical findings generated contrasting views on why these approaches work. On the one hand, one can observe experimentally that alternating energy-minimization algorithms converge to the desired solution. On the other hand, it has been shown that such alternating minimization algorithms should fail to converge and that one should instead use a so-called Variational Bayes approach. To clarify this conundrum, recent work showed that a good image and blur prior is what actually makes a blind deconvolution algorithm work. Unfortunately, this analysis did not apply to algorithms based on total variation regularization. In this manuscript, we provide both analysis and experiments to obtain a clearer picture of blind deconvolution. Our analysis reveals precisely why an algorithm based on total variation works. We also introduce an implementation of this algorithm and show that, despite its extreme simplicity, it is very robust and achieves performance comparable to the top-performing algorithms.
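A minimal sketch of the kind of total-variation blind deconvolution scheme discussed above is given here: alternating projected gradient descent that updates the sharp image with a smoothed-TV prior, updates the blur kernel, and then constrains the kernel to be non-negative and sum to one. The circular (FFT) blur model, the clip-and-renormalize surrogate for the exact simplex projection, and all step sizes are simplifying assumptions made for illustration; this is not the paper's implementation.

import numpy as np

def fft_of_kernel(k, shape):
    # zero-pad the kernel into an image-sized array; anchoring it at the top-left
    # corner (no centering) shifts the restored image by about half the kernel size
    K = np.zeros(shape)
    K[:k.shape[0], :k.shape[1]] = k
    return np.fft.rfft2(K)

def tv_grad(u, eps=1e-3):
    # gradient of the smoothed total variation sum(sqrt(|grad u|^2 + eps))
    gx, gy = np.roll(u, -1, 1) - u, np.roll(u, -1, 0) - u
    n = np.sqrt(gx ** 2 + gy ** 2 + eps)
    px, py = gx / n, gy / n
    return -((px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0)))

def tv_blind_deconvolution(y, ksize=15, lam=1e-3, iters=500, su=1e-3, sk=1e-6):
    u = y.copy()
    k = np.full((ksize, ksize), 1.0 / ksize ** 2)       # flat initial kernel
    Y = np.fft.rfft2(y)
    for _ in range(iters):
        K = fft_of_kernel(k, y.shape)
        R = np.fft.rfft2(u) * K - Y                     # residual in the Fourier domain
        # image step: data-term gradient plus smoothed-TV gradient
        u -= su * (np.fft.irfft2(np.conj(K) * R, s=y.shape) + lam * tv_grad(u))
        # kernel step: cross-correlation of u with the residual, cropped to the support
        R = np.fft.rfft2(u) * K - Y
        gk = np.fft.irfft2(R * np.conj(np.fft.rfft2(u)), s=y.shape)[:ksize, :ksize]
        k = np.maximum(k - sk * gk, 0.0)                # non-negativity
        k /= k.sum() + 1e-12                            # renormalize to sum to one
    return u, k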
Abstract:
The FANOVA (or “Sobol’-Hoeffding”) decomposition of multivariate functions has been used for high-dimensional model representation and global sensitivity analysis. When the objective function f has no simple analytic form and is costly to evaluate, computing FANOVA terms may be unaffordable due to numerical integration costs. Several approximate approaches relying on Gaussian random field (GRF) models have been proposed to alleviate these costs, where f is substituted by a (kriging) predictor or by conditional simulations. Here we focus on FANOVA decompositions of GRF sample paths, and we notably introduce an associated kernel decomposition into 4^d terms, called KANOVA. An interpretation in terms of tensor product projections is obtained, and it is shown that projected kernels control both the sparsity of GRF sample paths and the dependence structure between FANOVA effects. Applications on simulated data show the relevance of the approach for designing new classes of covariance kernels dedicated to high-dimensional kriging.
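To make the 4^d count concrete, the sketch below illustrates (in Python, with the measure approximated by a uniform grid; not the paper's code) how a single one-dimensional kernel splits into four parts by centering it, or not, with respect to the measure in each of its two arguments; tensorizing this split over d input dimensions yields the 4^d KANOVA terms mentioned in the abstract.

import numpy as np

def sqexp(x, y, ell=0.2):
    # squared-exponential kernel matrix between 1D point sets x and y
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

t = np.linspace(0.0, 1.0, 200)           # grid approximating the Uniform[0, 1] measure
K = sqexp(t, t)                          # one-dimensional kernel matrix on the grid

m_x = K.mean(axis=1, keepdims=True)      # E_Y[k(x, Y)], one value per x (column vector)
m_y = K.mean(axis=0, keepdims=True)      # E_X[k(X, y)], one value per y (row vector)
m = K.mean()                             # E_{X,Y}[k(X, Y)]

K00 = np.full_like(K, m)                 # constant part
K10 = np.tile(m_x - m, (1, K.shape[1]))  # part depending on x only
K01 = np.tile(m_y - m, (K.shape[0], 1))  # part depending on y only
K11 = K - m_x - m_y + m                  # doubly centered part

# The four parts reconstruct the original kernel exactly on the grid.
assert np.allclose(K00 + K10 + K01 + K11, K)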