13 results for Iteration

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance: 10.00%

Abstract:

In this paper, the well-known method of frames approach to the signal decomposition problem is reformulated as a certain bilevel goal-attainment linear least squares problem. As a consequence, a numerically robust variant of the method, named approximating method of frames, is proposed on the basis of a certain minimal Euclidean norm approximating splitting pseudo-iteration-wise method.
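
The method of frames picks, among all coefficient vectors that reproduce the signal in an overcomplete dictionary, the one of minimal Euclidean norm. A minimal numpy sketch of that baseline (the dictionary F and signal s are illustrative, not from the paper):

```python
import numpy as np

# Overcomplete dictionary (frame): 4 atoms in R^3, so F @ c = s is underdetermined.
rng = np.random.default_rng(0)
F = rng.standard_normal((3, 4))
s = rng.standard_normal(3)

# Method-of-frames coefficients: the minimum Euclidean-norm solution of F c = s,
# given by the Moore-Penrose pseudoinverse.
c = np.linalg.pinv(F) @ s

# The decomposition reproduces the signal exactly (F has full row rank here),
# and c has minimal norm among all solutions.
print(np.allclose(F @ c, s))
```

The paper's contribution replaces this direct pseudoinverse with a numerically robust splitting iteration; the sketch only shows the minimum-norm target it approximates.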

Relevance: 10.00%

Abstract:

Reconstruction of shape and intensity from 2D x-ray images has drawn increasing attention. Previously introduced methods suffer from long computing times due to their iterative optimization and the need to generate digitally reconstructed radiographs (DRRs) within each iteration. In this paper, we propose a novel method which uses a patient-specific 3D surface model reconstructed from 2D x-ray images as a surrogate to obtain a patient-specific volumetric intensity reconstruction via partial least squares regression. No DRR generation is needed. The method was validated on 20 cadaveric proximal femurs in a leave-one-out study. Qualitative and quantitative results demonstrated the efficacy of the present method. Compared to existing work, the present method has the advantage of a much shorter computing time and can be applied to both DXA and conventional x-ray images, so it may hold the potential to be applied to routine clinical tasks such as total hip arthroplasty (THA).
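
The leave-one-out scheme mentioned above can be sketched in a few lines: each sample is held out once, a regressor is fit on the rest, and the held-out prediction error is recorded. Ordinary least squares stands in here for the paper's partial least squares regression, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))                  # e.g. 20 surface-model feature vectors
w_true = rng.standard_normal(5)
y = X @ w_true + 0.01 * rng.standard_normal(20)   # e.g. intensity summaries

errors = []
for i in range(len(y)):
    keep = np.arange(len(y)) != i                 # leave sample i out
    w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    errors.append(abs(X[i] @ w - y[i]))           # error on the held-out sample

print(f"mean leave-one-out error: {np.mean(errors):.4f}")
```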

Relevance: 10.00%

Abstract:

This paper addresses the issue of matching statistical and non-rigid shapes, and introduces an Expectation Conditional Maximization-based deformable shape registration (ECM-DSR) algorithm. Similar to previous works, we cast the statistical and non-rigid shape registration problem into a missing data framework and handle the unknown correspondences with Gaussian Mixture Models (GMM). The registration problem is then solved by fitting the GMM centroids to the data. But unlike previous works where equal isotropic covariances are used, our new algorithm uses heteroscedastic covariances whose values are iteratively estimated from the data. A previously introduced virtual observation concept is adopted here to simplify the estimation of the registration parameters. Based on this concept, we derive closed-form solutions to estimate parameters for statistical or non-rigid shape registrations in each iteration. Our experiments conducted on synthesized and real data demonstrate that the ECM-DSR algorithm has various advantages over existing algorithms.
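
The missing-correspondence idea reduces, in each E-step, to computing soft assignment probabilities of data points to GMM centroids. A minimal sketch of that step with a distinct isotropic variance per centroid (the heteroscedastic case the abstract describes; all numeric values are illustrative, and this is not the full ECM-DSR algorithm):

```python
import numpy as np

def responsibilities(points, centroids, variances):
    """E-step of a GMM fit: posterior probability that each data point was
    generated by each centroid, with a per-centroid isotropic variance."""
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (N, M)
    dim = points.shape[1]
    log_p = -0.5 * d2 / variances - 0.5 * dim * np.log(2 * np.pi * variances)
    log_p -= log_p.max(axis=1, keepdims=True)     # stabilize the softmax
    p = np.exp(log_p)
    return p / p.sum(axis=1, keepdims=True)       # rows sum to 1

pts = np.array([[0.0, 0.0], [5.0, 5.0]])
ctr = np.array([[0.1, 0.0], [5.0, 4.9]])
r = responsibilities(pts, ctr, np.array([0.5, 2.0]))
```

Given these responsibilities, the M-step fits the centroids (and, here, the variances) to the data, which is where the paper's closed-form parameter updates enter.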

Relevance: 10.00%

Abstract:

PURPOSE To determine the image quality of an iterative reconstruction (IR) technique in low-dose MDCT (LDCT) of the chest of immunocompromised patients in an intraindividual comparison with filtered back projection (FBP), and to evaluate the dose reduction capability. MATERIALS AND METHODS 30 chest LDCT scans were performed in immunocompromised patients (Brilliance iCT; 20 - 40 mAs; mean CTDIvol: 1.7 mGy). The raw data were reconstructed using FBP and the IR technique (iDose4™, Philips, Best, The Netherlands) at seven iteration levels. 30 routine-dose MDCT (RDCT) scans reconstructed with FBP served as controls (mean exposure: 116 mAs; mean CTDIvol: 7.6 mGy). Three blinded radiologists scored subjective image quality and lesion conspicuity. Quantitative parameters including CT attenuation and objective image noise (OIN) were determined. RESULTS In LDCT, high iDose4™ levels led to a significant decrease in OIN (subscapular muscle: 139.4 HU with FBP vs. 40.6 HU with level 7). The high iDose4™ levels provided significant improvements in image quality, artifact reduction and noise reduction compared to LDCT FBP images. The conspicuity of subtle lesions was limited in LDCT FBP images and improved significantly at high iDose4™ levels (> level 4). LDCT with iDose4™ level 6 was determined to be of equivalent image quality to RDCT with FBP. CONCLUSION iDose4™ substantially improves image quality and lesion conspicuity and reduces noise in low-dose chest CT. Compared to RDCT, high iDose4™ levels provide equivalent image quality in LDCT, suggesting a potential dose reduction of almost 80%.
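
Objective image noise is commonly quantified as the standard deviation of CT attenuation (in HU) inside a homogeneous region of interest. A sketch of that measurement on synthetic data, with noise levels chosen to mimic the values reported above (the ROIs here are simulated, not real CT data):

```python
import numpy as np

rng = np.random.default_rng(2)
roi_fbp = 50 + 139.4 * rng.standard_normal(1000)  # noisy FBP-like ROI (HU)
roi_ir = 50 + 40.6 * rng.standard_normal(1000)    # smoother IR-like ROI (HU)

mean_hu = roi_fbp.mean()          # mean CT attenuation in the ROI
oin_fbp = roi_fbp.std(ddof=1)     # objective image noise, FBP
oin_ir = roi_ir.std(ddof=1)       # objective image noise, iterative recon
print(f"OIN: FBP {oin_fbp:.1f} HU vs IR {oin_ir:.1f} HU")
```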

Relevance: 10.00%

Abstract:

OBJECTIVE This study aimed to develop a pathway bringing together current UK legislation, good clinical practice and appropriate management strategies that could be applied across a range of healthcare settings. METHODS The pathway was constructed by a multidisciplinary clinical team based in a busy Memory Assessment Service. A process of successive iteration was used to develop the pathway, with input and refinement provided via survey and small group meetings with individuals from a wide range of regional clinical networks and diverse clinical backgrounds, as well as discussion with mobility centres and the Forum of Mobility Centres, UK. RESULTS We present a succinct clinical pathway for patients with dementia, which provides a decision-making framework for how health professionals across a range of disciplines deal with patients with dementia who drive. CONCLUSIONS By integrating the latest guidance from diverse roles within older people's health services and key experts in the field, the resulting pathway reflects up-to-date policy and encompasses differing perspectives and good practice. It is potentially a generalisable pathway that can be easily adapted for use internationally by replacing UK legislation with local regulations. A limitation of this pathway is that it does not address the concern of mild cognitive impairment and how this condition relates to driving safety. © 2014 The Authors. International Journal of Geriatric Psychiatry published by John Wiley & Sons, Ltd.

Relevance: 10.00%

Abstract:

Two new approaches to quantitatively analyze diffuse diffraction intensities from faulted layer stacking are reported. The parameters of a probability-based growth model are determined with two iterative global optimization methods: a genetic algorithm (GA) and particle swarm optimization (PSO). The results are compared with those from a third global optimization method, a differential evolution (DE) algorithm [Storn & Price (1997). J. Global Optim. 11, 341–359]. The algorithm efficiencies in the early and late stages of iteration are compared. The accuracy of the optimized parameters improves with increasing size of the simulated crystal volume. The wall clock time for computing quite large crystal volumes can be kept within reasonable limits by the parallel calculation of many crystals (clones) generated for each model parameter set on a super- or grid computer. The faulted layer stacking in single crystals of trigonal three-pointed-star-shaped tris(bicyclo[2.1.1]hexeno)benzene molecules serves as an example for the numerical computations. Based on numerical values of seven model parameters (reference parameters), nearly noise-free reference intensities of 14 diffuse streaks were simulated from 1280 clones, each consisting of 96 000 layers (reference crystal). The parameters derived from the reference intensities with GA, PSO and DE were compared with the original reference parameters as a function of the simulated total crystal volume. The statistical distribution of structural motifs in the simulated crystals is in good agreement with that in the reference crystal. The results found with the growth model for layer stacking disorder are applicable to other disorder types and modeling techniques, Monte Carlo in particular.
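
One of the two optimizers named above, particle swarm optimization, can be sketched in a few lines: particles track their personal best and the swarm best, and velocities blend inertia with pulls toward both. The toy objective stands in for the diffuse-intensity misfit, and all coefficients are conventional defaults, not the paper's settings:

```python
import numpy as np

def pso(cost, bounds, n_particles=30, n_iter=100, seed=0):
    """Minimal particle swarm optimization over a box-constrained domain."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # swarm-best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, *x.shape))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f                      # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

best, best_f = pso(lambda p: ((p - 0.3) ** 2).sum(),
                   (np.full(3, -1.0), np.full(3, 1.0)))
```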

Relevance: 10.00%

Abstract:

Stepwise uncertainty reduction (SUR) strategies aim at constructing a sequence of points for evaluating a function f in such a way that the residual uncertainty about a quantity of interest progressively decreases to zero. Using such strategies in the framework of Gaussian process modeling has been shown to be efficient for estimating the volume of excursion of f above a fixed threshold. However, SUR strategies remain cumbersome to use in practice because of their high computational complexity, and the fact that they deliver a single point at each iteration. In this article we introduce several multipoint sampling criteria, allowing the selection of batches of points at which f can be evaluated in parallel. Such criteria are of particular interest when f is costly to evaluate and several CPUs are simultaneously available. We also manage to drastically reduce the computational cost of these strategies through the use of closed form formulas. We illustrate their performances in various numerical experiments, including a nuclear safety test case. Basic notions about kriging, auxiliary problems, complexity calculations, R code, and data are available online as supplementary materials.
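
The quantity of interest here, the excursion volume, can be estimated from the Gaussian process posterior by averaging pointwise excursion probabilities over a grid. A minimal sketch (the posterior means and standard deviations are illustrative, not from a fitted model):

```python
import math

def excursion_probability(mu, sd, threshold):
    """P(f(x) > t) under a Gaussian posterior with mean mu and std sd at x."""
    z = (threshold - mu) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Estimated excursion volume: average the pointwise probabilities over a grid
# of candidate points (here just 3, with made-up posterior values).
posterior = [(0.2, 0.1), (0.9, 0.2), (1.5, 0.3)]   # (mean, sd) at grid points
threshold = 1.0
volume = sum(excursion_probability(m, s, threshold)
             for m, s in posterior) / len(posterior)
```

A SUR strategy would then pick the next evaluation point (or, with the multipoint criteria of this article, a batch of points) to reduce the residual uncertainty of this volume estimate.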

Relevance: 10.00%

Abstract:

We highlight that the connection between well-foundedness and recursive definitions is more than just convenience. While the consequences of making well-foundedness a sufficient condition for the existence of hierarchies (of various complexity) have been extensively studied, we point out that (if parameters are allowed) well-foundedness is a necessary condition for the existence of hierarchies, e.g. that even in an intuitionistic setting (Π⁰₁-CA₀)_α ⊢ wf(α), where (Π⁰₁-CA₀)_α stands for the iteration of Π⁰₁ comprehension (with parameters) along some ordinal α and wf(α) stands for the well-foundedness of α.

Relevance: 10.00%

Abstract:

In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem where there are only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, this prior is non-convex. Therefore, solutions that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
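
The two-term energy described above can be sketched directly; a 1D stand-in (the paper works on 2D images, and eps and lam are illustrative constants) shows how the lower-bounded log prior favors sparse gradients:

```python
import numpy as np

def blind_deconv_energy(signal, kernel, observed, eps=1e-3, lam=0.01):
    """Minimalistic blind-deconvolution energy (1D stand-in): a least-squares
    data-fidelity term plus a lower-bounded logarithm of gradient magnitudes."""
    fidelity = np.sum((np.convolve(signal, kernel, mode="same") - observed) ** 2)
    grad = np.diff(signal)                        # forward-difference gradients
    prior = np.sum(np.log(eps + np.abs(grad)))    # eps bounds the log from below
    return fidelity + lam * prior

# A piecewise-constant signal (one nonzero gradient) scores far lower than an
# oscillating one of the same range, both on fidelity and on the prior.
u_flat = np.concatenate([np.zeros(50), np.ones(50)])
u_osc = np.tile([0.0, 1.0], 50)
k = np.array([0.25, 0.5, 0.25])
g = np.convolve(u_flat, k, mode="same")           # simulated blurry observation
```

Minimizing this non-convex energy is where the paper's primal-dual and majorization-minimization schemes come in; the sketch only evaluates the objective.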

Relevance: 10.00%

Abstract:

We investigate parallel algorithms for the solution of the Navier–Stokes equations in space-time. For periodic solutions, the discretized problem can be written as a large non-linear system of equations. This system of equations is solved by a Newton iteration. The Newton correction is computed using a preconditioned GMRES solver. The parallel performance of the algorithm is illustrated.
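
The Newton-with-GMRES-correction pattern can be sketched on a toy nonlinear system; SciPy's `newton_krylov` (with a GMRES inner solver) stands in for the paper's solver, and the system below is illustrative, not a Navier-Stokes discretization:

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(x):
    """F(x) = 0 with F(x) = x^3 + A x - b (elementwise cube): a small
    diagonally dominant nonlinear system standing in for the space-time
    discretization."""
    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    return x ** 3 + A @ x - b

# Each Newton correction is computed by a (matrix-free) GMRES solve on the
# Jacobian system, approximated internally by finite differences.
x = newton_krylov(residual, np.zeros(3), method="gmres", f_tol=1e-9)
```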

Relevance: 10.00%

Abstract:

This paper presents a non-rigid free-form 2D-3D registration approach using a statistical deformation model (SDM). In our approach the SDM is first constructed from a set of training data using a non-rigid registration algorithm based on B-spline free-form deformation to encode a priori information about the underlying anatomy. A novel intensity-based non-rigid 2D-3D registration algorithm is then presented to iteratively fit the 3D B-spline-based SDM to the 2D X-ray images of an unseen subject, which requires a computationally expensive inversion of the instantiated deformation in each iteration. In this paper, we propose to solve this challenge with a fast B-spline pseudo-inversion algorithm that is implemented on a graphics processing unit (GPU). Experiments conducted on C-arm and X-ray images of cadaveric femurs demonstrate the efficacy of the present approach.
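
The paper's GPU B-spline pseudo-inversion is not reproduced here, but the underlying task, inverting a deformation, is commonly solved by a fixed-point iteration on the displacement field. A 1D sketch of that generic scheme (grid and displacement are illustrative):

```python
import numpy as np

def invert_displacement(u, x, n_iter=50):
    """Fixed-point inversion of a 1D displacement field: seek v with
    v(y) = -u(y + v(y)), so that (x + u) followed by (+ v) is the identity."""
    v = np.zeros_like(x)
    for _ in range(n_iter):
        v = -np.interp(x + v, x, u)   # evaluate u at the displaced positions
    return v

x = np.linspace(0.0, 1.0, 200)
u = 0.05 * np.sin(2 * np.pi * x)      # small smooth forward displacement
v = invert_displacement(u, x)

# Composing forward and inverse displacements should roughly return x:
roundtrip = x + u + np.interp(x + u, x, v)
```

The iteration converges when the displacement gradients are small (a contraction); for large deformations more careful schemes are needed, which is the regime the paper's dedicated algorithm targets.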

Relevance: 10.00%

Abstract:

PURPOSE To investigate whether the effects of hybrid iterative reconstruction (HIR) on coronary artery calcium (CAC) measurements using the Agatston score lead to changes in assignment of patients to cardiovascular risk groups compared to filtered back projection (FBP). MATERIALS AND METHODS 68 patients (mean age 61.5 years; 48 male; 20 female) underwent prospectively ECG-gated, non-enhanced, cardiac 256-MSCT for coronary calcium scoring. Scanning parameters were as follows: tube voltage, 120 kV; mean tube current-time product, 63.67 mAs (50 - 150 mAs); collimation, 2 × 128 × 0.625 mm. Images were reconstructed with FBP and with HIR at all levels (L1 to L7). Two independent readers measured Agatston scores of all reconstructions and assigned patients to cardiovascular risk groups. Scores of HIR and FBP reconstructions were correlated (Spearman). Interobserver agreement and variability were assessed with κ-statistics and Bland-Altman plots. RESULTS Agatston scores of HIR reconstructions were closely correlated with FBP reconstructions (L1, R = 0.9996; L2, R = 0.9995; L3, R = 0.9991; L4, R = 0.986; L5, R = 0.9986; L6, R = 0.9987; and L7, R = 0.9986). In comparison to FBP, HIR reduced Agatston scores to between 97% (L1) and 87.4% (L7) of the FBP values. Using HIR levels L1 - L3, all patients were assigned to identical risk groups as after FBP reconstruction. In 5.4% of patients the risk group after HIR with the maximum iteration level differed from the group after FBP reconstruction. CONCLUSION There was an excellent correlation of Agatston scores after HIR and FBP, with identical risk group assignment at levels 1 - 3 for all patients. Hence it appears that the application of HIR in routine calcium scoring does not entail any disadvantages. Future studies are needed to demonstrate whether HIR is a reliable method for reducing radiation dose in coronary calcium scoring.
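
Why a score reduction can change risk group assignment is easy to see with the commonly used Agatston risk strata (the thresholds below reflect common practice and are not taken from this study's protocol):

```python
def agatston_risk_group(score):
    """Map an Agatston calcium score to a commonly used risk category
    (illustrative thresholds, not this study's protocol)."""
    if score == 0:
        return "no identifiable calcification"
    if score <= 10:
        return "minimal"
    if score <= 100:
        return "mild"
    if score <= 400:
        return "moderate"
    return "severe"

# A score reduced to 87.4% of its FBP value can cross a category boundary:
fbp_score = 105
hir_score = round(fbp_score * 0.874)   # drops below the 100-point threshold
```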

Relevance: 10.00%

Abstract:

This work deals with parallel optimization of expensive objective functions which are modelled as sample realizations of Gaussian processes. The study is formalized as a Bayesian optimization problem, or continuous multi-armed bandit problem, where a batch of q > 0 arms is pulled in parallel at each iteration. Several algorithms have been developed for choosing batches by trading off exploitation and exploration. As of today, the maximum Expected Improvement (EI) and Upper Confidence Bound (UCB) selection rules appear as the most prominent approaches for batch selection. Here, we build upon recent work on the multipoint Expected Improvement criterion, for which an analytic expansion relying on Tallis' formula was recently established. As the computational burden of this selection rule is still an issue in applications, we derive a closed-form expression for the gradient of the multipoint Expected Improvement, which aims at facilitating its maximization using gradient-based ascent algorithms. Substantial computational savings are shown in applications. In addition, our algorithms are tested numerically and compared to state-of-the-art UCB-based batch-sequential algorithms. Combining starting designs relying on UCB with gradient-based EI local optimization finally appears as a sound option for batch design in distributed Gaussian process optimization.
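
For reference, the single-point Expected Improvement, which the multipoint criterion above generalizes to batches, has a well-known closed form under a Gaussian posterior; a minimal sketch for maximization (posterior values are illustrative):

```python
import math

def expected_improvement(mu, sd, best):
    """Closed-form single-point EI for maximization: with z = (mu - best)/sd,
    EI = (mu - best) * Phi(z) + sd * phi(z), Phi/phi the standard normal
    CDF/PDF. The multipoint criterion extends this to a batch of q points."""
    if sd <= 0.0:
        return max(mu - best, 0.0)
    z = (mu - best) / sd
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (mu - best) * Phi + sd * phi

# Higher posterior uncertainty raises EI when the means are equal:
ei_low = expected_improvement(1.0, 0.1, best=1.2)
ei_high = expected_improvement(1.0, 0.5, best=1.2)
```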