993 results for iterative methods


Relevance: 30.00%

Abstract:

OBJECTIVE: To compare image quality of a standard-dose (SD) and a low-dose (LD) cervical spine CT protocol using filtered back-projection (FBP) and iterative reconstruction (IR). MATERIALS AND METHODS: Forty patients investigated by cervical spine CT were prospectively randomised into two groups: SD (120 kVp, 275 mAs) and LD (120 kVp, 150 mAs), both applying automatic tube current modulation. Data were reconstructed using both FBP and sinogram-affirmed IR. Image noise, signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were measured. Two radiologists independently and blindly assessed the following anatomical structures at C3-C4 and C6-C7 levels, using a four-point scale: intervertebral disc, content of neural foramina and dural sac, ligaments, soft tissues and vertebrae. They subsequently rated overall image quality using a ten-point scale. RESULTS: For both protocols and at each disc level, IR significantly decreased image noise and increased SNR and CNR, compared with FBP. SNR and CNR were statistically equivalent in LD-IR and SD-FBP protocols. Regardless of the dose and disc level, the qualitative scores with IR compared with FBP, and with LD-IR compared with SD-FBP, were significantly higher or not statistically different for intervertebral discs, neural foramina and ligaments, while significantly lower or not statistically different for soft tissues and vertebrae. The overall image quality scores were significantly higher with IR compared with FBP, and with LD-IR compared with SD-FBP. CONCLUSION: LD-IR cervical spine CT provides better image quality for intervertebral discs, neural foramina and ligaments, and worse image quality for soft tissues and vertebrae, compared with SD-FBP, while reducing radiation dose by approximately 40 %.
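
For reference, the noise, SNR and CNR values reported in studies of this kind are conventionally computed from region-of-interest (ROI) statistics; the abstract does not state its exact formulas, so the definitions below are the usual ones rather than necessarily the authors':

```latex
% Conventional ROI-based image-quality metrics (assumed definitions):
% mu and sigma are the mean and standard deviation of CT numbers (HU) in an ROI.
\[
\text{noise} = \sigma_{\text{ROI}}, \qquad
\text{SNR} = \frac{\mu_{\text{ROI}}}{\sigma_{\text{ROI}}}, \qquad
\text{CNR} = \frac{\mu_{\text{tissue}} - \mu_{\text{reference}}}{\sigma_{\text{reference}}}.
\]
```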

Relevance: 30.00%

Abstract:

BACKGROUND: The potential effects of ionizing radiation are of particular concern in children. The model-based iterative reconstruction VEO™ is a technique commercialized to improve image quality and reduce noise compared with the filtered back-projection (FBP) method. OBJECTIVE: To evaluate the potential of VEO™ for diagnostic image quality and dose reduction in pediatric chest CT examinations. MATERIALS AND METHODS: Twenty children (mean age 11.4 years) with cystic fibrosis underwent either a standard CT or a moderately reduced-dose CT plus a minimum-dose CT performed at 100 kVp. Reduced-dose CT examinations consisted of two consecutive acquisitions: one moderately reduced-dose CT with increased noise index (NI = 70) and one minimum-dose CT at a CTDIvol of 0.14 mGy. Standard CTs were reconstructed using the FBP method, while low-dose CTs were reconstructed using FBP and VEO. Two senior radiologists evaluated diagnostic image quality independently by scoring anatomical structures using a four-point scale (1 = excellent, 2 = clear, 3 = diminished, 4 = non-diagnostic). Standard deviation (SD) and signal-to-noise ratio (SNR) were also computed. RESULTS: At moderately reduced doses, VEO images had significantly lower SD (P < 0.001) and higher SNR (P < 0.05) in comparison to filtered back-projection images. Further improvements were obtained at minimum-dose CT. The best diagnostic image quality was obtained with VEO at minimum-dose CT for the small structures (subpleural vessels and lung fissures) (P < 0.001). The potential for dose reduction was dependent on the diagnostic task because of the modification of the image texture produced by this reconstruction. CONCLUSIONS: At minimum-dose CT, VEO enables substantial dose reduction depending on the clinical indication and makes visible certain small structures that were not perceptible with filtered back-projection.

Relevance: 30.00%

Abstract:

Differential X-ray phase-contrast tomography (DPCT) refers to a class of promising methods for reconstructing the X-ray refractive index distribution of materials that present weak X-ray absorption contrast. The tomographic projection data in DPCT, from which an estimate of the refractive index distribution is reconstructed, correspond to one-dimensional (1D) derivatives of the two-dimensional (2D) Radon transform of the refractive index distribution. There is an important need for the development of iterative image reconstruction methods for DPCT that can yield useful images from few-view projection data, thereby mitigating the long data-acquisition times and large radiation doses associated with use of analytic reconstruction methods. In this work, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction in DPCT. We also investigate the use of one of the models with a modern image reconstruction algorithm for performing few-view image reconstruction of a tissue specimen.
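
A minimal statement of the imaging model described above, with f denoting the refractive-index distribution (the notation is assumed here, not taken from the paper):

```latex
% Parallel-beam 2D Radon transform of the refractive-index distribution f:
\[
(\mathcal{R}f)(\theta, s) \;=\; \int_{\mathbb{R}^2} f(\mathbf{r})\,
\delta\!\left(s - \mathbf{r}\cdot\hat{\boldsymbol{\theta}}\right)\,\mathrm{d}\mathbf{r},
\]
% DPCT records, up to a constant, the 1D derivative of these projections:
\[
g(\theta, s) \;\propto\; \frac{\partial}{\partial s}\,(\mathcal{R}f)(\theta, s).
\]
```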

Relevance: 30.00%

Abstract:

PURPOSE: To combine weighted iterative reconstruction with self-navigated free-breathing coronary magnetic resonance angiography for retrospective reduction of respiratory motion artifacts. METHODS: One-dimensional self-navigation was improved for robust respiratory motion detection and the consistency of the acquired data was estimated on the detected motion. Based on the data consistency, the data fidelity term of iterative reconstruction was weighted to reduce the effects of respiratory motion. In vivo experiments were performed in 14 healthy volunteers and the resulting image quality of the proposed method was compared to a navigator-gated reference in terms of acquisition time, vessel length, and sharpness. RESULT: Although the sampling pattern of the proposed method contained 60% more samples with respect to the reference, the scan efficiency was improved from 39.5 ± 10.1% to 55.1 ± 9.1%. The improved self-navigation showed a high correlation to the standard navigator signal and the described weighting efficiently reduced respiratory motion artifacts. Overall, the average image quality of the proposed method was comparable to the navigator-gated reference. CONCLUSION: Self-navigated coronary magnetic resonance angiography was successfully combined with weighted iterative reconstruction to reduce the total acquisition time and efficiently suppress respiratory motion artifacts. The simplicity of the experimental setup and the promising image quality are encouraging toward future clinical evaluation. Magn Reson Med 73:1885-1895, 2015. © 2014 Wiley Periodicals, Inc.
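
One generic way to write the weighted data-fidelity term described above is the following weighted least-squares objective (a sketch with assumed notation; the paper's exact formulation may differ):

```latex
% Weighted iterative reconstruction: down-weight motion-corrupted k-space data.
\[
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}}\;
\big\| \mathbf{W}^{1/2} \big( \mathbf{E}\mathbf{x} - \mathbf{y} \big) \big\|_2^2
\;+\; \lambda\, R(\mathbf{x}),
\]
```

where E is the undersampled encoding operator, y the acquired k-space data, W a diagonal matrix of per-readout weights derived from the self-navigated respiratory motion estimate (large weights for consistent data, small weights for motion-corrupted data), and R an optional regularizer.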

Relevance: 30.00%

Abstract:

This Master’s Thesis examines knowledge creation and transfer processes in an iterative project environment. The aim is to understand how knowledge is created and transferred during an actual iterative implementation project which takes place in International Business Machines (IBM). The second aim is to create and develop new working methods that support more effective knowledge creation and transfer for future iterative implementation projects. The research methodology in this thesis is qualitative. Using focus group interviews as a research method provides qualitative information and introduces the experiences of the individuals participating in the project. This study found that the following factors affect knowledge creation and transfer in an iterative, multinational, and multi-organizational implementation project: shared vision and common goal, trust, open communication, social capital, and network density. All of these received both theoretical and empirical support. As for future projects, strengthening these factors was found to be the key for more effective knowledge creation and transfer.

Relevance: 30.00%

Abstract:

Because of rapid advances in technology and business, software requirements are often unclear and change continuously during development, which makes software development difficult. Traditional development methods such as the waterfall model are a poor fit, as they are inflexible to changing requirements and projects can run late and over budget. To develop high-quality software that satisfies the customer, organizations can instead use agile methods, which accommodate requirement changes at any stage of development. Agile methods are iterative and incremental: they accelerate the delivery of initial business value through continuous planning and feedback, and they maintain close communication between customers and developers. The main purpose of this thesis is to identify the problems in traditional software development and to show how agile methods reduce them. The study also examines the success factors of agile methods, the success rate of agile projects, and the comparison between traditional and agile software development.

Relevance: 30.00%

Abstract:

In this text, we present two stereo-based head tracking techniques along with a fast 3D model acquisition system. The first tracking technique is a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique. Our system can automatically segment and track a user's head under large rotation and illumination variations. Precision and usability of this approach are compared with previous tracking methods for cursor control and target selection in both desktop and interactive room environments. The second tracking technique is designed to improve the robustness of head pose tracking for fast movements. Our iterative hybrid tracker combines constraints from the ICP (Iterative Closest Point) algorithm and normal flow constraint. This new technique is more precise for small movements and noisy depth than ICP alone, and more robust for large movements than the normal flow constraint alone. We present experiments which test the accuracy of our approach on sequences of real and synthetic stereo images. The 3D model acquisition system we present quickly aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on our iterative hybrid tracker. We reconstruct the 3D model using a new Cubic Ray Projection merging algorithm which takes advantage of a novel data structure: the linked voxel space. We present experiments to test the accuracy of our approach on 3D face modelling using real-time stereo images.
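
The ICP component referred to above can be sketched in a few lines. The following is a generic nearest-neighbour ICP with a Kabsch/SVD rigid fit (NumPy only); it is not the paper's hybrid tracker, and the normal flow constraint term is omitted:

```python
# Generic ICP sketch: alternate nearest-neighbour matching and rigid re-fitting.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, n_iter=20):
    """Iteratively match each source point to its nearest target point and re-fit."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        # Brute-force nearest-neighbour correspondences (fine for small clouds).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        matched = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

A production tracker would replace the brute-force matching with a k-d tree; the hybrid scheme described above additionally constrains the motion estimate with the normal flow equations.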

Relevance: 30.00%

Abstract:

The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
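
As a concrete illustration of the truncated variant discussed above, the sketch below runs a fixed, small number of conjugate-gradient steps on the normal equations of each inner linearized problem instead of solving it exactly (a generic setup under assumed interfaces; the residual and Jacobian are user-supplied):

```python
# Truncated Gauss-Newton sketch for min_x ||r(x)||^2: the inner problem
# min_dx ||J dx + r|| is solved only approximately with a few CG iterations.
import numpy as np

def cg_normal_equations(J, r, n_inner):
    """Approximately solve J^T J dx = -J^T r with n_inner conjugate-gradient steps."""
    dx = np.zeros(J.shape[1])
    res = -J.T @ r                      # residual of the normal equations at dx = 0
    p = res.copy()
    for _ in range(n_inner):
        Jp = J @ p
        denom = Jp @ Jp
        if denom == 0.0:                # already at the inner minimizer
            break
        alpha = (res @ res) / denom
        dx += alpha * p
        res_new = res - alpha * (J.T @ Jp)
        beta = (res_new @ res_new) / (res @ res)
        p = res_new + beta * p
        res = res_new
    return dx

def truncated_gauss_newton(residual, jacobian, x0, n_outer=10, n_inner=5):
    """Outer Gauss-Newton loop with an inexact (truncated) inner solve."""
    x = x0.astype(float)
    for _ in range(n_outer):
        r, J = residual(x), jacobian(x)
        x = x + cg_normal_equations(J, r, n_inner)
    return x
```

The perturbed variant would instead replace the true linearized operator J by a simplified approximation while keeping the outer loop unchanged.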

Relevance: 30.00%

Abstract:

Bossel's (2001) systems-based approach for deriving comprehensive indicator sets provides one of the most holistic frameworks for developing sustainability indicators. It ensures that indicators cover all important aspects of system viability, performance, and sustainability, and recognizes that a system cannot be assessed in isolation from the systems upon which it depends and which in turn depend upon it. In this reply, we show how Bossel's approach is part of a wider convergence toward integrating participatory and reductionist approaches to measure progress toward sustainable development. However, we also show that further integration of these approaches may be able to improve the accuracy and reliability of indicators to better stimulate community learning and action. Only through active community involvement can indicators facilitate progress toward sustainable development goals. To engage communities effectively in the application of indicators, these communities must be actively involved in developing, and even in proposing, indicators. The accuracy, reliability, and sensitivity of the indicators derived from local communities can be ensured through an iterative process of empirical and community evaluation. Communities are unlikely to invest in measuring sustainability indicators unless monitoring provides immediate and clear benefits. However, in the context of goals, targets, and/or baselines, sustainability indicators can more effectively contribute to a process of development that matches local priorities and engages the interests of local people.

Relevance: 30.00%

Abstract:

We consider conjugate-gradient-like methods for solving block symmetric indefinite linear systems that arise from saddle-point problems or, in particular, regularizations thereof. Such methods require preconditioners that preserve certain sub-blocks from the original systems but allow considerable flexibility for the remaining blocks. We construct a number of families of implicit factorizations that are capable of reproducing the required sub-blocks and (some of) the remainder. These generalize known implicit factorizations for the unregularized case. Improved eigenvalue clustering is possible if additionally some of the noncrucial blocks are reproduced. Numerical experiments confirm that these implicit-factorization preconditioners can be very effective in practice.
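
In the standard notation for regularized saddle-point systems (the symbols here are assumed, not taken from the paper), the linear system and a preconditioner of the kind described above take the form:

```latex
\[
\begin{pmatrix} H & A^{T} \\ A & -C \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} b \\ d \end{pmatrix},
\qquad
P \;=\; \begin{pmatrix} G & A^{T} \\ A & -C \end{pmatrix},
\]
```

where P reproduces the required off-diagonal blocks exactly while replacing H by a simpler matrix G; the implicit factorizations mentioned above construct P as a product of easily invertible factors rather than assembling and factorizing it directly.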

Relevance: 30.00%

Abstract:

An alternative approach to research is described that has been developed through a succession of significant construction management research projects. The approach follows the principles of iterative grounded theory, whereby researchers iterate between alternative theoretical frameworks and emergent empirical data. Of particular importance is an orientation toward mixing methods, thereby overcoming the existing tendency to dichotomize quantitative and qualitative approaches. The approach is positioned against the existing contested literature on grounded theory, and the possibility of engaging with empirical data in a “theory free” manner is discounted. Emphasis instead is given to the way in which researchers must be theoretically sensitive as a result of being steeped in relevant literatures. Knowledge of existing literatures therefore shapes the initial research design; but emergent empirical findings cause fresh theoretical perspectives to be mobilized. The advocated approach is further aligned with notions of knowledge coproduction and the underlying principles of contextualist research. It is this unique combination of ideas which characterizes the paper's contribution to the research methodology literature within the field of construction management. Examples are provided and consideration is given to the extent to which the emergent findings are generalizable beyond the specific context from which they are derived.

Relevance: 30.00%

Abstract:

The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that for Gaussian error statistics the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
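
For Gaussian model and observation errors, the weak-constraint cost function referred to above is conventionally written as follows (notation assumed; an additional background term for the initial condition is often included):

```latex
% Weak-constraint cost with model-error (Q) and observation-error (R) covariances.
\[
J(\mathbf{x}_{0:K}) \;=\;
\sum_{k=1}^{K} \big(\mathbf{x}_k - \mathcal{M}_k(\mathbf{x}_{k-1})\big)^{T}\mathbf{Q}_k^{-1}
\big(\mathbf{x}_k - \mathcal{M}_k(\mathbf{x}_{k-1})\big)
\;+\;
\sum_{k=0}^{K} \big(\mathbf{y}_k - \mathcal{H}_k(\mathbf{x}_k)\big)^{T}\mathbf{R}_k^{-1}
\big(\mathbf{y}_k - \mathcal{H}_k(\mathbf{x}_k)\big),
\]
```

whose minimizer is the maximum-likelihood smoother estimate; the ensemble methods discussed above approximate this minimizer from sample statistics rather than by adjoint-based gradient descent on J.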

Relevance: 30.00%

Abstract:

In this article we address decomposition strategies especially tailored to perform strong coupling of dimensionally heterogeneous models, under the hypothesis that one wants to solve each submodel separately and implement the interaction between subdomains by boundary conditions alone. The novel methodology takes full advantage of the small number of interface unknowns in this kind of problem. Existing algorithms can be viewed as variants of the “natural” staggered algorithm in which each domain transfers function values to the other, and receives fluxes (or forces), and vice versa. This natural algorithm is known as Dirichlet-to-Neumann in the Domain Decomposition literature. Essentially, we propose a framework in which this algorithm is equivalent to applying Gauss-Seidel iterations to a suitably defined (linear or nonlinear) system of equations. It is then immediate to switch to other iterative solvers such as GMRES or other Krylov-based methods, which we assess through numerical experiments showing the significant gain that can be achieved. Indeed, the benefit is that an extremely flexible, automatic coupling strategy can be developed, which in addition leads to iterative procedures that are parameter-free and rapidly converging. Further, in linear problems they have the finite termination property. Copyright © 2009 John Wiley & Sons, Ltd.
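
A minimal sketch of the coupling pattern described above, under the stated assumption that each subdomain is solved as a black box and only interface data are exchanged (the subdomain solver functions here are hypothetical placeholders supplied by the user):

```python
# The staggered Dirichlet-to-Neumann exchange is a fixed-point map G on the
# (small) vector of interface unknowns; the same map can be handed to a Krylov
# solver matrix-free.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def G(lam, solve_domain_1, solve_domain_2):
    """One staggered sweep: impose lam on domain 1, pass its flux to domain 2,
    and return the interface values produced by domain 2."""
    flux = solve_domain_1(lam)        # Dirichlet data in, Neumann data out
    return solve_domain_2(flux)       # Neumann data in, Dirichlet data out

def gauss_seidel_coupling(lam0, solve_domain_1, solve_domain_2, tol=1e-8, maxit=200):
    """The 'natural' staggered iteration: repeat the sweep until it stagnates."""
    lam = lam0.copy()
    for _ in range(maxit):
        lam_new = G(lam, solve_domain_1, solve_domain_2)
        if np.linalg.norm(lam_new - lam) < tol * (1.0 + np.linalg.norm(lam)):
            return lam_new
        lam = lam_new
    return lam

def krylov_coupling(lam0, solve_domain_1, solve_domain_2):
    """For linear subproblems G is affine, G(lam) = B lam + c, so the fixed point
    solves (I - B) lam = c; GMRES only needs matrix-vector products with I - B."""
    n = lam0.size
    c = G(np.zeros(n), solve_domain_1, solve_domain_2)
    def matvec(v):
        return v - (G(v, solve_domain_1, solve_domain_2) - c)
    lam, info = gmres(LinearOperator((n, n), matvec=matvec, dtype=float), c, x0=lam0)
    return lam
```

In the linear case the GMRES variant terminates, in exact arithmetic, after at most as many iterations as there are interface unknowns, which is the finite termination property mentioned above.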

Relevance: 30.00%

Abstract:

In this paper we present a novel approach for multispectral image contextual classification by combining iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach to combine two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, to regularize the solution in the presence of noisy data. Hence, the classification problem is stated according to a Maximum a Posteriori (MAP) framework. In order to approximate the MAP solution we apply several combinatorial optimization methods using multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison to Simulated Annealing, which is often infeasible in many real image-processing applications. Markov Random Field model parameters are estimated by the Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustments in the choice of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology. © 2010 Elsevier B.V. All rights reserved.
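
One of the classic sub-optimal combinatorial algorithms of the kind combined in such schemes is ICM (iterated conditional modes). The sketch below uses a simplified pixel-wise Gaussian likelihood and a first-order Potts prior; it is intended only as an illustration, not as the paper's GMRF-based method or its multi-start procedure (class means, variances and beta are assumed given):

```python
# ICM sketch for MAP labelling: Gaussian likelihood + Potts prior on a 4-neighbourhood.
import numpy as np

def icm(image, means, variances, beta, n_iter=10):
    """image: (H, W) observations; means, variances: per-class parameters (length K)."""
    H, W = image.shape
    K = len(means)
    # Pixel-wise negative log-likelihood for each class, shape (H, W, K).
    nll = (0.5 * np.log(2.0 * np.pi * variances)
           + (image[..., None] - means) ** 2 / (2.0 * variances))
    labels = nll.argmin(axis=2)                   # maximum-likelihood initialization
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                # Potts penalty: beta times the number of disagreeing 4-neighbours.
                cost = nll[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        cost += beta * (np.arange(K) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels
```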

Relevance: 30.00%

Abstract:

This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association study (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights in modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed. It was found to be able to correctly adjust the bias in genetic variance component estimation and gain power in QTL mapping in terms of precision.  Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data of an entire genome. These whole genome models were shown to have good performance in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance instead of mean, which validated the idea of variance-controlling genes.  The works in the thesis are accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
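
As a purely generic illustration of the whole-genome shrinkage models mentioned above (this is not the hglm or bigRR implementation, and all names and interfaces here are assumptions), a ridge-regression / SNP-BLUP predictor can be sketched as:

```python
# Generic whole-genome ridge regression sketch: X is an n x p matrix of marker
# genotypes, y a vector of phenotypes, lam the shrinkage parameter.
import numpy as np

def ridge_genomic_prediction(X, y, lam):
    """Return the intercept and shrunken marker effects (ridge / SNP-BLUP)."""
    Xc = X - X.mean(axis=0)            # centre markers
    yc = y - y.mean()
    p = Xc.shape[1]
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(p), Xc.T @ yc)
    return y.mean(), beta

def predict(X_new, X_train, intercept, beta):
    """Predict phenotypes for new genotypes, using the training-set centring."""
    return intercept + (X_new - X_train.mean(axis=0)) @ beta
```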