959 results for iterative method


Relevance: 30.00%

Abstract:

PURPOSE To investigate whether the effects of hybrid iterative reconstruction (HIR) on coronary artery calcium (CAC) measurements using the Agatston score lead to changes in the assignment of patients to cardiovascular risk groups compared to filtered back projection (FBP). MATERIALS AND METHODS 68 patients (mean age 61.5 years; 48 male, 20 female) underwent prospectively ECG-gated, non-enhanced cardiac 256-MSCT for coronary calcium scoring. Scanning parameters were as follows: tube voltage, 120 kV; mean tube current-time product, 63.67 mAs (range 50 - 150 mAs); collimation, 2 × 128 × 0.625 mm. Images were reconstructed with FBP and with HIR at all levels (L1 to L7). Two independent readers measured Agatston scores of all reconstructions and assigned patients to cardiovascular risk groups. Scores of HIR and FBP reconstructions were correlated (Spearman). Interobserver agreement and variability were assessed with κ-statistics and Bland-Altman plots. RESULTS Agatston scores of HIR reconstructions correlated closely with those of FBP reconstructions (L1, R = 0.9996; L2, R = 0.9995; L3, R = 0.9991; L4, R = 0.986; L5, R = 0.9986; L6, R = 0.9987; and L7, R = 0.9986). Compared to FBP, HIR reduced Agatston scores to between 97% (L1) and 87.4% (L7) of the FBP values. Using HIR iteration levels L1 - L3, all patients were assigned to the same risk groups as after FBP reconstruction. In 5.4% of patients, the risk group after HIR at the maximum iteration level differed from the group after FBP reconstruction. CONCLUSION There was an excellent correlation of Agatston scores after HIR and FBP, with identical risk group assignment at levels 1 - 3 for all patients. Hence, the application of HIR in routine calcium scoring does not appear to entail any disadvantages. Future studies are needed to demonstrate whether HIR is a reliable method for reducing radiation dose in coronary calcium scoring.
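The comparison at the heart of this study, correlating two sets of Agatston scores and checking for risk-group reassignment, is straightforward to reproduce. Below is a minimal Python sketch; the risk-category cut-offs (0, 1-10, 11-100, 101-400, >400) are the conventional Agatston bands, which the abstract does not spell out, so they are an assumption here.

```python
# Minimal sketch: Spearman correlation between two reconstructions'
# Agatston scores plus a check for risk-group reassignment. The category
# cut-offs are assumed conventional Agatston bands, not taken from the paper.
import numpy as np
from scipy.stats import spearmanr

def risk_group(score: float) -> int:
    """Map an Agatston score to a risk category (assumed bands: 0, 1-10, 11-100, 101-400, >400)."""
    return int(np.searchsorted([0, 10, 100, 400], score))

def compare_reconstructions(scores_fbp, scores_hir):
    rho, _ = spearmanr(scores_fbp, scores_hir)
    reassigned = np.mean([risk_group(a) != risk_group(b)
                          for a, b in zip(scores_fbp, scores_hir)])
    return rho, reassigned  # correlation and fraction of patients reassigned
```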

Relevance: 30.00%

Abstract:

Researchers in ecology commonly use multivariate analyses (e.g. redundancy analysis, canonical correspondence analysis, Mantel correlation, multivariate analysis of variance) to interpret patterns in biological data and relate these patterns to environmental predictors. There has been, however, little recognition of the errors associated with biological data and the influence that these may have on predictions derived from ecological hypotheses. We present a permutational method that assesses the effects of taxonomic uncertainty on the multivariate analyses typically used in the analysis of ecological data. The procedure is based on iterative randomizations that randomly re-assign unidentified species in each site to any of the other species found in the remaining sites. After each re-assignment of species identities, the multivariate method in question is run and a parameter of interest is calculated. Consequently, one can estimate a range of plausible values for the parameter of interest under different scenarios of re-assigned species identities. We demonstrate the use of our approach in the calculation of two parameters, with an example involving tropical tree species from western Amazonia: 1) the Mantel correlation between compositional similarity and environmental distances between pairs of sites, and 2) the variance explained by environmental predictors in redundancy analysis (RDA). We also investigated the effects of increasing taxonomic uncertainty (i.e. the number of unidentified species) and of the taxonomic resolution at which morphospecies are determined (genus resolution, family resolution, or fully undetermined species) on the uncertainty range of these parameters. To achieve this, we performed simulations on a tree dataset from southern Mexico by randomly selecting a portion of the species contained in the dataset and classifying them as unidentified at each level of decreasing taxonomic resolution. An analysis of covariance showed that both taxonomic uncertainty and resolution significantly influence the uncertainty range of the resulting parameters. Increasing taxonomic uncertainty widens the uncertainty range of the parameters estimated in both the Mantel test and the RDA. The effects of increasing taxonomic resolution, however, are not as evident. The method presented in this study improves on traditional approaches to studying compositional change in ecological communities by accounting for some of the uncertainty inherent to biological data. We hope that this approach can be routinely used to estimate any parameter of interest obtained from compositional data tables when faced with taxonomic uncertainty.
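The randomization loop described above is easy to sketch. The following Python fragment is a simplified illustration rather than the authors' implementation: it reassigns unidentified individuals at each site to species observed at the other sites and recomputes a Mantel statistic each time; the choice of Bray-Curtis distances and a Pearson-based Mantel statistic is an assumption of the sketch.

```python
# Simplified sketch of the permutational procedure: re-assign unidentified
# individuals, recompute the Mantel correlation, and collect its distribution.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

def mantel_uncertainty(comm, unidentified, env, n_iter=999):
    """comm: sites x species abundances; unidentified: per-site counts of
    unidentified individuals; env: sites x environmental variables."""
    env_dist = pdist(env)
    rs = []
    for _ in range(n_iter):
        perm = comm.astype(float).copy()
        for site, n_unid in enumerate(unidentified):
            # pool of species observed at the remaining sites
            pool = np.flatnonzero(np.delete(perm, site, axis=0).sum(axis=0) > 0)
            for sp in rng.choice(pool, size=n_unid):
                perm[site, sp] += 1
        comp_dist = pdist(perm, metric="braycurtis")
        rs.append(pearsonr(comp_dist, env_dist)[0])
    return np.asarray(rs)  # plausible range of the Mantel correlation
```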

Relevance: 30.00%

Abstract:

Finite element hp-adaptivity is a technology that allows for very accurate numerical solutions. When applied to open-region problems such as radar cross section prediction or antenna analysis, a mesh truncation method needs to be used. This paper compares the following mesh truncation methods in the context of hp-adaptive methods: Infinite Elements, Perfectly Matched Layers and an iterative boundary-element-based methodology. These methods have been selected because they are exact at the continuous level (a desirable feature given the extreme accuracy delivered by the hp-adaptive strategy) and because they are easy to integrate with the logic of hp-adaptivity. The comparison is mainly based on the number of degrees of freedom needed by each method to achieve a given level of accuracy. Computational times are also included. Two-dimensional examples are used, but the conclusions can be directly extrapolated to the three-dimensional case.

Relevance: 30.00%

Abstract:

The Monte Carlo (MC) method can accurately compute the dose produced by medical linear accelerators. However, these calculations require a reliable description of the electron and/or photon beams delivering the dose, the phase space (PHSP), which is not usually available. A method is presented to derive a phase space model from reference measurements that does not heavily rely on a detailed model of the accelerator head. The iterative optimization process extracts the characteristics of the particle beams that best explain the reference dose measurements in water and air, given a set of constraints.
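In outline, such an optimization amounts to fitting a small set of beam parameters so that a simulated dose matches the measurements, subject to physical bounds. The sketch below assumes a hypothetical `simulate_dose` stand-in for the dose engine and a toy exponential depth-dose model; neither is taken from the paper.

```python
# Minimal sketch: fit phase-space model parameters to reference dose
# measurements by bounded least squares. The beam model is a placeholder.
import numpy as np
from scipy.optimize import minimize

def simulate_dose(params, depths):
    # Hypothetical toy beam model: exponential depth dose.
    amplitude, mu = params
    return amplitude * np.exp(-mu * depths)

def fit_phase_space(reference_dose, depths):
    def misfit(params):
        return np.sum((simulate_dose(params, depths) - reference_dose) ** 2)
    result = minimize(misfit, x0=[1.0, 0.05],
                      bounds=[(0.0, None), (1e-4, 1.0)])  # physical constraints
    return result.x  # beam characteristics that best explain the measurements
```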

Relevance: 30.00%

Abstract:

The boundary element method (BEM) has been applied successfully to many engineering problems during the last decades. Compared with domain-type methods like the finite element method (FEM) or the finite difference method (FDM), the BEM can handle problems where the medium extends to infinity much more easily, as there is no need to develop special boundary conditions (quiet or absorbing boundaries) or infinite elements at the boundaries introduced to limit the domain studied. The determination of the dynamic stiffness of arbitrarily shaped footings is just one of the fields where the BEM has been the method of choice, especially in the 1980s. With the continuous development of computer technology and the available hardware, the size of the problems under study grew and, as the flop count for solving the resulting linear system of equations grows with the third power of the number of equations, there was a need for iterative methods with better performance. The GMRES algorithm, presented in [1], is now widely used for implementations of the collocation BEM. While the FEM results in sparsely populated coefficient matrices, the BEM leads, in general, to fully or densely populated ones, depending on the number of subregions, posing a serious memory problem even for today's computers. If the geometry of the problem permits the surface of the domain to be meshed with equally shaped elements, many of the resulting coefficients will be calculated and stored repeatedly. The present paper shows how these unnecessary operations can be avoided, reducing the calculation time as well as the storage requirements. To this end, a similar coefficient identification algorithm (SCIA) has been developed and implemented in a program written in Fortran 90. The vertical dynamic stiffness of a single pile in layered soil has been chosen to test the performance of the implementation. The results obtained with the 3D model are compared with those obtained with an axisymmetric formulation, which are considered the reference values because their mesh quality is much better. The entire 3D model comprises more than 35,000 dofs, the biggest single region being a soil region with 21,168 dofs. Note that the memory necessary to store all coefficients of this single region is about 6.8 GB, an amount which is usually not available on personal computers. In the problem under study, the interface zone between the two adjacent soil regions as well as the surface of the top layer may be meshed with equally sized elements. In this case, the application of the SCIA leads to an important reduction in memory requirements: the maximum memory used during the calculation has been reduced to 1.2 GB. The application of the SCIA thus permits problems to be solved on personal computers which would otherwise require much more powerful hardware.
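The abstract does not detail the SCIA itself, but the underlying idea, computing each distinct coefficient once and reusing it for element pairs with identical relative geometry, can be illustrated with a simple cache. The sketch below is in Python rather than the paper's Fortran 90, and the geometric signature and the `integrate_pair`/`centroid` names are illustrative assumptions.

```python
# Illustrative cache: BEM coefficients are reused for element pairs whose
# relative geometry is identical, so each distinct integral is computed once.
import numpy as np

coefficient_cache = {}

def pair_signature(elem_a, elem_b, decimals=9):
    # For equally shaped elements, the relative offset of the centroids
    # identifies the pair configuration (rounded for a stable hash key).
    return tuple(np.round(elem_b.centroid - elem_a.centroid, decimals))

def coefficient(elem_a, elem_b, integrate_pair):
    key = pair_signature(elem_a, elem_b)
    if key not in coefficient_cache:
        coefficient_cache[key] = integrate_pair(elem_a, elem_b)  # expensive step
    return coefficient_cache[key]
```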

Relevance: 30.00%

Abstract:

Electric probes are objects immersed in the plasma, with sharp boundaries, which collect or emit charged particles. Consequently, the nearby plasma evolves under abruptly imposed and/or naturally emerging conditions. There can be localized currents, different time scales for the evolution of the plasma species, charge separation, and absorbing-emitting walls. Traditional numerical schemes based on finite differences often transform these disparate boundary conditions into computational singularities. This is the case for models using advection-diffusion differential equations with source-sink terms (also called Fokker-Planck equations). These equations are used in both fluid and kinetic descriptions to obtain the distribution functions or the density of each plasma species close to the boundaries. We present a resolution method grounded on an integral advancing scheme that uses approximate Green's functions, also called short-time propagators. All the integrals, as a path-integration process, are computed numerically, which yields a robust grid-free integral method that is unconditionally stable for any time step. Hence, sharp boundary conditions, such as current emission from a wall, can be treated in the short-time regime, providing solutions that behave as if they were known analytically at each time step. The form of the propagator (typically a multivariate Gaussian) is not unique, and it can be adjusted during the advancing scheme to preserve the conserved quantities of the problem. The effects of electric or magnetic fields can be incorporated into the iterative algorithm. The method allows smooth transitions of the evolving solutions even when abrupt discontinuities are present. In this work, a procedure is proposed to incorporate, for the first time, the boundary conditions into the numerical integral scheme. The scheme is applied to model the interaction of the plasma bulk with a charge-emitting electrode, dealing with fluid diffusion equations combined self-consistently with the Poisson equation. The stability of the computational method has been checked over any number of iterations, even when advancing electrons and ions with different time scales. This work establishes the basis for future work on problems related to plasma thrusters or emissive probes in electromagnetic fields.
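The core advancing step is a convolution of the current solution with a short-time Gaussian propagator. Below is a minimal sketch for a 1D advection-diffusion equation, using a fixed set of sample points and a simple quadrature purely for illustration; the paper's scheme is grid-free and additionally handles boundary and source terms.

```python
# One integral advancing step: u(x, t+dt) = integral of G(x, x'; dt) u(x', t) dx',
# with G the free-space Gaussian short-time propagator of u_t = D u_xx - v u_x.
# Boundary handling is omitted in this sketch.
import numpy as np

def propagate(u, x, dt, D=1.0, v=0.0):
    sigma2 = 2.0 * D * dt
    G = np.exp(-((x[:, None] - x[None, :] - v * dt) ** 2) / (2.0 * sigma2))
    G /= np.sqrt(2.0 * np.pi * sigma2)
    return (G * u[None, :]).sum(axis=1) * (x[1] - x[0])  # quadrature over x'

x = np.linspace(-10.0, 10.0, 401)
u = np.exp(-x**2)              # initial density
u = propagate(u, x, dt=0.01)   # one unconditionally stable step
```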

Relevance: 30.00%

Abstract:

This paper presents a series of calculation procedures for the computer-aided design of ternary distillation columns that avoid the iterative equilibrium calculations usually required in this kind of problem, thus reducing the calculation time. The proposed procedures include interpolation and intersection methods to solve the equilibrium equations and the mass and energy balances. The proposed calculation programs also include the option of rigorously solving the mass and energy balances and equilibrium relations.
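The interpolation idea can be illustrated with a toy fragment: pre-tabulate the vapour-liquid equilibrium once and interpolate inside the stage-to-stage calculations instead of re-solving the equilibrium equations iteratively. The constant-relative-volatility data below are placeholders, not the paper's actual system.

```python
# Sketch: replace an inner iterative equilibrium solve with interpolation
# over a pre-computed table (hypothetical binary pair, relative volatility 2.5).
import numpy as np

x_table = np.linspace(0.0, 1.0, 51)
y_table = 2.5 * x_table / (1.0 + 1.5 * x_table)  # constant relative volatility

def equilibrium_vapour_fraction(x_liquid):
    """Interpolated equilibrium composition, replacing an iterative solve."""
    return np.interp(x_liquid, x_table, y_table)
```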

Relevance: 30.00%

Abstract:

Nowadays, there is an increasing number of robotic applications that need to act in real three-dimensional (3D) scenarios. In this paper we present a new 3D registration method, oriented to mobile robotics, that improves on previous Iterative Closest Point-based solutions in both speed and accuracy. As an initial step, we apply a computationally cheap method to obtain descriptions of the planar surfaces in the 3D scene. Then, from these descriptions, we apply a force system in order to compute a six-degrees-of-freedom egomotion accurately and efficiently. We describe the basis of our approach and demonstrate its validity with several experiments using different kinds of 3D sensors and different real 3D environments.

Relevance: 30.00%

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine, for the volumetric reconstruction of tomography data; in robotics, to reconstruct surfaces or scenes from range sensor information; in industrial systems, for the quality control of manufactured objects; and even in biology, to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is to improve performance, either by reducing the number of points or the required iterations, or by improving the complexity of the most expensive phase: the closest-neighbor search. Despite decreasing its complexity, some of these variants have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to reduce the algorithm's computational cost so that a wider range of computationally demanding problems, among the ones described above, can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, taking into account those distances with a lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm in the literature. In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method, in order to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and validated experimentally as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
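A minimal ICP loop makes the role of the distance metric concrete: the nearest-neighbour search below takes a Minkowski exponent p, so p=1 (Manhattan) is a cheaper alternative to the Euclidean baseline p=2. This is an illustration of the idea under study, not the thesis's implementation.

```python
# Minimal point-to-point ICP with a selectable Minkowski metric for the
# nearest-neighbour search (p=1 Manhattan, p=2 Euclidean).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, p=1, iters=50, tol=1e-8):
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src, p=p)   # metric chosen via Minkowski p
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src
```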

Relevance: 30.00%

Abstract:

We present an efficient and robust method for the calculation of all S matrix elements (elastic, inelastic, and reactive) over an arbitrary energy range from a single real-symmetric Lanczos recursion. Our new method transforms the fundamental equations associated with Light's artificial boundary inhomogeneity approach [J. Chem. Phys. 102, 3262 (1995)] from the primary representation (original grid or basis representation of the Hamiltonian or its function) into a single tridiagonal Lanczos representation, thereby affording an iterative version of the original algorithm with greatly superior scaling properties. The method has important advantages over existing iterative quantum dynamical scattering methods: (a) the numerically intensive matrix propagation proceeds with real symmetric algebra, which is inherently more stable than its complex symmetric counterpart; (b) no complex absorbing potential or real damping operator is required, saving much of the exterior grid space which is commonly needed to support these operators and also removing the associated parameter dependence. Test calculations are presented for the collinear H + H2 reaction, revealing excellent performance characteristics. (C) 2004 American Institute of Physics.
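For reference, the real-symmetric Lanczos recursion on which the method is built takes a standard three-term form; the textbook sketch below (without reorthogonalisation) produces the tridiagonal representation into which the working equations are transformed.

```python
# Textbook real-symmetric Lanczos recursion: build a tridiagonal
# representation (alphas diagonal, betas off-diagonal) of a symmetric H.
import numpy as np

def lanczos(H, v0, m):
    n = len(v0)
    V = np.zeros((m, n))
    alphas, betas = np.zeros(m), np.zeros(m - 1)
    V[0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = H @ V[j]
        alphas[j] = V[j] @ w
        w -= alphas[j] * V[j]
        if j > 0:
            w -= betas[j - 1] * V[j - 1]
        if j < m - 1:
            betas[j] = np.linalg.norm(w)
            V[j + 1] = w / betas[j]
    return alphas, betas  # tridiagonal Lanczos representation of H
```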

Relevance: 30.00%

Abstract:

Bound and resonance states of HO2 have been calculated by both the complex Lanczos homogeneous filter diagonalisation (LHFD) method [1,2] and the real Chebyshev filter diagonalisation method [3,4] for non-zero total angular momentum J = 4 and 5. For bound states, the agreement between the two methods is quite satisfactory; for resonances, the energies are in good agreement while the widths are only in general agreement. The relative performance of the two iterative FD methods is also discussed in terms of efficiency as well as convergence behaviour for such a computationally challenging problem. A helicity quantum number Ω assignment (within the helicity-conserving approximation) is performed, and the results indicate that Coriolis coupling becomes more important as J increases and that the helicity-conserving approximation is not a good one for the HO2 resonance states.
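The real Chebyshev iteration that drives the second method is a simple three-term recurrence on a Hamiltonian scaled so that its spectrum lies in [-1, 1]; a minimal sketch follows, with the spectral-extraction step of filter diagonalisation omitted.

```python
# Chebyshev recurrence underlying real filter diagonalisation:
# T_{k+1}(H) v0 = 2 H T_k(H) v0 - T_{k-1}(H) v0, for scaled symmetric H.
import numpy as np

def chebyshev_vectors(H_scaled, v0, n_terms):
    """Generate the Chebyshev vectors T_k(H) v0 for a scaled symmetric H."""
    v_prev = v0.copy()
    v_curr = H_scaled @ v0
    yield v_prev
    yield v_curr
    for _ in range(n_terms - 2):
        v_next = 2.0 * (H_scaled @ v_curr) - v_prev
        v_prev, v_curr = v_curr, v_next
        yield v_curr
```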

Relevance: 30.00%

Abstract:

The buffer allocation problem (BAP) is a well-known difficult problem in the design of production lines. We present a stochastic algorithm for solving the BAP, based on the cross-entropy method, a new paradigm for stochastic optimization. The algorithm involves the following iterative steps: (a) the generation of buffer allocations according to a certain random mechanism, followed by (b) the modification of this mechanism on the basis of cross-entropy minimization. Through various numerical experiments we demonstrate the efficiency of the proposed algorithm and show that the method can quickly generate (near-)optimal buffer allocations for fairly large production lines.
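Concretely, steps (a) and (b) form a loop: sample allocations from a parametrised random mechanism, score them, and refit the mechanism to the elite samples. In the sketch below, `throughput` is a hypothetical stand-in for the production-line simulator, and the sampling model (independent Gaussians, rounded and rescaled to the buffer budget) is an assumption rather than the paper's mechanism.

```python
# Cross-entropy loop for the BAP: sample, score, refit to the elite set.
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy_bap(throughput, n_buffers, total, n_samples=200,
                      elite_frac=0.1, iters=50):
    mu = np.full(n_buffers, total / n_buffers)
    sigma = np.full(n_buffers, total / 2.0)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        x = np.clip(rng.normal(mu, sigma, size=(n_samples, n_buffers)), 0, None)
        # rescale so each sampled allocation (approximately) uses the budget
        x = np.round(x * total / np.maximum(x.sum(axis=1, keepdims=True), 1e-9))
        scores = np.array([throughput(a) for a in x])
        elite = x[np.argsort(scores)[-n_elite:]]          # best allocations
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return np.round(mu)  # (near-)optimal buffer allocation estimate
```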

Relevance: 30.00%

Abstract:

In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation uses a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input and thus approximates a batch solution. The resulting sparse learning algorithm is generic: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance both on classical regression and classification tasks and on data assimilation and a simple density estimation problem.
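To make the idea of representing the GP through a small BV set concrete, here is a heavily simplified sketch: the BV set is just a random subset and the predictor is the subset-of-regressors approximation, a crude stand-in for the KL-optimal online scheme developed in the thesis.

```python
# Crude illustration of a sparse GP: random BV subset, subset-of-regressors
# predictive mean. Not the thesis's KL-based online algorithm.
import numpy as np

def rbf(a, b, ell=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sparse_gp_predict(X, y, X_test, n_bv=20, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    bv = rng.choice(len(X), size=min(n_bv, len(X)), replace=False)
    Kbb = rbf(X[bv], X[bv])                 # BV-BV kernel
    Kbn = rbf(X[bv], X)                     # BV-training kernel
    A = Kbn @ Kbn.T + noise**2 * Kbb        # subset-of-regressors system
    alpha = np.linalg.solve(A, Kbn @ y)
    return rbf(X_test, X[bv]) @ alpha       # predictive mean
```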

Relevance: 30.00%

Abstract:

Iterative multiuser joint decoding based on exact Belief Propagation (BP) is analyzed in the large system limit by means of the replica method. It is shown that performance can be improved by appropriate power assignment to the users. The optimum power assignment can be found by linear programming in most technically relevant cases. The performance of BP iterative multiuser joint decoding is compared to suboptimum approximations based on Interference Cancellation (IC). While IC receivers show a significant loss for equal-power users, they yield performance close to BP under optimum power assignment.

Relevance: 30.00%

Abstract:

In today's market, global competition has put manufacturing businesses under great pressure to respond rapidly to dynamic variations in demand patterns across products and changing product mixes. To achieve substantial responsiveness, the manufacturing activities associated with production planning and control must be integrated dynamically, efficiently and cost-effectively. This paper presents an iterative agent bidding mechanism which performs dynamic integration of process planning and production scheduling to generate optimised process plans and schedules in response to dynamic changes in the market and production environment. The iterative bidding procedure is based on currency-like metrics: all operations (e.g. machining processes) to be performed are assigned virtual currency values, and resource agents bid for the operations if the costs incurred in performing them are lower than the currency values. The currency values are adjusted iteratively, and resource agents re-bid for the operations based on the new set of currency values, until the total production cost is minimised. A simulated annealing optimisation technique is employed to adjust the currency values iteratively. The feasibility of the proposed methodology has been validated on a test case, and the results show that the method outperforms non-agent-based methods.
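A schematic sketch of this loop follows: a simulated annealing schedule perturbs the currency values, agents accept operations whose cost falls below the offered value, and the cheapest feasible set of values is retained. The `agent_cost` function is a hypothetical stand-in for each resource agent's internal cost model, and the penalty for unassigned operations is an assumption of the sketch.

```python
# Schematic iterative bidding with simulated annealing over currency values.
import math
import random

def iterative_bidding(operations, agents, agent_cost,
                      iters=1000, temp=1.0, cooling=0.995, seed=0):
    rng = random.Random(seed)
    values = {op: 1.0 for op in operations}    # initial currency values

    def total_cost(vals):
        cost = 0.0
        for op in operations:
            # an agent bids only if its cost is below the offered value
            bids = [agent_cost(a, op) for a in agents
                    if agent_cost(a, op) <= vals[op]]
            cost += min(bids) if bids else vals[op] * 10  # unassigned penalty
        return cost

    best, best_cost = dict(values), total_cost(values)
    current_cost = best_cost
    for _ in range(iters):
        cand = {op: max(0.0, v + rng.gauss(0, temp))
                for op, v in values.items()}
        cand_cost = total_cost(cand)
        # simulated annealing acceptance rule
        accept = (cand_cost < current_cost or
                  rng.random() < math.exp((current_cost - cand_cost)
                                          / max(temp, 1e-9)))
        if accept:
            values, current_cost = cand, cand_cost
            if cand_cost < best_cost:
                best, best_cost = dict(cand), cand_cost
        temp *= cooling
    return best  # currency values minimising the total production cost
```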