992 results for iterative method
Abstract:
This paper presents a series of calculation procedures for the computer design of ternary distillation columns that overcome the iterative equilibrium calculations normally required in this kind of problem, thus reducing the calculation time. The proposed procedures include interpolation and intersection methods to solve the equilibrium equations and the mass and energy balances. The proposed calculation programs also allow for the rigorous solution of the mass and energy balances and the equilibrium relations.
Abstract:
Nowadays, an increasing number of robotic applications need to act in real three-dimensional (3D) scenarios. In this paper we present a new 3D registration method oriented to mobile robotics that improves on previous Iterative Closest Point based solutions in both speed and accuracy. As an initial step, we apply a low-cost computational method to obtain descriptions of the planar surfaces in a 3D scene. Then, from these descriptions, we apply a force system in order to compute a six-degrees-of-freedom egomotion accurately and efficiently. We describe the basis of our approach and demonstrate its validity with several experiments using different kinds of 3D sensors and different real 3D environments.
Abstract:
The Iterative Closest Point algorithm (ICP) is commonly used in engineering applications to solve the rigid registration problem of partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range sensor information, industrial systems for the quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by lowering the complexity of the most expensive phase: the closest-neighbor search. In spite of decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, taking into account distances with a lower computational cost than the Euclidean one, which is the de facto standard for the algorithm's implementations in the literature. In that analysis, the behavior of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation can be expected to improve the overall performance of the method significantly. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to that of the Euclidean distance, using a heterogeneous set of objects, scenes and initial configurations.
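For context, the baseline the abstract above analyzes is the classical point-to-point ICP loop: pair each source point with its closest destination point, then apply the closed-form (SVD) rigid alignment. The following is a minimal, generic sketch of that loop, not the thesis's reduced-cost variant; the brute-force nearest-neighbor search and function names are illustrative assumptions.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch/SVD) rotation R and translation t minimizing ||R @ src + t - dst||."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp_point_to_point(src, dst, iters=50, tol=1e-6):
    """Generic ICP with brute-force nearest neighbours (the quadratic step the thesis targets)."""
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        # pairing step: Euclidean distances from every source point to every destination point
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        err = np.mean(np.linalg.norm(cur - matches, axis=1))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return cur
```

Replacing the Euclidean norm in the pairing step by a cheaper metric (for example an L1 or Chebyshev distance) is the kind of substitution whose convergence and accuracy the thesis studies.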
Abstract:
We present an efficient and robust method for the calculation of all S matrix elements (elastic, inelastic, and reactive) over an arbitrary energy range from a single real-symmetric Lanczos recursion. Our new method transforms the fundamental equations associated with Light's artificial boundary inhomogeneity approach [J. Chem. Phys. 102, 3262 (1995)] from the primary representation (the original grid or basis representation of the Hamiltonian or its functions) into a single tridiagonal Lanczos representation, thereby affording an iterative version of the original algorithm with greatly superior scaling properties. The method has important advantages over existing iterative quantum dynamical scattering methods: (a) the numerically intensive matrix propagation proceeds with real symmetric algebra, which is inherently more stable than its complex symmetric counterpart; (b) no complex absorbing potential or real damping operator is required, saving much of the exterior grid space that is commonly needed to support these operators and also removing the associated parameter dependence. Test calculations are presented for the collinear H+H2 reaction, revealing excellent performance characteristics. (C) 2004 American Institute of Physics.
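The central ingredient named above is the real-symmetric Lanczos recursion that turns the Hamiltonian into a tridiagonal representation. Below is a minimal sketch of that recursion for a generic real symmetric matrix H; full reorthogonalization and the scattering-specific boundary-inhomogeneity transformation are omitted, and the variable names are illustrative.

```python
import numpy as np

def lanczos_tridiagonal(H, v0, m):
    """Run m steps of the real-symmetric Lanczos recursion.

    Returns the diagonal (alpha) and off-diagonal (beta) elements of the
    tridiagonal representation of H in the generated Krylov basis.
    """
    n = H.shape[0]
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q_prev = np.zeros(n)
    q = v0 / np.linalg.norm(v0)          # normalized starting vector
    for j in range(m):
        w = H @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0.0:           # invariant subspace reached early
                return alpha[:j + 1], beta[:j]
            q_prev, q = q, w / beta[j]
    return alpha, beta
```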
Abstract:
Bound and resonance states of HO2 have been calculated by both the complex Lanczos homogeneous filter diagonalisation (LHFD) method [1,2] and the real Chebyshev filter diagonalisation method [3,4] for non-zero total angular momentum J = 4 and 5. For bound states, the agreement between the two methods is quite satisfactory; for resonances, while the energies are in good agreement, the widths agree only in general terms. The relative performance of the two iterative FD methods is also discussed in terms of efficiency and convergence behaviour for this computationally challenging problem. A helicity quantum number Ω assignment (within the helicity-conserving approximation) is performed, and the results indicate that Coriolis coupling becomes more important as J increases and that the helicity-conserving approximation is not a good one for the HO2 resonance states.
Abstract:
The buffer allocation problem (BAP) is a well-known difficult problem in the design of production lines. We present a stochastic algorithm for solving the BAP, based on the cross-entropy method, a new paradigm for stochastic optimization. The algorithm involves the following iterative steps: (a) the generation of buffer allocations according to a certain random mechanism, followed by (b) the modification of this mechanism on the basis of cross-entropy minimization. Through various numerical experiments we demonstrate the efficiency of the proposed algorithm and show that the method can quickly generate (near-)optimal buffer allocations for fairly large production lines.
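The two iterative steps (a) and (b) named above can be made concrete with a generic cross-entropy sketch for distributing a fixed number of buffer slots over the stations of a line. The objective function, smoothing constant and other parameters below are illustrative placeholders, not the paper's production-line model.

```python
import numpy as np

def cross_entropy_buffer_allocation(throughput, n_slots, n_stations,
                                    samples=200, elite_frac=0.1, iters=50):
    """Generic CE loop: sample allocations, keep the elite, refit the sampler."""
    rng = np.random.default_rng(0)
    p = np.full(n_stations, 1.0 / n_stations)      # random mechanism: distribution over stations
    best, best_val = None, -np.inf
    for _ in range(iters):
        # (a) generate buffer allocations according to the current random mechanism
        allocs = rng.multinomial(n_slots, p, size=samples)
        scores = np.array([throughput(a) for a in allocs])
        elite = allocs[np.argsort(scores)[-int(elite_frac * samples):]]
        if scores.max() > best_val:
            best_val, best = scores.max(), allocs[scores.argmax()]
        # (b) modify the mechanism via the cross-entropy (maximum-likelihood) refit, with smoothing
        p = 0.7 * p + 0.3 * (elite.mean(axis=0) / n_slots)
    return best, best_val

# illustrative objective favouring balanced allocations (a stand-in for a line simulator)
demo = lambda a: -np.var(a)
print(cross_entropy_buffer_allocation(demo, n_slots=20, n_stations=5))
```

In the paper each score evaluation would come from the production-line performance model rather than the toy objective used here.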
Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is made possible by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation uses a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution. The resulting sparse learning algorithm is a generic one: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance on classical regression and classification tasks as well as on data assimilation and a simple density estimation problem.
Abstract:
Iterative multiuser joint decoding based on exact Belief Propagation (BP) is analyzed in the large system limit by means of the replica method. It is shown that performance can be improved by appropriate power assignment to the users. The optimum power assignment can be found by linear programming in most technically relevant cases. The performance of BP iterative multiuser joint decoding is compared to suboptimum approximations based on Interference Cancellation (IC). While IC receivers show a significant loss for equal-power users, they yield performance close to BP under optimum power assignment.
Abstract:
In today's market, global competition has put manufacturing businesses under great pressure to respond rapidly to dynamic variations in demand patterns across products and changing product mixes. To achieve substantial responsiveness, the manufacturing activities associated with production planning and control must be integrated dynamically, efficiently and cost-effectively. This paper presents an iterative agent bidding mechanism that performs dynamic integration of process planning and production scheduling to generate optimised process plans and schedules in response to dynamic changes in the market and production environment. The iterative bidding procedure is carried out using currency-like metrics in which all operations (e.g. machining processes) to be performed are assigned virtual currency values, and resource agents bid for the operations if the costs incurred in performing them are lower than the currency values. The currency values are adjusted iteratively and resource agents re-bid for the operations based on the new set of currency values until the total production cost is minimised. A simulated annealing optimisation technique is employed to optimise the currency values iteratively. The feasibility of the proposed methodology has been validated on a test case, and the results obtained show that the method outperforms non-agent-based methods.
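The iterative adjustment of the currency values can be pictured with a generic simulated-annealing loop over a vector of per-operation values. The perturbation scheme, cooling schedule and cost function below are illustrative assumptions standing in for a full round of agent bidding at the candidate currency values, not the paper's bidding protocol.

```python
import numpy as np

def anneal_currency_values(total_cost, n_ops, iters=2000, t0=1.0, cooling=0.995):
    """Generic simulated annealing over a vector of per-operation currency values."""
    rng = np.random.default_rng(1)
    values = np.ones(n_ops)                     # initial currency value per operation
    best, best_cost = values.copy(), total_cost(values)
    cost, temp = best_cost, t0
    for _ in range(iters):
        cand = values + rng.normal(scale=0.05, size=n_ops)   # perturb the currency values
        cand_cost = total_cost(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if cand_cost < cost or rng.random() < np.exp((cost - cand_cost) / temp):
            values, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = values.copy(), cost
        temp *= cooling                          # cool the temperature
    return best, best_cost

# illustrative placeholder cost: a quadratic standing in for the total production cost
print(anneal_currency_values(lambda v: np.sum((v - 0.5) ** 2), n_ops=8))
```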
Abstract:
Nearest feature line-based subspace analysis is first proposed in this paper. Compared with conventional methods, the newly proposed one offers better generalization performance and supports incremental analysis. The projection point and the feature line distance are expressed as functions of a subspace, which is obtained by minimizing the mean square feature line distance. Moreover, by adopting a stochastic approximation rule to minimize the objective function in a gradient manner, the new method can be run in an incremental mode, which allows it to work well on future data. Experimental results on the FERET face database and the UCI satellite image database demonstrate its effectiveness.
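The feature line distance mentioned above has a simple closed form: project the query onto the line passing through two same-class prototypes and take the norm of the residual. The sketch below computes that distance and projection point directly; the projection onto a learned subspace, which is the paper's contribution, is omitted.

```python
import numpy as np

def feature_line_distance(q, x1, x2):
    """Distance from query q to the feature line through prototypes x1 and x2."""
    direction = x2 - x1
    t = np.dot(q - x1, direction) / np.dot(direction, direction)   # position parameter along the line
    projection = x1 + t * direction                                # projection point on the feature line
    return np.linalg.norm(q - projection), projection

q, a, b = np.array([1.0, 2.0]), np.array([0.0, 0.0]), np.array([2.0, 0.0])
dist, p = feature_line_distance(q, a, b)
print(dist, p)   # distance 2.0, projection point (1.0, 0.0)
```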
Abstract:
We investigate two numerical procedures for the Cauchy problem in linear elasticity, involving the relaxation of either the given boundary displacements (Dirichlet data) or the prescribed boundary tractions (Neumann data) on the over-specified boundary, in the alternating iterative algorithm of Kozlov et al. (1991). The two mixed direct (well-posed) problems associated with each iteration are solved using the method of fundamental solutions (MFS), in conjunction with the Tikhonov regularization method, while the optimal value of the regularization parameter is chosen via the generalized cross-validation (GCV) criterion. An efficient regularizing stopping criterion, which ceases the iterative procedure at the point where the accumulation of noise becomes dominant and the errors in the predicted solutions start to increase, is also presented. The MFS-based iterative algorithms with relaxation are tested on Cauchy problems for isotropic linear elastic materials in various geometries to confirm the numerical convergence, stability, accuracy and computational efficiency of the proposed method.
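The Tikhonov/GCV combination used for each MFS subproblem can be illustrated on a generic discrete linear system A x = b: choose the regularization parameter that minimizes the GCV function, computed here via the SVD of A. This is a generic sketch of the criterion, not the paper's MFS implementation; the test matrix and noise level are illustrative.

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    """Tikhonov solution with the regularization parameter chosen by the GCV criterion."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    perp2 = b @ b - beta @ beta                  # part of b outside the range of A

    def gcv(lam):
        f = s**2 / (s**2 + lam**2)               # Tikhonov filter factors
        residual2 = np.sum(((1.0 - f) * beta) ** 2) + perp2
        return residual2 / (m - np.sum(f)) ** 2  # GCV function G(lambda)

    lam = min(lambdas, key=gcv)                   # parameter minimizing GCV over the candidate grid
    f = s**2 / (s**2 + lam**2)
    x = Vt.T @ (f * beta / s)                     # regularized solution
    return x, lam

# illustrative use on a mildly ill-conditioned system with noisy data
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-3 * rng.standard_normal(40)
x_reg, lam = tikhonov_gcv(A, b, np.logspace(-8, 0, 60))
print(lam, np.linalg.norm(x_reg - x_true))
```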
Abstract:
We propose two algorithms involving the relaxation of either the given Dirichlet data or the prescribed Neumann data on the over-specified boundary, in the case of the alternating iterative algorithm of Kozlov et al. (1991) applied to Cauchy problems for the modified Helmholtz equation. A convergence proof of these relaxation methods is given, along with a stopping criterion. The numerical results obtained using these procedures, in conjunction with the boundary element method (BEM), show the numerical stability, convergence, consistency and computational efficiency of the proposed methods.
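As a hedged illustration only (the paper's exact update may differ), the relaxation of the Dirichlet data in such alternating schemes is typically an under-relaxed convex combination of the previous guess and the trace returned by the latest direct solve:

\[
\xi^{(k+1)} = (1-\omega)\,\xi^{(k)} + \omega\, u^{(k)}\big|_{\Gamma_u}, \qquad 0 < \omega < 2,
\]

where \(\xi^{(k)}\) is the current Dirichlet guess on the under-specified boundary \(\Gamma_u\), \(u^{(k)}\) is the solution of the k-th well-posed problem, and \(\omega\) is the relaxation parameter; the Neumann-data variant relaxes the normal derivative analogously.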
Abstract:
In this paper, three iterative procedures (the Landweber-Fridman, conjugate gradient and minimal error methods) for obtaining a stable solution to the Cauchy problem in slow viscous flows are presented and compared. A section is devoted to the numerical investigation of these algorithms, in which we use the boundary element method together with efficient stopping criteria for terminating the iterative process in order to obtain stable solutions.
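Of the three procedures named above, the Landweber-Fridman iteration is the simplest to sketch on a generic discrete ill-posed system A x = b with noisy data. The stopping rule below is the usual discrepancy principle, used here as a stand-in for the paper's BEM-specific criteria; the step size choice and parameter names are illustrative.

```python
import numpy as np

def landweber(A, b, delta, tau=1.1, omega=None, max_iter=10000):
    """Landweber-Fridman iteration x_{k+1} = x_k + omega * A^T (b - A x_k),
    stopped by the discrepancy principle ||A x_k - b|| <= tau * delta."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # convergence requires 0 < omega < 2 / ||A||^2
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:      # residual has reached the noise level: stop
            return x, k
        x = x + omega * (A.T @ r)
    return x, max_iter
```

The conjugate gradient and minimal error methods replace the fixed-step update omega * A^T r by Krylov-optimal steps but are stopped by the same kind of discrepancy-based criterion.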
Abstract:
The inverse problem of determining a spacewise-dependent heat source for the parabolic heat equation using the usual conditions of the direct problem and information from one supplementary temperature measurement at a given instant of time is studied. This spacewise-dependent temperature measurement ensures that this inverse problem has a unique solution, but the solution is unstable and hence the problem is ill-posed. We propose a variational conjugate gradient-type iterative algorithm for the stable reconstruction of the heat source based on a sequence of well-posed direct problems for the parabolic heat equation which are solved at each iteration step using the boundary element method. The instability is overcome by stopping the iterative procedure at the first iteration for which the discrepancy principle is satisfied. Numerical results are presented which have the input measured data perturbed by increasing amounts of random noise. The numerical results show that the proposed procedure yields stable and accurate numerical approximations after only a few iterations.
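The structure of such a conjugate gradient-type iteration with discrepancy-principle stopping can be sketched on a generic linear operator equation A x = b, where the matrix stands in for the sequence of direct problems solved by the boundary element method at each step. This is a generic CGLS (CG on the normal equations) loop, not the paper's heat-source reconstruction algorithm.

```python
import numpy as np

def cgls_discrepancy(A, b, delta, tau=1.1, max_iter=500):
    """CGLS iteration stopped by the discrepancy principle ||A x_k - b|| <= tau * delta."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                    # residual in data space
    s = A.T @ r                      # negative gradient of 0.5 * ||A x - b||^2
    p = s.copy()
    gamma = s @ s
    for k in range(max_iter):
        if np.linalg.norm(r) <= tau * delta:   # stop at the first iteration satisfying the discrepancy principle
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x, k
```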
Abstract:
The problem considered is that of determining the fluid velocity for linear hydrostatic Stokes flow of slow viscous fluids from measured velocity and fluid stress force on a part of the boundary of a bounded domain. A variational conjugate gradient iterative procedure is proposed, based on solving a series of mixed well-posed boundary value problems for the Stokes operator and its adjoint. In order to stabilize the Cauchy problem, the iterations are ceased according to an optimal order discrepancy principle stopping criterion. Numerical results obtained using the boundary element method confirm that the procedure produces a convergent and stable numerical solution.