537 results for Monotone Iterations
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range-sensor information, industrial systems for the quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points in the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature whose goal is to improve performance, either by reducing the number of points or the required iterations, or by lowering the complexity of the most expensive phase: the nearest-neighbor search. Despite decreasing its complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems among those described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm in the literature.
In that analysis, the behavior of the algorithm in diverse topological spaces, each characterized by a different metric, has been studied to check the convergence, efficacy, and cost of the method and to determine which metric offers the best results. Given that distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation can be expected to improve the overall performance of the method significantly. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and validated experimentally as comparable to that of the Euclidean distance, using a heterogeneous set of objects, scenarios, and initial configurations.
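The basic point-to-point ICP iteration described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a brute-force nearest-neighbor search (the quadratic-cost original variant) with a pluggable distance metric, here Euclidean versus the cheaper Manhattan distance as one example of a reduced-cost metric, and the standard SVD (Kabsch) solution for the rigid transform.

```python
import numpy as np

def icp_step(src, dst, metric="euclidean"):
    """One ICP iteration: match each source point to its nearest
    destination point under the chosen metric, then solve the
    optimal rigid transform (Kabsch/SVD) for the matched pairs."""
    # Brute-force nearest-neighbor search, O(n*m) as in the
    # non-optimized original variant.
    diff = src[:, None, :] - dst[None, :, :]
    if metric == "euclidean":
        d = np.sqrt((diff ** 2).sum(-1))
    elif metric == "manhattan":  # cheaper: no squares or square roots
        d = np.abs(diff).sum(-1)
    else:
        raise ValueError(metric)
    matched = dst[d.argmin(axis=1)]
    # Optimal rotation and translation for the matched pairs (Kabsch).
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t
```

Iterating `icp_step` until the alignment stops improving gives the full algorithm; swapping the metric changes only the matching phase, which is why cheaper metrics can pay off when that phase dominates the cost.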
Abstract:
This paper describes the design, tuning, and extensive field testing of an admittance-based Autonomous Loading Controller (ALC) for robotic excavation. Several iterations of the ALC were tuned and tested in fragmented rock piles—similar to those found in operating mines—using both a robotic 1-tonne capacity Kubota R520S diesel-hydraulic surface loader and a 14-tonne capacity Atlas Copco ST14 underground load-haul-dump (LHD) machine. On the R520S loader, the ALC increased payload by 18 % with greater consistency, although with more energy expended and longer dig times compared with digging at maximum actuator velocity. On the ST14 LHD, the ALC took 61 % less time to load 39 % more payload than a single manual operator. The manual operator made 28 dig attempts using three different digging strategies and had one failed dig. The tuned ALC made 26 dig attempts at target force levels of 10 and 11 MN. All 10 digs at the 11 MN level succeeded, while 6 of the 16 digs at the 10 MN level failed. The results presented in this paper suggest that the admittance-based ALC is more productive and consistent than manual operators, but that care should be taken when detecting entry into the muck pile.
Abstract:
'The Resonance of Unseen Things: Power, Poetics, Captivity and UFOs in the American Uncanny' offers an ethnographic meditation on the "uncanny" persistence and cultural freight of conspiracy theory. The project reads conspiracy theory as an index of a certain strain of late-20th-century American despondency and malaise, especially as experienced by people undergoing downward social mobility. Written by a cultural anthropologist with a literary background, this deeply interdisciplinary project focuses on the enduring American preoccupation with captivity in a rapidly transforming world. Captivity is a trope that appears here in both ordinary and fantastic iterations, and the book shows how multiple troubled histories (of race, class, gender, and power) become compressed into stories of uncanny memory.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The concept of a monotone family of functions, which need not be countable, and the solution of an equilibrium problem associated with the family are introduced. A fixed-point theorem is applied to prove the existence of solutions to the problem.
Abstract:
The diagrammatic strong-coupling perturbation theory (SCPT) for correlated electron systems is developed for intersite Coulomb interaction and a nonorthogonal basis set. The construction is based on iterations of exact closed equations for many-electron Green functions (GFs) for Hubbard operators, in terms of functional derivatives with respect to external sources. The graphs that do not contain contributions from fluctuations of the local population numbers of the ion states play a special role: a one-to-one correspondence is found between the subset of such graphs for the many-electron GFs and the complete set of Feynman graphs of weak-coupling perturbation theory (WCPT) for single-electron GFs. This fact is used to formulate the approximation of renormalized fermions (ARF), in which the many-electron quasi-particles behave analogously to normal fermions. Then, by analyzing (a) Sham's equation, which connects the self-energy and the exchange-correlation potential in density functional theory (DFT), and (b) the Galitskii and Migdal expressions for the total energy, written within WCPT and within ARF SCPT, we suggest a method to improve the description of systems with correlated electrons within the local density approximation (LDA) to DFT. The formulation, in terms of renormalized-fermion LDA (RF LDA), is obtained by introducing the spectral weights of the many-electron GFs into the definitions of the charge density, the overlap matrices, and the effective mixing and hopping matrix elements in existing electronic structure codes, whereas the weights themselves have to be found from an additional set of equations. Compared with the LDA+U and self-interaction correction (SIC) methods, RF LDA has the advantage of taking the transfer of spectral weights into account and, when formulated in terms of GFs, also allows consideration of excitations and nonzero temperature.
Going beyond the ARF SCPT, as well as RF LDA, and taking into account the fluctuations of ion population numbers would require writing completely new codes for ab initio calculations. The application of RF LDA to ab initio band-structure calculations for rare-earth metals is presented in Part II of this study (this issue). (c) 2005 Wiley Periodicals, Inc.
Abstract:
In this paper, we consider a class of parametric implicit vector equilibrium problems in Hausdorff topological vector spaces, where a mapping f and a set K are perturbed by parameters ε and λ, respectively. We establish sufficient conditions for the upper semicontinuity and lower semicontinuity of the solution set mapping S : Λ₁ × Λ₂ → 2^X for such parametric implicit vector equilibrium problems. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
We explore the implications of refinements in the mechanical description of planetary constituents on the convection modes predicted by finite-element simulations. The refinements consist of the inclusion of incremental elasticity, plasticity (yielding), and multiple simultaneous creep mechanisms, in addition to the usual visco-plastic models employed in the context of unified plate-mantle models. The main emphasis of this paper rests on the constitutive and computational formulation of the model. We apply a consistent incremental formulation of the non-linear governing equations, avoiding the computationally expensive iterations that are otherwise necessary to handle the onset of plastic yield. In connection with episodic convection simulations, we point out the strong dependency of the results on the choice of the initial temperature distribution. Our results also indicate that the inclusion of elasticity in the constitutive relationships lowers the mechanical energy associated with subduction events.
Abstract:
A new passive shim design method is presented, based on a magnetization mapping approach. Well-defined regions with similar magnetization values determine the optimal number of passive shims, as well as their shape and position. The new design method is applied in a shimming process without prior axial shim localization, which reduces the possibility of introducing new errors. The new methodology reduces both the number of iterations and the quantity of material required to shim a magnet. Only a few iterations (1-5) are required to shim a whole-body horizontal-bore magnet with a manufacturing error tolerance between 0.1 mm and 0.5 mm. One numerical example is presented.
Abstract:
In multimedia retrieval, a query is typically refined interactively towards the 'optimal' answers by exploiting user feedback. However, in existing work, the refined query is re-evaluated in each iteration. This is not only inefficient but also fails to exploit the answers that may be common between iterations. In this paper, we introduce a new approach called SaveRF (Save random accesses in Relevance Feedback) for iterative relevance feedback search. SaveRF predicts the potential candidates for the next iteration and maintains this small set for efficient sequential scan. By doing so, repeated candidate accesses can be saved, reducing the number of random accesses. In addition, an efficient scan of the overlap before the search starts tightens the search space with a smaller pruning radius. We implemented SaveRF, and our experimental study on real-life data sets shows that it can reduce the I/O cost significantly.
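The premise behind SaveRF, that the candidate sets of consecutive feedback iterations overlap heavily when the refined query moves only slightly, can be illustrated with a minimal sketch. This is not the SaveRF algorithm itself (its prediction mechanism is not detailed above); the data, queries, and radius below are hypothetical.

```python
import numpy as np

def candidates(data, query, radius):
    """One sequential scan: indices of points within the pruning
    radius of the query."""
    d = np.linalg.norm(data - query, axis=1)
    return set(np.flatnonzero(d <= radius))

# Successive refined queries tend to move only slightly, so the
# candidate sets of consecutive iterations overlap heavily; keeping
# the previous set in memory saves repeated random accesses.
rng = np.random.default_rng(1)
data = rng.random((5000, 8))           # hypothetical feature vectors
q1 = data[:50].mean(axis=0)            # query from initial feedback
q2 = q1 + 0.01                         # slightly refined query
c1, c2 = candidates(data, q1, 0.8), candidates(data, q2, 0.8)
overlap = len(c1 & c2) / len(c2)       # fraction of reusable candidates
```

The larger the overlap fraction, the more random accesses a scheme like SaveRF can avoid by serving the next iteration from the retained set.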
Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs: the KL-divergence. This second approximation uses a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution.
The resulting sparse learning algorithm is generic: for different problems we change only the likelihood. The algorithm is applied to a variety of problems, and we examine its performance both on classical regression and classification tasks and on data-assimilation and simple density estimation problems.
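The quadratic parameter growth that motivates the thesis's sparse BV-set approximation can be seen in a minimal exact-GP regression sketch: the posterior is parametrised by combinations of the kernel at the training points, so the kernel matrix (and hence the parameter count) grows with the training-set size. This is plain exact GP regression for illustration, not the KL-projection online scheme developed in the thesis; the kernel and hyperparameters are arbitrary choices.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel matrix k(a_i, b_j) for 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2, ell=1.0):
    """Exact GP regression posterior mean and latent variance.
    The n-by-n kernel matrix K is the source of the quadratic
    parameter scaling discussed above."""
    K = rbf(x_train, x_train, ell) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    k_star = rbf(x_test, x_train, ell)
    mean = k_star @ alpha
    var = 1.0 - np.einsum('ij,ji->i', k_star, np.linalg.solve(K, k_star.T))
    return mean, var
```

A sparse scheme like the BV set replaces `x_train` inside the kernel computations with a small representative subset, capping the matrix sizes regardless of how many examples are processed.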
Abstract:
We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation in the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a certain (arbitrarily high) quality.
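The Picard operator underlying the method above can be sketched in a few lines. This is a plain floating-point illustration of the successive-approximation scheme, not the validated domain-theoretic solver with its precision-growth optimisation: the integral is approximated by the cumulative trapezoid rule on a fixed grid.

```python
import numpy as np

def picard_iterations(f, y0, t, n_iter):
    """Successive Picard iterates y_{k+1}(t) = y0 + int_0^t f(s, y_k(s)) ds
    for the IVP y' = f(t, y), y(t[0]) = y0, with the integral
    approximated by the cumulative trapezoid rule on the grid t.
    Returns the list of iterates sampled on t."""
    ys = [np.full_like(t, y0, dtype=float)]
    for _ in range(n_iter):
        g = f(t, ys[-1])
        # cumulative trapezoid integral of g along t
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
        ys.append(y0 + integral)
    return ys
```

For y' = y with y(0) = 1, the iterates reproduce the partial sums of the exponential series (up to quadrature error), which is the convergence behaviour the precision analysis in the paper quantifies rigorously.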
Abstract:
The stability of internally heated inclined plane parallel shear flows is examined numerically for finite values of the Prandtl number, Pr. The transition in a vertical channel has already been studied for 0≤Pr≤100 with or without the application of an external pressure gradient, where the secondary flow takes the form of travelling waves (TWs) that are spanwise-independent (see the works of Nagata and Generalis). In this work, in contrast to work already reported (J. Heat Trans. T. ASME 124 (2002) 635-642), we examine transition where the secondary flow takes the form of longitudinal rolls (LRs), which are independent of the streamwise direction, for Pr=7 and for a specific value of the angle of inclination of the fluid layer, without the application of an external pressure gradient. We find possible bifurcation points of the secondary flow by performing a linear stability analysis that determines the neutral curve, where the basic flow, which can have two inflection points, loses stability. The linear stability of the secondary flow against three-dimensional perturbations is also examined numerically for the same value of the angle of inclination by employing Floquet theory. We identify possible bifurcation points for the tertiary flow and show that the bifurcation can be either monotone or oscillatory. © 2003 Académie des sciences. Published by Elsevier SAS. All rights reserved.