4 results for Euclidean isometry

at Universidad de Alicante


Relevance:

10.00%

Publisher:

Abstract:

From a set of gonioapparent automotive samples from different manufacturers, we selected 28 low-chroma color pairs with relatively small color differences, predominantly in lightness. These color pairs were visually assessed with a gray scale at six different viewing angles by a panel of 10 observers. Using the Standardized Residual Sum of Squares (STRESS) index, the results of our visual experiment were tested against the predictions of 12 modern color-difference formulas. Based on a weighted STRESS index accounting for the uncertainty in the visual assessments, the best prediction over our whole experiment was achieved by the AUDI2000, CAM02-SCD, CAM02-UCS and OSA-GP-Euclidean color-difference formulas, which were not statistically significantly different from one another. A two-step optimization of the original AUDI2000 color-difference formula yielded a modified AUDI2000 formula that performed both significantly better than the original formula and below the experimental inter-observer variability. Nevertheless, the proposal of a new, revised AUDI2000 color-difference formula requires additional experimental data.
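As a point of reference, below is a minimal sketch of how a STRESS value is commonly computed between a set of computed color differences and the corresponding visual differences, assuming NumPy; the input arrays delta_e and delta_v are hypothetical placeholders, not data from this experiment, and the weighting scheme mentioned in the abstract is not included.

    import numpy as np

    def stress(delta_e, delta_v):
        """STRESS index (0-100) between computed color differences delta_e
        and visual differences delta_v; lower values mean better agreement."""
        delta_e = np.asarray(delta_e, dtype=float)
        delta_v = np.asarray(delta_v, dtype=float)
        # Scaling factor that removes the arbitrary scale difference between the two sets
        f = np.sum(delta_e * delta_v) / np.sum(delta_e ** 2)
        return 100.0 * np.sqrt(np.sum((f * delta_e - delta_v) ** 2) / np.sum(delta_v ** 2))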

Relevance:

10.00%

Publisher:

Abstract:

A parallel algorithm to remove impulsive noise from digital images using heterogeneous CPU/GPU computing is proposed. The parallel denoising algorithm is based on the peer-group concept and uses a Euclidean metric. In order to determine the number of pixels to be allocated to the multi-core CPU and to the GPUs, a performance analysis using large images is presented. A comparison of the parallel implementation on multi-core CPUs, on GPUs, and on a combination of both is performed. Performance has been evaluated in terms of execution time and megapixels per second. We present several optimization strategies that are especially effective in the multi-core environment and demonstrate significant performance improvements. The main advantage of the proposed noise-removal methodology is its computational speed, which enables efficient filtering of color images in real-time applications.
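A minimal, single-threaded sketch of a peer-group test with a Euclidean metric in RGB is given below. The window size, distance threshold d, minimum peer count m and the mean-based replacement rule are illustrative assumptions, not the paper's parameters, and the CPU/GPU partitioning described in the abstract is not shown.

    import numpy as np

    def peer_group_filter(img, d=45.0, m=3):
        """Sketch of a peer-group impulsive-noise filter on an RGB image (H, W, 3).
        A pixel whose 3x3 neighborhood contains fewer than m other pixels within
        Euclidean distance d is treated as noisy and replaced by the mean of its
        peers (or of the whole neighborhood when it has no peers)."""
        h, w, _ = img.shape
        out = img.astype(float).copy()
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                win = img[y - 1:y + 2, x - 1:x + 2].reshape(-1, 3).astype(float)
                center = img[y, x].astype(float)
                dist = np.linalg.norm(win - center, axis=1)   # Euclidean metric in RGB
                peers = win[dist <= d]                        # peer group of the central pixel
                if len(peers) - 1 < m:                        # exclude the pixel itself
                    repl = peers if len(peers) > 1 else win
                    out[y, x] = repl.mean(axis=0)
        return out.astype(img.dtype)

In a heterogeneous setting, the per-pixel tests are independent, which is what makes it natural to split the image between CPU cores and GPU kernels as the abstract describes.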

Relevance:

10.00%

Publisher:

Abstract:

In this paper we deal with parameterized linear inequality systems in the n-dimensional Euclidean space whose coefficients depend continuously on an index ranging over a compact Hausdorff space. The paper is developed in two different parametric settings: one in which only the right-hand side of the linear system is perturbed, and one in which both sides of the system can be perturbed. Appealing to the background on the calmness property, and exploiting the specifics of the present linear structure, we derive different characterizations of the calmness of the feasible set mapping and provide an operative expression for the calmness modulus when confined to finite systems. In the paper, the role played by the Abadie constraint qualification in relation to calmness is clarified and illustrated by different examples. We point out that this approach has the virtue of tackling the calmness property exclusively in terms of the system’s data.
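As a rough sketch of the setting in standard notation (the symbols below are generic and not necessarily the paper's), the right-hand-side-perturbed case and the calmness property can be written as follows:

    % Feasible set mapping of the RHS-perturbed system, T a compact Hausdorff space,
    % with t -> (a_t, b_t) continuous on T
    \mathcal{F}(b) := \{\, x \in \mathbb{R}^n : a_t^{\top} x \le b_t, \ t \in T \,\}.

    % Calmness of \mathcal{F} at (\bar{b}, \bar{x}) \in \operatorname{gph}\mathcal{F}:
    % there exist \kappa \ge 0 and neighborhoods U of \bar{x} and V of \bar{b} such that
    d\bigl(x, \mathcal{F}(\bar{b})\bigr) \le \kappa \, \| b - \bar{b} \|
    \quad \text{for all } x \in \mathcal{F}(b) \cap U, \ b \in V.

    % The calmness modulus is the infimum of the constants \kappa for which this holds.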

Relevance:

10.00%

Publisher:

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range-sensor information, industrial systems for the quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm’s main problems is its high computational complexity (quadratic in the number of points in the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is to improve performance, either by reducing the number of points or the number of required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbor search. Despite decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios.

The goal of this work is to improve the algorithm’s computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with a lower computational cost than the Euclidean one, which is the de facto standard in the algorithm’s implementations in the literature. In this analysis, the behavior of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method, in order to determine which metric offers the best results. Given that the distance calculation represents a significant part of the algorithm’s overall computation, any reduction in the cost of that operation is expected to have a significant, positive effect on the performance of the method. As a result, a performance improvement has been achieved by applying those reduced-cost metrics, whose quality in terms of convergence and error has been experimentally analyzed and validated as comparable to the Euclidean distance over a heterogeneous set of objects, scenarios and initial situations.
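To make the role of the metric concrete, here is a minimal point-to-point ICP skeleton with a pluggable distance function, written as an illustrative sketch rather than the thesis's implementation: the Manhattan and Chebyshev metrics are assumed examples of lower-cost alternatives (the abstract does not name the metrics actually studied), and the brute-force nearest-neighbor search keeps the quadratic cost mentioned above.

    import numpy as np

    # Lower-cost alternatives to the Euclidean metric; illustrative assumptions only.
    METRICS = {
        "euclidean": lambda d: np.sqrt((d ** 2).sum(axis=-1)),
        "manhattan": lambda d: np.abs(d).sum(axis=-1),
        "chebyshev": lambda d: np.abs(d).max(axis=-1),
    }

    def icp(src, dst, metric="euclidean", iters=30):
        """Rigidly align src (N,3) to dst (M,3); returns rotation R and translation t."""
        dist = METRICS[metric]
        R, t = np.eye(3), np.zeros(3)
        for _ in range(iters):
            moved = src @ R.T + t
            # Brute-force nearest-neighbor search under the chosen metric (quadratic cost)
            nn = dist(moved[:, None, :] - dst[None, :, :]).argmin(axis=1)
            matched = dst[nn]
            # Closed-form rigid transform (Kabsch/SVD) between the matched sets
            mu_s, mu_d = moved.mean(axis=0), matched.mean(axis=0)
            U, _, Vt = np.linalg.svd((moved - mu_s).T @ (matched - mu_d))
            R_step = Vt.T @ U.T
            if np.linalg.det(R_step) < 0:      # avoid reflections
                Vt[-1] *= -1
                R_step = Vt.T @ U.T
            t_step = mu_d - R_step @ mu_s
            R, t = R_step @ R, R_step @ t + t_step
        return R, t

In practice the nearest-neighbor search would typically be accelerated with a spatial index such as a k-d tree; the brute-force version is kept here only to expose where the per-pair distance evaluations, and hence the choice of metric, dominate the cost.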