7 results for Interior point methods

at Universidad de Alicante


Relevance: 40.00%

Abstract:

Software for video-based multi-point frequency measuring and mapping: http://hdl.handle.net/10045/53429

Relevance: 30.00%

Abstract:

This paper deals with the stabilisation of low softening point pitch fibres obtained from petroleum pitches using HNO3 as the oxidising agent. This method presents several advantages over conventional methods: pitches with a low softening point (SP) can be used to prepare carbon fibres (CF), the stabilisation time is reduced, the CF yields are similar to those obtained with standard stabilisation methods, and the initial treatments to increase the SP, normally required when low-SP pitches are used to prepare CF, are avoided. The parent pitches were characterised by different techniques, such as diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS), elemental analysis, and solvent extraction with toluene and quinoline. The interaction between HNO3 and the pitch fibres, as well as the changes occurring during the heat treatment, were followed by DRIFTS.

Relevance: 30.00%

Abstract:

A method using iodine has been developed for the stabilisation of low softening point (SP) pitch fibres that avoids air stabilisation in the production of carbon fibres (CF). The interaction between iodine and petroleum pitches has been studied by following the changes in the hydrogen content, aromatic and aliphatic, during the heat treatment of iodine-treated pitch fibres. Two low-SP petroleum pitches were used, and the iodine-treated pitch fibres were analysed by TGA, DSC, DRIFT, XPS and SEM. The results confirm that with this novel method pitches with low SP can be used to prepare CF, with two advantages compared with conventional methods: the stabilisation time is considerably reduced, and the treatments to increase the SP, usually required when low-SP pitches are used to prepare CF, can be avoided.

Relevance: 30.00%

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range sensor information, industrial systems for quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbor search. Despite decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical analysis and validation of the convergence of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm in the literature.
In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method, in order to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to have a significant, positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying those reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance on a heterogeneous set of objects, scenarios and initial configurations.
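The point-to-point ICP variant described above, with the closest-neighbor search parameterized by the distance metric, can be sketched as follows. This is a minimal illustration, not the implementation evaluated in the work; the function names and the choice of the Manhattan (L1) distance as the example reduced-cost metric are assumptions for this sketch.

```python
import numpy as np

def nearest_neighbors(src, dst, metric="euclidean"):
    """Brute-force closest-point search: the O(N*M) phase ICP variants try to cheapen."""
    diff = src[:, None, :] - dst[None, :, :]
    if metric == "euclidean":
        d = np.linalg.norm(diff, axis=2)
    elif metric == "manhattan":          # L1: a cheaper reduced-cost candidate
        d = np.abs(diff).sum(axis=2)
    else:
        raise ValueError(metric)
    return d.argmin(axis=1)

def best_rigid_transform(src, dst):
    """Closed-form rigid alignment (Kabsch/SVD) of matched point pairs."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, metric="euclidean", iters=30):
    """Alternate matching and rigid re-alignment until the iteration budget runs out."""
    cur = src.copy()
    for _ in range(iters):
        idx = nearest_neighbors(cur, dst, metric)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```

Because only `nearest_neighbors` touches the metric, swapping in a cheaper distance changes the cost of the dominant phase without altering the transform-estimation step, which is the trade-off the analysis evaluates.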

Relevance: 30.00%

Abstract:

The use of 3D data in mobile robotics applications provides valuable information about the robot's environment. However, the sheer volume of 3D information is usually difficult to manage because the robot's storage and computing capabilities are insufficient. A data compression method is therefore necessary to store and process this information while preserving as much of it as possible. A few methods have been proposed to compress 3D information; nevertheless, no consistent public benchmark exists for comparing the results (compression level, reconstruction distance error, etc.) obtained with different methods. In this paper, we propose a dataset composed of a set of 3D point clouds with varying structure and texture variability for evaluating the results of 3D data compression methods. We also provide useful tools for comparing compression methods, using the results obtained by existing relevant compression methods as a baseline.
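The kind of comparison such a benchmark enables can be illustrated with a toy pipeline: a naive voxel-grid compressor and a symmetric nearest-neighbor (Chamfer-style) reconstruction error. Both the compressor and the error measure are generic stand-ins chosen for this sketch, not the methods or metrics used in the paper.

```python
import numpy as np

def voxel_compress(points, voxel_size):
    """Toy compressor: replace all points in each occupied voxel by their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)                      # normalize shape across numpy versions
    sums = np.zeros((len(uniq), 3))
    counts = np.zeros(len(uniq))
    np.add.at(sums, inv, points)               # unbuffered per-voxel accumulation
    np.add.at(counts, inv, 1.0)
    return sums / counts[:, None]

def chamfer_error(original, reconstructed):
    """Symmetric mean nearest-neighbor distance between two clouds."""
    d = np.linalg.norm(original[:, None, :] - reconstructed[None, :, :], axis=2)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Illustrative evaluation on a synthetic cloud.
rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, (500, 3))
compressed = voxel_compress(cloud, 0.25)
ratio = len(compressed) / len(cloud)           # compression level
error = chamfer_error(cloud, compressed)       # reconstruction distance error
```

Reporting the (ratio, error) pair per cloud is the essence of such a benchmark: different compressors trace out different trade-off curves on the same dataset.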

Relevance: 30.00%

Abstract:

Complementary programs

Relevance: 30.00%

Abstract:

The research described in this thesis was motivated by the need for a robust model capable of representing 3D data obtained with 3D sensors, which are inherently noisy. In addition, time constraints have to be considered, as these sensors are capable of providing a 3D data stream in real time. This thesis proposes the use of Self-Organizing Maps (SOMs) as a 3D representation model. In particular, we propose the use of the Growing Neural Gas (GNG) network, which has been successfully used for clustering, pattern recognition and topology representation of multi-dimensional data. Until now, Self-Organizing Maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models, without considering time constraints. A hardware implementation is proposed that leverages the computing power of modern GPUs, taking advantage of a new paradigm coined General-Purpose Computing on Graphics Processing Units (GPGPU). The proposed methods were applied to different problems and applications in the area of computer vision, such as recognition and localization of objects, visual surveillance and 3D reconstruction.
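A much-simplified, CPU-only sketch of the GNG adaptation loop (after Fritzke's original formulation) gives a flavour of how the network grows to represent a stream of 3D points. This is not the thesis's GPGPU implementation; all parameter values are illustrative, and isolated-node pruning is omitted for brevity.

```python
import numpy as np

def gng_fit(stream, max_nodes=20, eps_b=0.1, eps_n=0.005,
            max_age=25, lam=50, alpha=0.5, decay=0.995, seed=0):
    """Simplified Growing Neural Gas: learns a graph of reference vectors
    whose layout follows the distribution of the input points."""
    rng = np.random.default_rng(seed)
    W = stream[rng.choice(len(stream), 2, replace=False)].astype(float)
    edges = {(0, 1): 0}                      # edge -> age
    err = np.zeros(2)                        # accumulated error per node
    for step, x in enumerate(stream, 1):
        dist = np.linalg.norm(W - x, axis=1)
        s1, s2 = np.argsort(dist)[:2]        # winner and runner-up
        err[s1] += dist[s1] ** 2
        W[s1] += eps_b * (x - W[s1])         # pull the winner toward x
        for (a, b) in list(edges):
            if s1 in (a, b):
                n = b if a == s1 else a
                W[n] += eps_n * (x - W[n])   # pull topological neighbors
                edges[(a, b)] += 1           # age edges incident to the winner
        edges[tuple(sorted((int(s1), int(s2))))] = 0   # (re)connect winner/runner-up
        edges = {e: age for e, age in edges.items() if age <= max_age}
        if step % lam == 0 and len(W) < max_nodes:
            q = int(np.argmax(err))          # node with largest accumulated error
            nbrs = [b if a == q else a for (a, b) in edges if q in (a, b)]
            if nbrs:
                f = max(nbrs, key=lambda n: err[n])
                W = np.vstack([W, (W[q] + W[f]) / 2.0])  # new node at the midpoint
                r = len(W) - 1
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
                err[q] *= alpha
                err[f] *= alpha
                err = np.append(err, err[q])
        err *= decay
    return W, edges
```

The winner search and the per-node updates in each iteration are independent across nodes, which is what makes the method amenable to the GPU parallelization the thesis pursues.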