16 results for Point interpolation method

at Universidad de Alicante


Relevance:

40.00%

Publisher:

Abstract:

The research described in this thesis was motivated by the need for a robust model capable of representing 3D data obtained with 3D sensors, which are inherently noisy. In addition, time constraints have to be considered, as these sensors are capable of providing a 3D data stream in real time. This thesis proposes the use of Self-Organizing Maps (SOMs) as a 3D representation model. In particular, we propose the use of the Growing Neural Gas (GNG) network, which has been successfully used for clustering, pattern recognition and topology representation of multi-dimensional data. Until now, Self-Organizing Maps have been primarily computed offline, and their application to 3D data has mainly focused on noise-free models, without considering time constraints. A hardware implementation is proposed that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). The proposed methods were applied to different problems and applications in the area of computer vision, such as the recognition and localization of objects, visual surveillance and 3D reconstruction.
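The abstract above relies on the Growing Neural Gas network as the core representation model. As a rough illustration of how such a network adapts a graph of nodes to a noisy 3D point set, the following is a minimal NumPy sketch of Fritzke's original GNG update rules; it is not the GPU (GPGPU) implementation described in the thesis, node removal is omitted, and all parameter names and values (eps_b, eps_n, a_max, lam, alpha, d) are illustrative assumptions.

```python
# Minimal Growing Neural Gas (GNG) sketch, assuming Fritzke's original rules.
import numpy as np

def gng_fit(data, max_nodes=50, eps_b=0.05, eps_n=0.006,
            a_max=50, lam=100, alpha=0.5, d=0.995, n_steps=10000, seed=0):
    rng = np.random.default_rng(seed)
    # start with two nodes placed on random data points
    W = [data[rng.integers(len(data))].astype(float) for _ in range(2)]
    E = [0.0, 0.0]                       # accumulated error per node
    edges = {}                           # (i, j) with i < j -> age
    for step in range(n_steps):
        x = data[rng.integers(len(data))]
        dists = [np.sum((w - x) ** 2) for w in W]
        s1, s2 = np.argsort(dists)[:2]
        E[s1] += dists[s1]
        # move the winner and its topological neighbours towards x,
        # ageing the edges emanating from the winner
        W[s1] += eps_b * (x - W[s1])
        for (i, j) in list(edges):
            if s1 in (i, j):
                n = j if i == s1 else i
                W[n] += eps_n * (x - W[n])
                edges[(i, j)] += 1
        key = (min(s1, s2), max(s1, s2))
        edges[key] = 0                   # refresh / create winner-runner edge
        # drop old edges (node removal omitted in this sketch)
        edges = {k: a for k, a in edges.items() if a <= a_max}
        # periodically insert a node between the highest-error node q
        # and its highest-error neighbour f
        if step % lam == lam - 1 and len(W) < max_nodes:
            q = int(np.argmax(E))
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: E[n])
                W.append(0.5 * (W[q] + W[f]))
                E[q] *= alpha; E[f] *= alpha
                E.append(E[q])
                r = len(W) - 1
                edges.pop((min(q, f), max(q, f)), None)
                edges[(min(q, r), max(q, r))] = 0
                edges[(min(f, r), max(f, r))] = 0
        E = [e * d for e in E]
    return np.array(W), list(edges)

# toy usage: learn the topology of a noisy 3D point set
pts = np.random.default_rng(1).normal(size=(2000, 3))
nodes, graph = gng_fit(pts, max_nodes=30, n_steps=5000)
print(nodes.shape, len(graph))
```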

Relevance:

30.00%

Publisher:

Abstract:

This paper deals with the stabilisation of low softening point pitch fibres obtained from petroleum pitches using HNO3 as the oxidising agent. This method presents several advantages compared with conventional methods: pitches with a low softening point (SP) can be used to prepare carbon fibres (CF), the stabilisation time is reduced, the CF yields are similar to those obtained with conventional stabilisation methods, and the initial treatments to increase the SP, otherwise required when low-SP pitches are used to prepare CF, are avoided. The parent pitches were characterised by different techniques such as diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS), elemental analysis and solvent extraction with toluene and quinoline. The interaction between HNO3 and the pitch fibres, as well as the changes occurring during the heat treatment, have been followed by DRIFTS.

Relevance:

30.00%

Publisher:

Abstract:

A method using iodine has been developed for the stabilisation of low softening point (SP) pitch fibres that avoids air stabilisation in the production of carbon fibres (CF). The interaction between iodine and petroleum pitches has been studied by following the changes in the hydrogen content, aromatic or aliphatic, during the heat treatment of iodine-treated pitch fibres. Two low-SP petroleum pitches were used, and the iodine-treated pitch fibres were analysed by TGA, DSC, DRIFT, XPS and SEM. The results confirm that, using this novel method, pitches with a low SP can be used to prepare CF, with two advantages compared with conventional methods: the stabilisation time is considerably reduced, and the treatments to increase the SP, usually required when low-SP pitches are used to prepare CF, can be avoided.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a series of calculation procedures for the computer design of ternary distillation columns that avoid the iterative equilibrium calculations usually necessary in this kind of problem and thus reduce the calculation time. The proposed procedures include interpolation and intersection methods to solve the equilibrium equations and the mass and energy balances. The proposed calculation programs also allow the rigorous solution of the mass and energy balances and the equilibrium relations.
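As a rough illustration of the interpolation/intersection idea referred to above, the sketch below replaces an iterative bubble-point search with a lookup on pre-tabulated equilibrium values: the residual of the bubble-point condition sum(K_i(T)·x_i) = 1 is tabulated over a temperature grid and its zero crossing is located by linear interpolation. The Antoine-type K-value model and all constants are illustrative assumptions, not the paper's thermodynamic model or procedure.

```python
import numpy as np

T_grid = np.linspace(340.0, 400.0, 61)              # temperature grid, K

def k_values(T):
    """Illustrative ideal K-values for a ternary mixture at about 1 atm."""
    A = np.array([13.2, 12.9, 12.6])                 # Antoine-like constants (assumed)
    B = np.array([2900.0, 3000.0, 3100.0])
    p_sat = np.exp(A - B / T)                        # vapour pressures, kPa
    return p_sat / 101.325                           # K_i = p_sat_i / P

def bubble_point(x):
    """Bubble-point temperature found by interpolating the tabulated residual."""
    residual = np.array([k_values(T) @ x - 1.0 for T in T_grid])
    # the residual increases monotonically with T, so the zero crossing
    # can be located with a single linear interpolation (the "intersection")
    return np.interp(0.0, residual, T_grid)

x = np.array([0.3, 0.4, 0.3])                        # liquid composition
print(bubble_point(x))                               # about 358 K with these constants
```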

Relevance:

30.00%

Publisher:

Abstract:

Array measurements have become a valuable tool for non-invasive site response characterization. The array design, i.e. the size, geometry and number of stations, has a great influence on the quality of the results. Of these parameters, the number of available stations is usually the main limitation of field experiments, because of the economic and logistical constraints it involves. Sometimes one or more stations of an initially planned array layout, carefully designed before the fieldwork campaign, do not work properly, modifying the prearranged geometry; at other times it is not possible to set up the desired layout at all because of a lack of stations. Therefore, for a planned array layout, the number of operative stations and their arrangement in the array become a crucial point in the acquisition stage and, subsequently, in the dispersion curve estimation. In this paper we carry out an experimental study to determine the minimum number of stations that provides reliable dispersion curves for three prearranged array configurations (triangular, circular with a central station, and polygonal geometries). For the optimization study, we jointly analyze the theoretical array responses and the experimental dispersion curves obtained with the f-k method. In the case of the f-k method, we compare the dispersion curves obtained for the original, prearranged arrays with those obtained for the modified arrays, i.e. the dispersion curves obtained when a certain number of stations n is removed, each time, from the original layout of X geophones. The comparison is evaluated by means of a misfit function, which helps us determine how robust the studied geometries are to station removal and which station, or combination of stations, most degrades the array capability when unavailable. This information might be crucial for improving future array designs, determining when it is possible to optimize the number of deployed stations without losing the reliability of the results.
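The misfit function mentioned above is not defined in the abstract; the sketch below shows one plausible choice, an RMS relative difference between the phase-velocity curve of the full array and that of a reduced array sampled at the same frequencies. The functional form and the synthetic curves are assumptions for illustration only.

```python
import numpy as np

def dispersion_misfit(c_full, c_reduced):
    """RMS relative difference between two phase-velocity curves
    sampled at the same frequencies."""
    c_full = np.asarray(c_full, float)
    c_reduced = np.asarray(c_reduced, float)
    return np.sqrt(np.mean(((c_reduced - c_full) / c_full) ** 2))

# toy usage with synthetic Rayleigh-wave phase velocities
f = np.linspace(2.0, 20.0, 50)                      # frequencies, Hz
c_full = 800.0 / np.sqrt(f)                         # illustrative dispersion curve, m/s
c_reduced = c_full * (1 + 0.02 * np.random.default_rng(0).normal(size=f.size))
print(dispersion_misfit(c_full, c_reduced))
```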

Relevance:

30.00%

Publisher:

Abstract:

Rock mass characterization requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in Light Detection and Ranging (LiDAR) instrumentation allow quick and accurate 3D data acquisition, leading to the development of new methodologies for the automatic characterization of rock mass discontinuities. This paper presents a methodology for the identification and analysis of flat surfaces outcropping in a rocky slope using the 3D data obtained with LiDAR. The method identifies and defines the algebraic equations of the different planes of the rock slope surface by applying an analysis based on a coplanarity test of neighbouring points, finding the principal orientations by Kernel Density Estimation and identifying clusters with the Density-Based Scan Algorithm with Noise. Different sources of information (synthetic and 3D scanned data) were employed, and a complete sensitivity analysis of the parameters was performed in order to identify their optimal values for the proposed method. In addition, the raw source files and the obtained results are freely provided to allow a more straightforward comparison of methods and more reproducible research.
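As a rough sketch of the per-point analysis described above, the code below estimates a local normal and a coplanarity flag for each point from the eigenvalues of its neighbourhood covariance, and then clusters the coplanar points with scikit-learn's DBSCAN as a stand-in for the density-based clustering step. The neighbourhood size, flatness threshold and DBSCAN parameters are illustrative assumptions, not the values studied in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def local_normals(points, k=20, flatness_tol=0.01):
    """Return a unit normal per point and a boolean 'coplanar' flag based on
    the smallest-eigenvalue ratio of the neighbourhood covariance."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.zeros_like(points)
    coplanar = np.zeros(len(points), dtype=bool)
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        evals, evecs = np.linalg.eigh(cov)           # ascending eigenvalues
        normals[i] = evecs[:, 0]                     # normal = smallest-variance axis
        coplanar[i] = evals[0] / evals.sum() < flatness_tol
    return normals, coplanar

# toy usage: one noisy plane patch; cluster its coplanar points spatially
rng = np.random.default_rng(0)
pts = np.c_[rng.uniform(0, 1, 500), rng.uniform(0, 1, 500), 0.002 * rng.normal(size=500)]
n, flat = local_normals(pts)
labels = DBSCAN(eps=0.08, min_samples=10).fit_predict(pts[flat])
print(flat.mean(), np.unique(labels))
```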

Relevance:

30.00%

Publisher:

Abstract:

The use of 3D data in mobile robotics applications provides valuable information about the robot's environment, but the huge amount of 3D information is usually unmanageable given the robot's storage and computing capabilities. Data compression is therefore necessary to store and manage this information while preserving as much of it as possible. In this paper, we propose a 3D lossy compression system based on plane extraction, which represents the points of each scene plane as a Delaunay triangulation and a set of point/area information. The compression system can be tuned to achieve different compression or accuracy ratios. It also supports a color segmentation stage to preserve the original scene color information and provide a realistic scene reconstruction. The design of the method provides a fast scene reconstruction useful for further visualization or processing tasks.
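A minimal sketch of the per-plane compression idea follows: the inlier points of an already-extracted plane are projected onto an in-plane 2D basis, subsampled, and stored as the vertices and simplices of a Delaunay triangulation. The plane-extraction and color-segmentation stages of the proposed system are assumed to have been done elsewhere; the sampling strategy and parameters here are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def compress_plane(points, normal, keep=50, seed=0):
    """Represent a planar point patch by a triangulation of 'keep' sample points."""
    normal = normal / np.linalg.norm(normal)
    # build an orthonormal in-plane basis (u, v)
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    rng = np.random.default_rng(seed)
    sample = points[rng.choice(len(points), size=min(keep, len(points)), replace=False)]
    uv = sample @ np.c_[u, v]             # 2D coordinates within the plane
    tri = Delaunay(uv)
    return sample, tri.simplices          # vertices + triangle indices = compressed patch

# toy usage: 5000 points on a near-horizontal plane reduced to 50 vertices plus connectivity
rng = np.random.default_rng(1)
pts = np.c_[rng.uniform(0, 2, 5000), rng.uniform(0, 1, 5000), 0.001 * rng.normal(size=5000)]
verts, faces = compress_plane(pts, np.array([0.0, 0.0, 1.0]))
print(verts.shape, faces.shape)
```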

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a method to interpolate a periodic band-limited signal from samples lying at nonuniform positions of a regular grid; the method is based on the FFT and has the same complexity order as that algorithm. This kind of interpolation is usually termed "the missing samples problem" in the literature, and there exists a wide variety of iterative and direct methods for its solution. The one presented in this paper is a direct method that exploits the properties of the so-called erasure polynomial and provides a significant improvement over the most efficient method in the literature, which seems to be Marvasti's burst error recovery (BER) technique. The paper includes numerical assessments of the method's stability and complexity.
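For readers unfamiliar with the missing samples setting, the sketch below reconstructs a periodic band-limited signal from a subset of its grid samples by solving a small least-squares system in its low-frequency DFT coefficients. This is a plain baseline used only to illustrate the problem; it is not the FFT-based erasure-polynomial method of the paper and does not share its complexity.

```python
import numpy as np

def recover_missing(samples, known_idx, N, bandwidth):
    """Recover a real length-N signal whose DFT is supported on |k| <= bandwidth."""
    k = np.arange(-bandwidth, bandwidth + 1)
    # DFT synthesis matrix restricted to the known positions and the passband
    A = np.exp(2j * np.pi * np.outer(known_idx, k) / N)
    coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
    full = np.exp(2j * np.pi * np.outer(np.arange(N), k) / N) @ coeffs
    return full.real

# toy usage: drop 20 of 64 samples of a 5-harmonic signal and rebuild it
N, B = 64, 5
n = np.arange(N)
x = np.cos(2 * np.pi * 3 * n / N) + 0.5 * np.sin(2 * np.pi * 5 * n / N)
rng = np.random.default_rng(0)
known = np.sort(rng.choice(N, size=N - 20, replace=False))
x_hat = recover_missing(x[known], known, N, B)
print(np.max(np.abs(x_hat - x)))          # should be ~0 up to numerical error
```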

Relevance:

30.00%

Publisher:

Abstract:

The complete characterization of rock masses requires acquiring information on both the materials that compose the rock mass and the discontinuities that divide the outcrop. Recent advances in remote sensing techniques, such as Light Detection and Ranging (LiDAR), allow the accurate and dense acquisition of 3D information that can be used for the characterization of discontinuities. This work presents a novel methodology for calculating the normal spacing of persistent and non-persistent discontinuity sets from 3D point cloud datasets, considering the three-dimensional relationships between clusters. The approach requires that the 3D dataset has been previously classified: the discontinuity sets have been extracted, every point is labeled with its corresponding discontinuity set, and every exposed planar surface is analytically calculated. Then, for each discontinuity set, the method calculates the normal spacing between an exposed plane and its nearest one, considering their 3D spatial relationship. This link between planes is obtained by finding, for every point, its nearest point belonging to the same discontinuity set, which provides its nearest plane and hence the normal spacing for every plane. Finally, the normal spacing of each discontinuity set is calculated as the mean value of all these normal spacings. The methodology is validated through three case studies using synthetic data and 3D laser scanning datasets. The first case illustrates the fundamentals and the performance of the proposed methodology. The second and third case studies correspond to two rock slopes for which datasets were acquired using a 3D laser scanner. The second case study shows that the results obtained with the traditional and the proposed approaches are reasonably similar. Nevertheless, a discrepancy between the two approaches was found when the exposed planes belonging to a discontinuity set were hard to identify and the pairing of planes was difficult to establish during the fieldwork campaign. The third case study also shows that, when the number of identified exposed planes is high, the normal spacing calculated with the proposed approach is smaller than that obtained with the traditional approach.
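A minimal sketch of the spacing computation described above is given below, assuming the classification step has already been done: every point carries the index of its exposed plane, and each plane of a set carries a signed offset along the set's mean unit normal, so the spacing between two planes is the absolute difference of their offsets. The nearest-plane link is found through a nearest-neighbour query with SciPy's cKDTree. The data layout and parameter choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def set_normal_spacing(points, plane_id, plane_d):
    """Mean normal spacing of one discontinuity set.

    points   : (N, 3) points belonging to the set
    plane_id : (N,) index of the exposed plane each point belongs to
    plane_d  : (P,) signed offset of each plane along the set's mean unit normal
    """
    spacings = []
    for p in np.unique(plane_id):
        mask = plane_id == p
        other_pts = points[~mask]
        other_id = plane_id[~mask]
        # nearest point of the same set that lies on a *different* plane
        dist, j = cKDTree(other_pts).query(points[mask], k=1)
        nearest_plane = other_id[j[np.argmin(dist)]]
        spacings.append(abs(plane_d[p] - plane_d[nearest_plane]))
    return float(np.mean(spacings))

# toy usage: three parallel planes z = 0, 1.1 and 2.0 sampled with 30 points each
rng = np.random.default_rng(0)
pts = np.vstack([np.c_[rng.uniform(0, 5, 30), rng.uniform(0, 5, 30), np.full(30, z)]
                 for z in (0.0, 1.1, 2.0)])
ids = np.repeat([0, 1, 2], 30)
print(set_normal_spacing(pts, ids, np.array([0.0, 1.1, 2.0])))   # expected ≈ (1.1 + 0.9 + 0.9) / 3
```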

Relevance:

30.00%

Publisher:

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem of partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics to reconstruct surfaces or scenes using range sensor information, industrial systems for the quality control of manufactured objects, or even biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is to improve performance, either by reducing the number of points or required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbor search. In spite of decreasing its complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to reduce the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with a lower computational cost than the Euclidean one, which is the de facto standard in implementations of the algorithm in the literature. In this analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to significantly and positively affect the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been experimentally analyzed and validated as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
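The following is a compact point-to-point ICP sketch with a pluggable Minkowski metric for the nearest-neighbour search, in the spirit of the metric study described above: SciPy's cKDTree exposes the Minkowski order through its p parameter (p=1 Manhattan, p=2 Euclidean). The rigid transform per iteration uses the standard Kabsch/SVD solution; the iteration count and toy data are illustrative, and this is not the thesis implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, p=2, iters=30):
    """Iteratively align 'source' to 'target' and return the aligned copy."""
    src = source.astype(float)
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src, k=1, p=p)           # correspondences under metric p
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        # closed-form rigid transform (Kabsch / SVD) for the matched pairs
        H = (src - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                     # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
    return src

# toy usage: recover a small rotation plus translation using a Manhattan NN search
rng = np.random.default_rng(0)
A = rng.uniform(0, 1, (500, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
B = A @ Rz.T + np.array([0.05, -0.02, 0.03])
aligned = icp(A, B, p=1)
print(np.mean(np.linalg.norm(aligned - B, axis=1)))  # should be near zero for this small misalignment
```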

Relevance:

30.00%

Publisher:

Abstract:

Superstructure approaches are a solution to the difficult problem of the rigorous economic design of a distillation column. However, these methods require complex initialization procedures and are hard to solve, and for this reason they have not been used extensively. In this work, we present a methodology for the rigorous optimization of chemical processes implemented on a commercial simulator, using surrogate models based on kriging interpolation. Several examples were studied, but in this paper we perform the optimization of a superstructure for a non-sharp separation to show the efficiency and effectiveness of the method. It is noteworthy that sufficiently accurate surrogate models can be obtained with up to seven degrees of freedom.
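A minimal sketch of the surrogate-based idea follows: an expensive simulation is sampled at a small design of experiments, a kriging (Gaussian process) model is fitted to those samples with scikit-learn, and the cheap surrogate is optimized instead of the simulator. The expensive_simulation function is a hypothetical stand-in; it is not the commercial simulator or the superstructure model used in the work.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from scipy.optimize import minimize

def expensive_simulation(x):           # placeholder for a rigorous simulator run
    return (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2 + 0.1 * np.sin(5 * x[0])

# sample the "simulator" at a small design of experiments
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = np.array([expensive_simulation(x) for x in X])

# fit the kriging (Gaussian-process) surrogate
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True).fit(X, y)

# optimise the cheap surrogate instead of the simulator
res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0],
               x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print(res.x, expensive_simulation(res.x))
```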

Relevance:

30.00%

Publisher:

Abstract:

Complementary programs

Relevance:

30.00%

Publisher:

Abstract:

Software for video-based multi-point frequency measuring and mapping: http://hdl.handle.net/10045/53429

Relevance:

30.00%

Publisher:

Abstract:

Human behaviour recognition has been, and still remains, a challenging problem that involves different areas of computational intelligence. The automated understanding of people's activities from video sequences is an open research topic in which the computer vision and pattern recognition communities have made significant efforts. In this paper, the problem is studied from a prediction point of view. We propose a novel method able to detect behaviour early, using only a small portion of the input, in addition to its ability to predict behaviour from new inputs. Specifically, we propose a predictive method based on a simple representation of the trajectories of a person in the scene, which allows a high-level understanding of the global human behaviour. The trajectory representation is used as a descriptor of the activity of the individual, and the descriptors feed a classification stage for pattern recognition purposes. Classifiers are trained using the trajectory representation of the complete sequence, while partial sequences are processed to evaluate the early prediction capabilities for a given observation time of the scene. The experiments have been carried out using three different datasets of the CAVIAR database, considering the behaviour of an individual. Additionally, different classic classifiers have been used in the experimentation in order to evaluate the robustness of the proposal. The results confirm the high accuracy of the proposal in the early recognition of people's behaviours.
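As a rough illustration of the trajectory-descriptor idea described above, the sketch below resamples each (x, y) track to a fixed number of points, flattens it into a feature vector, trains a standard classifier on full trajectories, and then queries it with a truncated (early-observation) trajectory. The synthetic tracks, descriptor length and SVM classifier are assumptions for illustration; they are not the CAVIAR data or the descriptors used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def trajectory_descriptor(track, n_points=16):
    """Resample a variable-length (T, 2) trajectory to n_points and flatten."""
    track = np.asarray(track, float)
    t_old = np.linspace(0, 1, len(track))
    t_new = np.linspace(0, 1, n_points)
    return np.concatenate([np.interp(t_new, t_old, track[:, 0]),
                           np.interp(t_new, t_old, track[:, 1])])

rng = np.random.default_rng(0)
def make_track(label, T=60):           # two toy behaviours: walk right vs. loiter
    if label == 0:
        return np.c_[np.linspace(0, 10, T), rng.normal(0, 0.2, T)]
    return rng.normal(0, 0.5, (T, 2))

labels = rng.integers(0, 2, 200)
X = np.array([trajectory_descriptor(make_track(l)) for l in labels])
clf = SVC().fit(X, labels)

# early prediction: classify from only the first 30% of a new trajectory
partial = make_track(1)[:18]
print(clf.predict([trajectory_descriptor(partial)]))   # expected: [1]
```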

Relevance:

30.00%

Publisher:

Abstract:

Since the beginning of 3D computer vision, techniques to reduce the data to a manageable size while preserving the important aspects of the scene have been necessary. Currently, with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this is even more relevant. Many applications make use of these sensors and need a preprocessing step to downsample the data, either to reduce the processing time or to improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of downsampling techniques based on different principles. Specifically, five downsampling methods are included: a bilinear-based method, a normal-based method, a color-based method, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of the downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we can conclude that, depending on the purpose of the application, some kernels of the sampling methods can drastically improve the results. Bilinear- and GNG-based methods provide homogeneous point clouds, whereas the color-based and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid application, if a color-based sampled point cloud is used, it is possible to properly register two datasets in cases where intensity data are relevant in the model, outperforming the results obtained when only a homogeneous sampling is used.
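The sketch below contrasts two of the sampling families compared above: a homogeneous, voxel-grid style downsampling versus a normal-based one that keeps the points whose neighbourhood is least planar. The voxel size, neighbourhood size and keep ratio are illustrative assumptions, and the synthetic "roof" surface is only a toy model.

```python
import numpy as np
from scipy.spatial import cKDTree

def uniform_downsample(points, voxel=0.05):
    """Keep one point per occupied voxel (homogeneous density)."""
    keys = np.floor(points / voxel).astype(int)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

def normal_based_downsample(points, k=15, keep_ratio=0.2):
    """Keep the points whose neighbourhood is least planar (highest surface variation)."""
    _, idx = cKDTree(points).query(points, k=k)
    variation = np.empty(len(points))
    for i, nb in enumerate(idx):
        evals = np.linalg.eigvalsh(np.cov(points[nb].T))
        variation[i] = evals[0] / evals.sum()        # surface-variation measure
    keep = np.argsort(variation)[-int(keep_ratio * len(points)):]
    return points[keep]

# toy usage: a "roof" surface; the normal-based sampler concentrates points near the ridge
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 3000), rng.uniform(-1, 1, 3000)
pts = np.c_[x, y, np.abs(x)]
print(uniform_downsample(pts).shape, normal_based_downsample(pts).shape)
```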