9 results for Travel Cost Method
at Universidad de Alicante
Abstract:
In this paper we describe a hybrid algorithm for an even number of processors, based on an algorithm for two processors and the Overlapping Partition Method for tridiagonal systems. We then compare this hybrid method with Wang's partition method on a BSP computer. Finally, we compare the theoretical computational cost of both methods on a Cray T3D computer, using the cost model that the BSP model provides.
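As a point of reference for the kind of analysis the abstract refers to, the sketch below evaluates the standard BSP cost formula T = Σᵢ (wᵢ + hᵢ·g + l) over a sequence of supersteps. The superstep figures and machine parameters here are hypothetical illustrations, not values from the paper.

```python
# A minimal sketch, not taken from the paper: evaluating the standard BSP cost
# T = sum_i (w_i + h_i * g + l), where w_i is the local work of superstep i,
# h_i the maximum number of words communicated, g the per-word communication
# cost and l the synchronization latency of the machine.

def bsp_cost(supersteps, g, l):
    """supersteps: list of (w, h) pairs; g, l: BSP machine parameters."""
    return sum(w + h * g + l for (w, h) in supersteps)

# Hypothetical figures for a partitioned tridiagonal solve with p processors
# and n unknowns (the actual superstep structure of the hybrid method differs):
n, p = 10_000, 8
supersteps = [(8 * (n // p), 2), (8 * p, 2 * p)]   # local elimination + exchange
print(bsp_cost(supersteps, g=1.5, l=100.0))
```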
Abstract:
In this paper we present a study of the computational cost of the GNG3D algorithm for mesh optimization. The algorithm is implemented on the basis of a new neural-network-based method that consists of two distinct phases: an optimization phase and a reconstruction phase. The optimization phase applies an optimization algorithm based on the Growing Neural Gas model, an unsupervised incremental clustering algorithm. The primary goal of this phase is to obtain a simplified set of vertices that best approximates the original 3D object. In the reconstruction phase we use the information provided by the optimization algorithm to reconstruct the faces, thus obtaining the optimized mesh. The computational cost of both phases is calculated and illustrated with some examples.
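For readers unfamiliar with the Growing Neural Gas model the optimization phase builds on, here is a minimal sketch of its core adaptation step for a single input sample. The data structures and learning rates (eps_w, eps_n, max_age) are illustrative assumptions, not the GNG3D implementation; node insertion and error accumulation are omitted.

```python
import numpy as np

def gng_step(units, edges, ages, x, eps_w=0.05, eps_n=0.006, max_age=50):
    """One GNG adaptation step. units: (k,3) node positions;
    edges: set of (i,j) pairs with i < j; ages: dict over the same pairs."""
    d = np.linalg.norm(units - x, axis=1)
    s1, s2 = np.argsort(d)[:2]                 # the two units closest to x
    units[s1] += eps_w * (x - units[s1])       # pull the winner toward the sample
    for (a, b) in list(edges):
        if s1 in (a, b):
            ages[(a, b)] += 1                  # age every edge at the winner
            nb = b if a == s1 else a
            units[nb] += eps_n * (x - units[nb])   # drag topological neighbors
    e = (min(s1, s2), max(s1, s2))
    edges.add(e)                               # connect/refresh winner and runner-up
    ages[e] = 0
    for e in [e for e in edges if ages[e] > max_age]:
        edges.discard(e); del ages[e]          # prune stale edges
```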
Abstract:
Several recent works deal with 3D data in mobile robotics problems, e.g., mapping. The data come from many kinds of sensors (time-of-flight cameras, Kinect, or 3D lasers) that provide huge amounts of unorganized 3D data. In this paper we detail an efficient approach to building complete 3D models using a soft computing method, the Growing Neural Gas (GNG). Since neural models cope easily with noise, imprecision, uncertainty, and partial data, GNG provides better results than other approaches. The resulting GNG is then applied to a sequence. We present a comprehensive study of the GNG parameters to ensure the best result at the lowest time cost. From this GNG structure, we propose to compute planar patches, thus obtaining a fast method to estimate the movement performed by a mobile robot by means of a 3D model registration algorithm. Final 3D mapping results are also shown.
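The abstract does not spell out how the planar patches are computed from the GNG structure; a common approach, shown here only as a hedged sketch, is a least-squares plane fit via PCA/SVD over a local neighborhood of GNG nodes.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a (k,3) set of points.
    Returns the centroid and the unit normal of the fitted plane."""
    c = points.mean(axis=0)
    # The right singular vector for the smallest singular value spans the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]
```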
Abstract:
Nowadays, an increasing number of robotic applications need to act in real three-dimensional (3D) scenarios. In this paper we present a new 3D registration method, oriented to mobile robotics, that improves on previous Iterative Closest Point based solutions in both speed and accuracy. As an initial step, we apply a computationally cheap method to obtain descriptions of the planar surfaces in a 3D scene. Then, from these descriptions, we apply a force system to compute a six-degrees-of-freedom egomotion accurately and efficiently. We describe the basis of our approach and demonstrate its validity with several experiments using different kinds of 3D sensors and different real 3D environments.
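The force-system formulation itself is not detailed in the abstract. As a point of comparison only, the sketch below shows the standard SVD-based (Kabsch) solution for the six-degrees-of-freedom rigid motion between matched correspondences, a common baseline for egomotion estimation; it is not the paper's method.

```python
import numpy as np

def rigid_motion(P, Q):
    """P, Q: (n,3) arrays of matched points. Returns the rotation R and
    translation t that best align P onto Q in the least-squares sense."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that R is a proper rotation (det = +1).
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cq - R @ cp
    return R, t
```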
Abstract:
Preparation of homogeneous CNT coatings inside insulating silica capillary tubes is carried out by an innovative electrochemically assisted method in which the driving force for the deposition is the change in pH inside the confined space between the inner electrode and the capillary walls. This method represents a significant advance in the development of CNT coatings, following a simple, cost-effective methodology.
Abstract:
A large number of image processing applications work under different performance requirements and with different available resources. Recent advances in image compression focus on reducing image size and processing time, but offer no real-time solutions that provide time/quality flexibility for the resulting image, for example when transmitting the image content of web pages. In this paper we propose a method for encoding still images, based on the JPEG standard, that allows the compression/decompression time cost and image quality to be adjusted to the needs of each application and to the bandwidth conditions of the network. The real-time control is based on a collection of adjustable parameters relating both to implementation aspects and to the hardware on which the algorithm runs. The proposed encoding system is evaluated in terms of compression ratio, processing delay, and quality of the compressed image, compared with the standard method.
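The time/quality trade-off the paper controls can be illustrated, in a much simpler form, with the quality parameter of an off-the-shelf JPEG encoder. The sketch below uses Pillow rather than the paper's encoder, and the input file name is hypothetical.

```python
# A minimal illustration (using Pillow, not the paper's encoder) of trading
# image quality against encoded size and time via the JPEG quality parameter.
import io, time
from PIL import Image

def encode_at_quality(img, quality):
    buf = io.BytesIO()
    t0 = time.perf_counter()
    img.save(buf, format="JPEG", quality=quality)   # higher quality -> larger output
    return buf.getvalue(), time.perf_counter() - t0

img = Image.open("input.png").convert("RGB")        # hypothetical input file
for q in (25, 50, 75, 95):
    data, dt = encode_at_quality(img, q)
    print(f"quality={q}: {len(data)} bytes in {dt * 1000:.1f} ms")
```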
Abstract:
This study analyses the price determination of low-cost airlines in Europe and the effect that the Internet has on this strategy. The outcomes reveal that both users and companies benefit from the use of ICTs in the purchase and sale of airline tickets: the Internet allows consumers to increase their bargaining power by comparing different airlines and choosing the most competitive flight, while companies can easily track the behaviour of users and adapt their pricing strategies using internal information. More than 2,500 flights from the largest European low-cost airlines were used to carry out the study. The most significant variables for understanding pricing strategies were the number of rivals, the behaviour of demand, and the associated costs. The results indicated that consumers should buy their tickets more than 25 days before departure.
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem of partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics to reconstruct surfaces or scenes from range sensor information, industrial systems for the quality control of manufactured objects, and even biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the nearest-neighbor search. Despite decreasing its complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with a lower computational cost than the Euclidean one, which is the de facto standard in the algorithm's implementations in the literature. In this analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy, and cost of the method, and to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction of that operation is expected to have a significant, positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to that of the Euclidean distance, using a heterogeneous set of objects, scenarios, and initial configurations.
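A minimal sketch of the idea under study: swapping the Euclidean metric in ICP's nearest-neighbor search for cheaper point-to-point distances. The metrics shown (squared Euclidean, Manhattan/L1, Chebyshev/L∞) are standard examples of lower-cost distances, not necessarily the exact set analyzed in the work.

```python
import numpy as np

def nearest(points, q, metric="euclidean"):
    """Brute-force nearest neighbor of q in an (n,3) array, under a chosen metric."""
    if metric == "euclidean":
        d = np.sum((points - q) ** 2, axis=1)      # squared form: avoids the sqrt
    elif metric == "manhattan":
        d = np.sum(np.abs(points - q), axis=1)     # no multiplications at all
    elif metric == "chebyshev":
        d = np.max(np.abs(points - q), axis=1)
    else:
        raise ValueError(metric)
    return np.argmin(d)
```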
Abstract:
Since the beginning of 3D computer vision, techniques have been needed to reduce the data to a treatable size while preserving the important aspects of the scene. Currently, with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this has become even more relevant. Many applications make use of these sensors and need a preprocessing step to downsample the data, either to reduce the processing time or to improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of downsampling techniques based on different principles. Specifically, five downsampling methods are included: a bilinear-based method, a normal-based method, a color-based method, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we conclude that, depending on the purpose of the application, some of the sampling kernels can drastically improve the results. The bilinear- and GNG-based methods provide homogeneous point clouds, whereas the color- and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid registration application, if a color-based sampled point cloud is used, two datasets can be properly registered in cases where intensity data are relevant to the model, outperforming the results obtained when only a homogeneous sampling is used.
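To make the normal-based idea concrete, here is a hedged sketch in the classic normal-space-sampling style: bucket points by quantized normal direction and draw evenly from the buckets, so rare orientations keep points. The paper's actual normal-based kernel may differ from this.

```python
import numpy as np

def normal_space_sample(normals, n_out, bins=8, rng=None):
    """normals: (n,3) array of unit normals; returns indices of an n_out-point
    subset spread as evenly as possible over normal directions."""
    rng = rng or np.random.default_rng(0)
    # Quantize each unit normal (components in [-1,1]) into a coarse 3D bucket.
    q = np.clip(((normals + 1.0) / 2.0 * bins).astype(int), 0, bins - 1)
    buckets = {}
    for i, key in enumerate(map(tuple, q)):
        buckets.setdefault(key, []).append(i)
    picked = []
    while len(picked) < n_out and any(buckets.values()):
        for b in buckets.values():             # round-robin across buckets so
            if b and len(picked) < n_out:      # every orientation is represented
                picked.append(b.pop(rng.integers(len(b))))
    return np.array(picked)
```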