995 results for Iterative Closest Point
Abstract:
Research on language equations has been active in recent decades. Compared to equations on words, equations on languages are much more difficult to solve: even very simple equations that are easy to solve for words can be very hard for languages. In this thesis we study two such equations, namely the commutation and conjugacy equations. We study these equations in certain restricted special cases and compare some of the results to the solutions of the corresponding equations on words. For both equations we study the maximal solutions, the centralizer and the conjugator. We present a fixed-point method that can be used to search for these maximal solutions, and we analyze why this method does not succeed for all languages. We also give several examples to illustrate the behaviour of the method.
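To make the fixed-point idea concrete, here is a minimal sketch (not the thesis's actual algorithm) of a bounded-length refinement for the commutation equation XL = LX: start from every word up to a length cap and repeatedly delete words that break commutation with L. The length cap and all names are illustrative.

```python
# Hedged sketch: bounded-length fixed-point refinement for XL = LX.
# The over-approximation and the length cap are illustrative; the
# thesis's actual method may be organized differently.

from itertools import product

def words_up_to(alphabet, n):
    """All words over `alphabet` of length 0..n (as strings)."""
    out = [""]
    for k in range(1, n + 1):
        out += ["".join(p) for p in product(alphabet, repeat=k)]
    return set(out)

def refine_centralizer(L, alphabet, max_len):
    """Start from every word up to max_len and repeatedly delete words
    that break commutation with L.  The result approximates the
    centralizer restricted to short words; the checks are conclusive
    only when the length cap is large enough."""
    X = words_up_to(alphabet, max_len)
    changed = True
    while changed:
        changed = False
        LX = {l + x for l in L for x in X}
        XL = {x + l for x in X for l in L}
        # w survives only if wL stays inside LX and Lw stays inside XL.
        bad = {w for w in X
               if any(w + l not in LX for l in L)
               or any(l + w not in XL for l in L)}
        if bad:
            X -= bad
            changed = True
    return X

# Example: within the length bound, L = {a, aa} commutes exactly with a*.
print(sorted(refine_centralizer({"a", "aa"}, "ab", 4), key=len))
```

Since X only shrinks and is finite, the iteration always terminates; the interesting question, which the thesis addresses, is when the limit actually is the centralizer.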
Abstract:
The goal of this thesis is to implement software for creating 3D models from point clouds. Point clouds are acquired with stereo cameras, monocular systems or laser scanners. The created 3D models are triangular models or NURBS (Non-Uniform Rational B-Splines) models. Triangular models are constructed from selected areas of the point clouds, and the resulting triangular models are translated into a set of quads. The quads are further translated into an estimated grid structure and used for NURBS surface approximation. Finally, we have a set of NURBS surfaces that represent the whole model. The problem proved harder than expected: the selected triangular surface reconstruction algorithm did not deal well with noise in the point clouds. To handle this, a clustering method is introduced for simplifying the model and removing noise. As the smaller point clouds produced by clustering gave better results, we used the points in the clusters to better estimate the grids for the NURBS models. The overall results were good when the point cloud did not contain much noise: point clouds with a small amount of error produced a solid triangular model, and NURBS surface reconstruction performed well on solid models.
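The abstract does not specify the clustering method; below is a minimal sketch of one generic stand-in, voxel-grid clustering, which replaces each occupied cell with the centroid of its points, damping noise and shrinking the cloud before surface fitting.

```python
# Hedged sketch: voxel-grid clustering as one simple way to simplify a
# noisy point cloud.  This is a generic stand-in, not the thesis's method.

import numpy as np

def voxel_cluster(points, cell_size):
    """Group points into cubic cells of edge `cell_size` and replace each
    occupied cell with the centroid of its points.  Averaging inside a
    cell damps noise and reduces the point count."""
    keys = np.floor(points / cell_size).astype(np.int64)
    clusters = {}
    for key, p in zip(map(tuple, keys), points):
        clusters.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in clusters.values()])

# Example: 10k noisy points on a unit sphere reduced to cell centroids.
rng = np.random.default_rng(0)
pts = rng.normal(size=(10_000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts += rng.normal(scale=0.02, size=pts.shape)   # simulated measurement noise
print(voxel_cluster(pts, cell_size=0.2).shape)  # far fewer, smoother points
```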
Abstract:
Emission trading with greenhouse gases and green certificates are part of a climate policy whose main target is to reduce greenhouse gas emissions. This study calculates the carbon dioxide and fine particle emissions of energy production in the Helsinki metropolitan area. The analysis is made mainly from the district heating point of view, and changes to the district heating network are assessed. Expanding the district heating network would make carbon dioxide emissions somewhat higher but fine particle emissions much lower: if the network expands until 2030 at the same rate as in the past five years, carbon dioxide emissions are roughly 10% higher, while fine particle emissions decrease by about 40%. When the cost of the expansion is allocated as the reduction cost of fine particle emissions, it is considerably higher than the costs of traditional reduction methods. A possible new nuclear plant would reduce emissions considerably, and its costs would be relatively low compared with other energy production methods.
Abstract:
In this paper we address the problem of extracting representative point samples from polygonal models. The goal of such a sampling algorithm is to find points that are evenly distributed. We propose the star-discrepancy as a measure of sampling quality and introduce new sampling methods based on global line distributions. We investigate several line generation algorithms, including an efficient hardware-based sampling method. Our method contributes to the area of point-based graphics by extracting points that are more evenly distributed than those produced by current algorithms.
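As an illustration of the proposed quality measure, here is a brute-force sketch that estimates the 2D star-discrepancy of a point set. Real evaluations use faster algorithms; restricting the candidate box corners to the sample coordinates, as done below, only yields a lower-bound estimate.

```python
# Hedged sketch: lower-bound estimate of the 2D star discrepancy
# D*(P) = sup over boxes [0,x)x[0,y) of |count/N - area|.

import numpy as np

def star_discrepancy_estimate(points):
    pts = np.asarray(points)
    n = len(pts)
    # Candidate box corners: the sample coordinates themselves, plus 1.0.
    xs = np.unique(np.append(pts[:, 0], 1.0))
    ys = np.unique(np.append(pts[:, 1], 1.0))
    worst = 0.0
    for x in xs:
        for y in ys:
            # Points falling in the half-open box [0, x) x [0, y).
            inside = np.count_nonzero((pts[:, 0] < x) & (pts[:, 1] < y))
            worst = max(worst, abs(inside / n - x * y))
    return worst

rng = np.random.default_rng(1)
print(star_discrepancy_estimate(rng.random((256, 2))))  # random sampling
# An evenly distributed set (e.g. a low-discrepancy sequence) scores lower.
```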
Abstract:
We introduce a method for surface reconstruction from point sets that is able to cope with noise and outliers. First, a splat-based representation is computed from the point set: a robust local 3D RANSAC-based procedure is used to filter the point set for outliers, and then a local jet surface (a low-degree surface approximation) is fitted to the inliers. Second, we extract the reconstructed surface in the form of a surface triangle mesh through Delaunay refinement. The Delaunay refinement meshing approach requires computing intersections between line segment queries and the surface to be meshed; in the present case, intersection queries are answered from the set of splats through a 1D RANSAC procedure.
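To illustrate the local RANSAC filtering step, here is a hedged sketch that fits a plane (a degree-1 stand-in for the paper's low-degree jet) to a neighborhood and keeps the points close to it. The trial count and tolerance are illustrative.

```python
# Hedged sketch of local RANSAC outlier filtering: repeatedly fit a plane
# through 3 random points and keep the fit with the most inliers.

import numpy as np

def ransac_plane_inliers(nbhd, n_trials=100, tol=0.01, rng=None):
    """Return a boolean inlier mask for a (k, 3) neighborhood array."""
    rng = rng or np.random.default_rng()
    best = np.zeros(len(nbhd), dtype=bool)
    for _ in range(n_trials):
        p0, p1, p2 = nbhd[rng.choice(len(nbhd), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((nbhd - p0) @ normal)   # point-plane distances
        mask = dist < tol
        if mask.sum() > best.sum():
            best = mask
    return best

# Example: a flat patch plus a few gross outliers.
rng = np.random.default_rng(2)
patch = np.c_[rng.random((50, 2)), rng.normal(scale=0.002, size=50)]
outliers = rng.random((5, 3))
cloud = np.vstack([patch, outliers])
print(ransac_plane_inliers(cloud, rng=rng).sum())   # ~50 inliers expected
```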
Abstract:
A simple cloud point extraction procedure is presented for the preconcentration of copper in various samples. After complexation with 4-hydroxy-2-mercapto-6-propylpyrimidine (PTU), copper ions are quantitatively extracted into the Triton X-114-rich phase after centrifugation. Methanol acidified with 0.5 mol L-1 HNO3 was added to the surfactant-rich phase prior to its analysis by flame atomic absorption spectrometry (FAAS). Analytical parameters, including the concentrations of PTU, Triton X-114 and HNO3, bath temperature, and centrifugation rate and time, were optimized. The influence of matrix ions on the recovery of copper ions was investigated. A detection limit (3SDb/m, n = 4) of 1.6 ng mL-1 and an enrichment factor of 30 for Cu were achieved. The proposed procedure was applied to the analysis of environmental samples.
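For readers unfamiliar with the "3SDb/m" notation, this is the standard detection-limit convention, spelled out here as background rather than as extra detail from the paper:

```latex
% Standard limit-of-detection convention implied by the abstract's "3SDb/m":
\[
  \mathrm{LOD} = \frac{3\,\mathrm{SD}_b}{m}
\]
% where SD_b is the standard deviation of the n = 4 blank measurements and
% m is the slope of the FAAS calibration curve.
```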
Abstract:
This work studies the properties of commercial short-range radio technologies and their suitability for the wireless control of a metal lathe. Based on these properties, the radio technology best suited to the application is selected.
Abstract:
This work compares commercial short-range radio technologies. Based on the comparison, the radio technology best suited to the target application is selected.
Abstract:
In wireless communications the transmitted signals may be corrupted by noise. The receiver must decode the received message, which can be modelled mathematically as a search for the lattice point closest to a given vector. This problem is NP-hard in general, but for communications applications there exist algorithms that, for a certain range of system parameters, offer polynomial expected complexity. The purpose of the thesis is to study the sphere decoding algorithm introduced in the article "On Maximum-Likelihood Detection and the Search for the Closest Lattice Point", published by M. O. Damen, H. El Gamal and G. Caire in 2003. We concentrate especially on its computational complexity when used in space–time coding. Computer simulations are used to study how different system parameters affect the computational complexity of the algorithm. The aim is to find ways to improve the algorithm from the complexity point of view. The main contribution of the thesis is the construction of two new modifications of the sphere decoding algorithm, which are shown to perform faster than the original algorithm within a range of system parameters.
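As background on the algorithm studied, the following is a minimal sketch of a depth-first sphere decoder with radius shrinking (Schnorr-Euchner style enumeration). The symbol alphabet and dimensions are illustrative, and the thesis's two modifications are not reproduced here.

```python
# Hedged sketch: depth-first sphere decoding for y = Hx + n.  After a QR
# decomposition the problem becomes min ||z - Rx||^2 with R upper
# triangular, so symbols can be fixed one at a time from the last to the
# first, pruning branches whose partial distance exceeds the best found.

import numpy as np

def sphere_decode(H, y, alphabet):
    """Return argmin_x ||y - Hx||^2 over x with entries in `alphabet`."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    best = {"dist": np.inf, "x": None}

    def search(level, x, partial):
        if partial >= best["dist"]:      # prune: outside current sphere
            return
        if level < 0:                    # full vector found: shrink radius
            best["dist"], best["x"] = partial, x.copy()
            return
        # Remove interference from the already-fixed symbols x[level+1:].
        r = z[level] - R[level, level + 1:] @ x[level + 1:]
        # Try candidates nearest to r / R[level, level] first.
        for s in sorted(alphabet, key=lambda s: abs(r - R[level, level] * s)):
            x[level] = s
            search(level - 1, x, partial + (r - R[level, level] * s) ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best["x"]

# Example: 4x4 real channel, BPSK symbols, mild noise.
rng = np.random.default_rng(3)
H = rng.normal(size=(4, 4))
x_true = rng.choice([-1.0, 1.0], size=4)
y = H @ x_true + rng.normal(scale=0.1, size=4)
print(sphere_decode(H, y, [-1.0, 1.0]), x_true)
```

Visiting the nearest candidate first tends to find a good full solution early, which shrinks the sphere radius and is what makes the expected complexity polynomial in favourable parameter ranges.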
Abstract:
A simple, sensitive and selective cloud point extraction procedure is described for the preconcentration and atomic absorption spectrometric determination of Zn2+ and Cd2+ ions in water and biological samples, after complexation with 3,3',3'',3'''-tetraindolyl(terephthaloyl)dimethane (TTDM) in basic medium, using Triton X-114 as the nonionic surfactant. Detection limits of 3.0 and 2.0 µg L-1 and quantification limits of 10.0 and 7.0 µg L-1 were obtained for Zn2+ and Cd2+ ions, respectively. The relative standard deviations were 2.9 and 3.3, and the enrichment factors 23.9 and 25.6, for Zn2+ and Cd2+ ions, respectively. The method enabled the determination of low levels of Zn2+ and Cd2+ ions in urine, blood serum and water samples.